Discussion:
GPT Table broken on a Raid1
Günther J. Niederwimmer
2012-09-19 11:03:44 UTC
Hello,

Can anyone tell me why the GPT table gets broken by a Linux installation?

I have a dual-boot system; the GPT table was created with the Windows "diskpart" tool. This is a (U)EFI installation.

With Linux kernel 3.5.x I always get these messages in the log:

Sep 17 09:54:27 techz kernel: [    1.204681] GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 17 09:54:27 techz kernel: [    1.204685] GPT:625137663 != 625142447
Sep 17 09:54:27 techz kernel: [    1.204687] GPT:Alternate GPT header not at the end of the disk.
Sep 17 09:54:27 techz kernel: [    1.204689] GPT:625137663 != 625142447
Sep 17 09:54:27 techz kernel: [    1.204691] GPT: Use GNU Parted to correct GPT errors.
Sep 17 09:54:27 techz kernel: [    1.204710]  sdb: sdb1 sdb2 sdb3 sdb4 sdb5 sdb6
Sep 17 09:54:27 techz kernel: [    1.205613] sd 1:0:0:0: [sdb] Attached SCSI disk
Sep 17 09:54:27 techz kernel: [    1.212374] GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 17 09:54:27 techz kernel: [    1.212377] GPT:625137663 != 625142447
Sep 17 09:54:27 techz kernel: [    1.212379] GPT:Alternate GPT header not at the end of the disk.
Sep 17 09:54:27 techz kernel: [    1.212381] GPT:625137663 != 625142447
Sep 17 09:54:27 techz kernel: [    1.212382] GPT: Use GNU Parted to correct GPT errors.
Sep 17 09:54:27 techz kernel: [    1.212404]  sda: sda1 sda2 sda3 sda4 sda5 sda6


When I try to repair this with parted, the system no longer boots and the RAID1 is destroyed.

--
mit freundlichen Grüßen / best regards.

Günther J. Niederwimmer
Chris Murphy
2012-09-20 02:39:55 UTC
Post by Günther J. Niederwimmer
Hello,

Can anyone tell me why the GPT table gets broken by a Linux installation?
What format superblock? I will guess it's 1.0, which stores the superblock at the end of the disk and may be stepping on the secondary GPT header. And then when you repair the GPT, you're wiping out part of the md superblock, so it breaks your RAID.
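A quick way to check the metadata version on a native md setup, assuming for the sake of example that the array is /dev/md0 with /dev/sda1 as a member (adjust to your devices):

mdadm -D /dev/md0 | grep -i version     # version of the assembled array
mdadm -E /dev/sda1 | grep -i version    # version stored on an individual member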

It's confusing that the GPT table is created with diskpart, instead of parted.

Chris Murphy

Günther J. Niederwimmer
2012-09-20 11:05:54 UTC
Hello,
Post by Chris Murphy
Post by Günther J. Niederwimmer
Hello,

Can anyone tell me why the GPT table gets broken by a Linux installation?
What format superblock? I will guess it's 1.0, which stores the superblock at the end of the disk and may be stepping on the secondary GPT header. And then when you repair the GPT, you're wiping out part of the md superblock, so it breaks your RAID.

It's confusing that the GPT table is created with diskpart, instead of parted.
This is "normal": I install Windows 7 first ;). But I also get the same wrong-GPT-table message on a RAID10 with "Intel Matrix Storage" and a dmraid installation.

The question is: does parted work correctly with a GPT table on a RAID?

--
mit freundlichen Grüßen / best regards.

Günther J. Niederwimmer
Chris Murphy
2012-09-20 17:34:12 UTC
Post by Günther J. Niederwimmer
This is "normal": I install Windows 7 first ;). But I also get the same wrong-GPT-table message on a RAID10 with "Intel Matrix Storage" and a dmraid installation.
OK, you didn't answer my question about what mdadm metadata version your md RAID is using. If it's 1.0 instead of 1.2, that could be the culprit.
Post by Günther J. Niederwimmer
The question is: does parted work correctly with a GPT table on a RAID?
Could be a bug in parted 2.3, which is old. Try using parted 3.0.

Another question is, do you really think it's a good idea to be mixing different RAID implementations on the same physical devices, which also appear to contain boot volumes?

I think one of your RAID implementations is stepping on the GPT secondary header. I've seen this before in a BIOS "fakeraid" implementation. So I would look at all of your RAID implementations and make sure they're using the latest versions.
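For example, just to see what is installed (read-only, nothing is modified):

mdadm --version
parted --version
dmraid --version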

Chris Murphy

Günther J. Niederwimmer
2012-09-21 11:42:21 UTC
Hello Chris,
Post by Günther J. Niederwimmer
This is "normal": I install Windows 7 first ;). But I also get the same wrong-GPT-table message on a RAID10 with "Intel Matrix Storage" and a dmraid installation.
Post by Chris Murphy
OK, you didn't answer my question about what mdadm metadata version your md RAID is using. If it's 1.0 instead of 1.2, that could be the culprit.
Post by Günther J. Niederwimmer
The question is: does parted work correctly with a GPT table on a RAID?
Thank you for the answer.

Excuse me ;).

OK.

I have
mdadm-3.2.5-3.7.1
parted-2.4-24.2.2

on the computer.
Post by Chris Murphy
Could be a bug in parted 2.3, which is old. Try using parted 3.0.
I will search ;)
Post by Chris Murphy
Another question is, do you really think it's a good idea to be mixing different RAID implementations on the same physical devices, which also appear to contain boot volumes?
My setup is: Win7 on the RAID1 (GPT, EFI). The RAID1 has partitions, five for NTFS and one ext4 for the /home directory.

My Linux is on an SSD; the SSD is also GPT/EFI formatted.
Post by Chris Murphy
I think one of your RAID implementations is stepping on the GPT secondary header. I've seen this before in a BIOS "fakeraid" implementation. So I would look at all of your RAID implementations and make sure they're using the latest versions.
Yes, it is an Intel Matrix Storage system; the board is a DX58SO2 with the latest BIOS & firmware for the "fakeraid" ;)

My question is: Windows can work with no problem on the fakeraid and a GPT table. Could it be that the kernel and/or parted have a problem with the GPT table on a RAID1 or RAID10 fakeraid?

I have this problem on all three of my (Intel) systems, including my small 19" Intel server.

--
mit freundlichen Grüßen / best regards.

Günther J. Niederwimmer
Chris Murphy
2012-09-21 19:35:24 UTC
Post by Günther J. Niederwimmer
I have
mdadm-3.2.5-3.7.1
If you're making the RAID with that, it defaults to metadata version 1.2. But to be sure:
mdadm -E /dev/mdX
Post by Günther J. Niederwimmer
parted-2.4-24.2.2

on the computer.

Post by Chris Murphy
Could be a bug in parted 2.3, which is old. Try using parted 3.0.
Post by Günther J. Niederwimmer
I will search ;)
Or track down gdisk (a.k.a. GPT fdisk), which I prefer to parted anyway.
Post by Günther J. Niederwimmer
My question is: Windows can work with no problem on the fakeraid and a GPT table. Could it be that the kernel and/or parted have a problem with the GPT table on a RAID1 or RAID10 fakeraid?
Just because Windows works with fakeraid doesn't mean it's a Linux kernel problem. It's clearly not finding the secondary partition header where the primary one says it should be. So something is moving it, or stepping on it.

What Linux distribution and version are you using? The fact you've got an older parted makes me wonder if the distribution predates the fix for dmraid and GPT, which is to use kpartx for activating partitions.

https://bugs.launchpad.net/ubuntu/+bug/777056
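For reference, the kpartx approach mentioned in that bug activates the partitions of an already-assembled dmraid device roughly like this (the device-mapper name is only an example):

kpartx -a /dev/mapper/isw_bfjjadcdeg_Volume0   # creates isw_..._Volume0p1, p2, ... under /dev/mapper
ls -l /dev/mapper/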

Another possibility is that Windows 7 is moving it, and is perfectly OK with the move, but then that breaks everything else.
http://ubuntuforums.org/showpost.php?p=10449114&postcount=12



Chris

Chris Murphy
2012-09-21 22:43:52 UTC
Post by Chris Murphy
If you're making the RAID with that, it defaults to metadata version 1.2. But to be sure:
mdadm -E /dev/mdX
Scratch that. I was confused. Try these instead:

mdadm --detail-platform
mdadm -D /dev/md/imsm
mdadm -E /dev/sdX

Anyway, I'm suspicious that you've got either your SATA controller also with RAID enabled, or you're also using dmraid and it's conflicting with the md driver. Or you've misconfigured mdadm for imsm. The result is the secondary GPT is getting squished. I think you should read this document, as it proposes creating a container first, then RAID within that. If you're creating the RAID entirely from within Windows this may not be what it does.

http://download.intel.com/design/intarch/PAPERS/326024.pdf
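As a sketch, the container-then-volume sequence described there looks roughly like this with mdadm (device and volume names are only examples, and this of course assumes disks you are willing to overwrite):

mdadm -C /dev/md/imsm0 /dev/sd[ab] -n 2 -e imsm      # create the IMSM container on the raw disks
mdadm -C /dev/md/Volume0 /dev/md/imsm0 -n 2 -l 1     # create a RAID1 volume inside the container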


Chris Murphy

Günther J. Niederwimmer
2012-09-22 08:37:19 UTC
Hello Chris,
Post by Chris Murphy
If you're making the RAID with that, it defaults to metadata version 1.2. But to be sure: mdadm -E /dev/mdX

This is the output from an openSUSE 12.2 (DX58SO2):
mdadm --detail-platform
Platform : Intel(R) Matrix Storage Manager
Version : 11.0.0.1339
RAID Levels : raid0 raid1 raid10 raid5
Chunk Sizes : 4k 8k 16k 32k 64k 128k
2TB volumes : supported
2TB disks : supported
Max Disks : 6
Max Volumes : 2 per array, 4 per controller
I/O Controller : /sys/devices/pci0000:00/0000:00:1f.2 (SATA)
mdadm -D /dev/md/imsm
On my system I have a /dev/md/imsm0:

/dev/md/imsm0:
Version : imsm
Raid Level : container
Total Devices : 2

Working Devices : 2


UUID : 363f146f:e7f29dc8:f05996c3:577ead6a
Member Arrays : /dev/md/Volume0

Number Major Minor RaidDevice

0 8 0 - /dev/sda
1 8 16 - /dev/sdb
mdadm -E /dev/sdX
/dev/sda:
Magic : Intel Raid ISM Cfg Sig.
Version : 1.1.00
Orig Family : e3958f4b
Family : e3958f4b
Generation : 00013417
Attributes : All supported
UUID : 363f146f:e7f29dc8:f05996c3:577ead6a
Checksum : 3e9e527c correct
MPB Sectors : 1
Disks : 2
RAID Devices : 1

Disk00 Serial : 6QF4WDE3
State : active
Id : 00000000
Usable Size : 625137928 (298.09 GiB 320.07 GB)

[Volume0]:
UUID : ec120401:b6ed52e6:3814fac4:48fcf4fc
RAID Level : 1
Members : 2
Slots : [UU]
Failed disk : none
This Slot : 0
Array Size : 625137664 (298.09 GiB 320.07 GB)
Per Dev Size : 625137928 (298.09 GiB 320.07 GB)
Sector Offset : 0
Num Stripes : 2441944
Chunk Size : 64 KiB
Reserved : 0
Migrate State : idle
Map State : normal
Dirty State : dirty

Disk01 Serial : 6QF4WF5Z
State : active
Id : 00000001
Usable Size : 625137928 (298.09 GiB 320.07 GB)


/dev/sdb:
Magic : Intel Raid ISM Cfg Sig.
Version : 1.1.00
Orig Family : e3958f4b
Family : e3958f4b
Generation : 0001342f
Attributes : All supported
UUID : 363f146f:e7f29dc8:f05996c3:577ead6a
Checksum : 3e9e5294 correct
MPB Sectors : 1
Disks : 2
RAID Devices : 1

Disk01 Serial : 6QF4WF5Z
State : active
Id : 00000001
Usable Size : 625137928 (298.09 GiB 320.07 GB)

[Volume0]:
UUID : ec120401:b6ed52e6:3814fac4:48fcf4fc
RAID Level : 1
Members : 2
Slots : [UU]
Failed disk : none
This Slot : 1
Array Size : 625137664 (298.09 GiB 320.07 GB)
Per Dev Size : 625137928 (298.09 GiB 320.07 GB)
Sector Offset : 0
Num Stripes : 2441944
Chunk Size : 64 KiB
Reserved : 0
Migrate State : idle
Map State : normal
Dirty State : dirty

Disk00 Serial : 6QF4WDE3
State : active
Id : 00000000
Usable Size : 625137928 (298.09 GiB 320.07 GB)

Post by Chris Murphy
Anyway, I'm suspicious that you've got either your SATA controller also with RAID enabled, or you're also using dmraid and it's conflicting with the md driver. Or you've misconfigured mdadm for imsm. The result is the secondary GPT is getting squished. I think you should read this document, as it proposes creating a container first, then RAID within that. If you're creating the RAID entirely from within Windows this may not be what it does.

http://download.intel.com/design/intarch/PAPERS/326024.pdf
I am working on reading all of this.

I installed with YaST2, and I hope it is only mdadm and not everything together ;).

But I think I read in the changelog for parted 3.1 that a RAID(1) GPT error bug was corrected?

I hope I can build a working parted 3.1 package that works with openSUSE 12.2. (I am not a programmer.)

Thanks for the hint to Fedora; I had not found the gdisk tool before.
--
mit freundlichen Grüßen / best regards.

Günther J. Niederwimmer
Chris Murphy
2012-09-22 18:30:28 UTC
Post by Günther J. Niederwimmer
But I think I read in the changelog for parted 3.1 that a RAID(1) GPT error bug was corrected?
I don't think this is related to parted, honestly. Did you partition sda and sdb before you created the RAID container? I think the problem is that the IMSM metadata has stepped on the GPT secondary partition at the end of sda and sdb; that's what your original post indicates.

I could be wrong, but it seems like the problem is that sda and sdb shouldn't even have a GPT in the first place; they should just be treated as raw physical devices for Intel RAID to take over, and you'd only create a GPT for the virtual device (the array) within. If correct, the way to get rid of the kernel GPT error messages would be to remove the unnecessary GPT structures from sda and sdb, leaving only the IMSM metadata intact.
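A non-destructive way to see exactly which on-disk signatures are present on the raw members first (wipefs with no options only lists, it does not erase anything):

wipefs /dev/sda
wipefs /dev/sdb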


Chris Murphy

Günther J. Niederwimmer
2012-09-23 06:47:01 UTC
Hello,
Post by Chris Murphy
Post by Günther J. Niederwimmer
But I think I read in the changelog for parted 3.1 that a RAID(1) GPT error bug was corrected?
I don't think this is related to parted, honestly. Did you partition sda and sdb before you created the RAID container? I think the problem is that the IMSM metadata has stepped on the GPT secondary partition at the end of sda and sdb; that's what your original post indicates.

I could be wrong, but it seems like the problem is that sda and sdb shouldn't even have a GPT in the first place; they should just be treated as raw physical devices for Intel RAID to take over, and you'd only create a GPT for the virtual device (the array) within. If correct, the way to get rid of the kernel GPT error messages would be to remove the unnecessary GPT structures from sda and sdb, leaving only the IMSM metadata intact.
These are my steps to install a computer with dual boot, but I can also install Linux alone with the same result.

I create the arrays with the on-board BIOS, then I run a tool, usually "diskpart" (for dual boot), to create a GPT table.

The install method is EFI.

Afterward I install Win7.

The next step is to install a Linux (usually openSUSE).

Or:

Install Linux on a BIOS-created array (RAID1 or RAID10) for a server.

Create a GPT table.

The results are the same.

OK, the system is running, but never start parted or another tool and answer "yes" to anything, because then you have a destroyed system :(.


--
mit freundlichen Grüßen / best regards.

Günther J. Niederwimmer
Chris Murphy
2012-09-23 07:17:43 UTC
Post by Günther J. Niederwimmer
OK, the system is running, but never start parted or another tool and answer "yes" to anything, because then you have a destroyed system :(
This will not destroy anything:

parted -l

or

gdisk -l /dev/sda
gdisk -l /dev/sdb

I think those disks have latent GPTs on them and that's the source of the (likely ignorable) problem.


Chris Murphy

Günther J. Niederwimmer
2012-09-24 07:28:10 UTC
Hello Chris,
Post by Chris Murphy
Post by Günther J. Niederwimmer
OK, the system is running, but never start parted or another tool and answer "yes" to anything, because then you have a destroyed system :(
This will not destroy anything:

parted -l
Error: The backup GPT table is not at the end of the disk, as it should be.
This might mean that another operating system believes the disk is smaller.
Fix, by moving the backup to the end (and removing the old backup)?
Fix/Ignore/Cancel?

Now I know: do NOT Fix ;).

Error: The backup GPT table is not at the end of the disk, as it should be.
This might mean that another operating system believes the disk is smaller.
Fix, by moving the backup to the end (and removing the old backup)?
Fix/Ignore/Cancel? i

Warning: Not all of the space available to /dev/sda appears to be used, you can
fix the GPT to use all of the space (an extra 4784 blocks) or continue with the
current setting?
Fix/Ignore? i

Model: ATA ST3320620AS (scsi)
Disk /dev/sda: 320GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt_sync_mbr

Number  Start   End    Size    File system  Name                          Flags
 1      1049kB  106MB  105MB   fat32        EFI system partition          boot
 2      106MB   240MB  134MB                Microsoft reserved partition  msftres
 3      240MB   105GB  105GB   ntfs         Basic data partition
 4      105GB   190GB  84,9GB  ntfs         Basic data partition
 5      190GB   212GB  22,0GB  ntfs         Basic data partition
 6      212GB   268GB  55,6GB  ext4         Basic data partition


Error: The backup GPT table is not at the end of the disk, as it should be.
This might mean that another operating system believes the disk is smaller.
Fix, by moving the backup to the end (and removing the old backup)?
Fix/Ignore/Cancel? I

Model: ATA ST3320620AS (scsi)
Disk /dev/sdb: 320GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt_sync_mbr

Number  Start   End    Size    File system  Name                          Flags
 1      1049kB  106MB  105MB   fat32        EFI system partition          boot
 2      106MB   240MB  134MB                Microsoft reserved partition  msftres
 3      240MB   105GB  105GB   ntfs         Basic data partition
 4      105GB   190GB  84,9GB  ntfs         Basic data partition
 5      190GB   212GB  22,0GB  ntfs         Basic data partition
 6      212GB   268GB  55,6GB  ext4         Basic data partition


Model: ATA ST3320613AS (scsi)
Disk /dev/sdc: 320GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos

Number Start End Size Type File system Flags
 1      1049kB  320GB   320GB   primary  ext4            type=83


Model: ATA ST3320613AS (scsi)
Disk /dev/sdd: 320GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos

Number Start End Size Type File system Flags
 1      1049kB  4392MB  4391MB  primary  linux-swap(v1)  type=82
 2      4392MB  320GB   316GB   primary  ext4            type=83


Model: ATA OCZ-VERTEX4 (scsi)
Disk /dev/sde: 256GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number Start End Size File system Name Flags
1 1049kB 181MB 180MB fat32 primary boot
2 181MB 116GB 116GB primary


Model: Linux Software RAID Array (md)
Disk /dev/md126: 320GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt_sync_mbr

Number  Start   End    Size    File system  Name                          Flags
 1      1049kB  106MB  105MB   fat32        EFI system partition          boot
 2      106MB   240MB  134MB                Microsoft reserved partition  msftres
 3      240MB   105GB  105GB   ntfs         Basic data partition
 4      105GB   190GB  84,9GB  ntfs         Basic data partition
 5      190GB   212GB  22,0GB  ntfs         Basic data partition
 6      212GB   268GB  55,6GB  ext4         Basic data partition
Post by Chris Murphy
or

gdisk -l /dev/sda
GPT fdisk (gdisk) version 0.8.5

Partition table scan:
MBR: protective
BSD: not present
APM: not present
GPT: present

Found valid GPT with protective MBR; using GPT.
Disk /dev/sda: 625142448 sectors, 298.1 GiB
Logical sector size: 512 bytes
Disk identifier (GUID): AD126626-066E-448E-ABA2-1ACFDCBA8326
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 625137630
Partitions will be aligned on 2048-sector boundaries
Total free space is 102406077 sectors (48.8 GiB)

Number  Start (sector)  End (sector)  Size       Code  Name
   1              2048        206847  100.0 MiB  EF00  EFI system partition
   2            206848        468991  128.0 MiB  0C01  Microsoft reserved part
   3            468992     205297663  97.7 GiB   0700  Basic data partition
   4         205297664     371181567  79.1 GiB   0700  Basic data partition
   5         371181568     414189567  20.5 GiB   0700  Basic data partition
   6         414189568     522733567  51.8 GiB   0700  Basic data partition
Post by Chris Murphy
gdisk -l /dev/sdb
GPT fdisk (gdisk) version 0.8.5

Partition table scan:
MBR: protective
BSD: not present
APM: not present
GPT: present

Found valid GPT with protective MBR; using GPT.
Disk /dev/sdb: 625142448 sectors, 298.1 GiB
Logical sector size: 512 bytes
Disk identifier (GUID): AD126626-066E-448E-ABA2-1ACFDCBA8326
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 625137630
Partitions will be aligned on 2048-sector boundaries
Total free space is 102406077 sectors (48.8 GiB)

Number  Start (sector)  End (sector)  Size       Code  Name
   1              2048        206847  100.0 MiB  EF00  EFI system partition
   2            206848        468991  128.0 MiB  0C01  Microsoft reserved part
   3            468992     205297663  97.7 GiB   0700  Basic data partition
   4         205297664     371181567  79.1 GiB   0700  Basic data partition
   5         371181568     414189567  20.5 GiB   0700  Basic data partition
   6         414189568     522733567  51.8 GiB   0700  Basic data partition
Post by Chris Murphy
gdisk -l /dev/md126
GPT fdisk (gdisk) version 0.8.5

Partition table scan:
MBR: protective
BSD: not present
APM: not present
GPT: present

Found valid GPT with protective MBR; using GPT.
Disk /dev/md126: 625137664 sectors, 298.1 GiB
Logical sector size: 512 bytes
Disk identifier (GUID): AD126626-066E-448E-ABA2-1ACFDCBA8326
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 625137630
Partitions will be aligned on 2048-sector boundaries
Total free space is 102406077 sectors (48.8 GiB)

Number  Start (sector)  End (sector)  Size       Code  Name
   1              2048        206847  100.0 MiB  EF00  EFI system partition
   2            206848        468991  128.0 MiB  0C01  Microsoft reserved part
   3            468992     205297663  97.7 GiB   0700  Basic data partition
   4         205297664     371181567  79.1 GiB   0700  Basic data partition
   5         371181568     414189567  20.5 GiB   0700  Basic data partition
   6         414189568     522733567  51.8 GiB   0700  Basic data partition
Post by Chris Murphy
I think those disks have latent GPTs on them and that's the source of the (likely ignorable) problem.
;)

I mean this is a little too high-level for me :(.


But I have one more question.

Is this also the same problem with the GPT table?

I made a test installation before, with dmraid for the RAID1: the /home directory on the RAID1, everything else on the SSD.

After the installation, most of the time the system has a problem mounting /home or another partition on the RAID1 and drops into repair mode.

But the good thing about that setup: GRUB2 found my Windows 7 installation on the RAID1 and made a boot entry. With mdadm this is not working; GRUB2 doesn't find Windows?

--
mit freundlichen Grüßen / best regards.

Günther J. Niederwimmer
Chris Murphy
2012-09-24 17:21:35 UTC
Post by Günther J. Niederwimmer
Model: ATA ST3320620AS (scsi)
Disk /dev/sda: 320GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt_sync_mbr
Interesting. Parted thinks this disk has a hybrid MBR. Günther, what is the result from:

fdisk -l /dev/sda

?

And one more, I'd like to see the result from:
mount

I want to make sure nothing is directly mounting sda or sdb.
Post by Günther J. Niederwimmer
But the good thing about that setup: GRUB2 found my Windows 7 installation on the RAID1 and made a boot entry. With mdadm this is not working; GRUB2 doesn't find Windows?
Let's do one thing at a time. We don't even really have the basics covered yet. I am not able to reproduce your problem where the kernel and parted complain about the alternate GPT header not being at the end of the disk. I'm using kernel 3.5.3-1.fc17 and mdadm 3.2.5.


Chris Murphy

Günther J. Niederwimmer
2012-09-24 19:06:51 UTC
Hello Chris,
Post by Chris Murphy
Post by Günther J. Niederwimmer
Model: ATA ST3320620AS (scsi)
Disk /dev/sda: 320GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt_sync_mbr
Interesting. Parted thinks this disk has a hybrid MBR. Günther, what is the result from:

fdisk -l /dev/sda
WARNING: GPT (GUID Partition Table) detected on '/dev/sda'! The util fdisk doesn't support GPT. Use GNU Parted.


Disk /dev/sda: 320.1 GB, 320072933376 bytes
256 heads, 63 sectors/track, 38761 cylinders, total 625142448 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1               1  4294967295  2147483647+  ee  GPT
mount
devtmpfs on /dev type devtmpfs (rw,relatime,size=12354704k,nr_inodes=3088676,mode=755)
tmpfs on /dev/shm type tmpfs (rw,relatime)
tmpfs on /run type tmpfs (rw,nosuid,nodev,relatime,mode=755)
devpts on /dev/pts type devpts (rw,relatime,gid=5,mode=620,ptmxmode=000)
/dev/sde2 on / type btrfs (rw,relatime,ssd,space_cache)
proc on /proc type proc (rw,relatime)
sysfs on /sys type sysfs (rw,relatime)
tmpfs on /sys/fs/cgroup type tmpfs (rw,nosuid,nodev,noexec,relatime,mode=755)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,release_agent=/lib/systemd/systemd-cgroups-agent,name=systemd)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpuacct,cpu)
cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
cgroup on /sys/fs/cgroup/net_cls type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event)
systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=31,pgrp=1,timeout=300,minproto=5,maxproto=5,direct)
tmpfs on /var/lock type tmpfs (rw,nosuid,nodev,relatime,mode=755)
tmpfs on /var/run type tmpfs (rw,nosuid,nodev,relatime,mode=755)
tmpfs on /media type tmpfs (rw,nosuid,nodev,noexec,relatime,mode=755)
debugfs on /sys/kernel/debug type debugfs (rw,relatime)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime)
securityfs on /sys/kernel/security type securityfs (rw,relatime)
mqueue on /dev/mqueue type mqueue (rw,relatime)
/dev/sde1 on /boot/efi type vfat (rw,relatime,fmask=0002,dmask=0002,allow_utime=0020,codepage=cp437,iocharset=iso8859-1,shortname=mixed,utf8,errors=remount-ro)
fusectl on /sys/fs/fuse/connections type fusectl (rw,relatime)
/dev/sdd2 on /data1 type ext4 (rw,relatime,data=ordered)
/dev/sdc1 on /data type ext4 (rw,relatime,data=ordered)
/dev/md126p4 on /windows/D type fuseblk (rw,nosuid,nodev,noexec,relatime,user_id=0,group_id=0,default_permissions,allow_other,blksize=4096)
/dev/md126p5 on /windows/E type fuseblk (rw,nosuid,nodev,noexec,relatime,user_id=0,group_id=0,default_permissions,allow_other,blksize=4096)
/dev/md126p6 on /home type ext4 (rw,relatime,data=ordered)
/dev/md126p3 on /windows/C type fuseblk (rw,nosuid,nodev,noexec,relatime,user_id=0,group_id=0,default_permissions,allow_other,blksize=4096)
rpc_pipefs on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw,relatime)
bbs:/data1/ on /datas1 type nfs (rw,relatime,vers=3,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=192.168.100.200,mountvers=3,mountport=1516,mountproto=udp,local_lock=none,addr=192.168.100.200)
bbs:/data2/ on /datas2 type nfs (rw,relatime,vers=3,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=192.168.100.200,mountvers=3,mountport=1516,mountproto=udp,local_lock=none,addr=192.168.100.200)
none on /var/lib/ntp/proc type proc (ro,nosuid,nodev,relatime)
gvfs-fuse-daemon on /run/user/gjn/gvfs type fuse.gvfs-fuse-daemon (rw,nosuid,nodev,relatime,user_id=1000,group_id=100)
Post by Chris Murphy
I want to make sure nothing is directly mounting sda or sdb.
OK
Post by Chris Murphy
Post by Günther J. Niederwimmer
But the good thing about that setup: GRUB2 found my Windows 7 installation on the RAID1 and made a boot entry. With mdadm this is not working; GRUB2 doesn't find Windows?
Let's do one thing at a time. We don't even really have the basics covered yet. I am not able to reproduce your problem where the kernel and parted complain about the alternate GPT header not being at the end of the disk. I'm using kernel 3.5.3-1.fc17 and mdadm 3.2.5.
OK Chris ;)

Thanks for the help,
--
mit freundlichen Grüßen / best regards.

Günther J. Niederwimmer
Chris Murphy
2012-09-24 20:18:27 UTC
Post by Günther J. Niederwimmer
Disk /dev/sda: 320.1 GB, 320072933376 bytes
256 heads, 63 sectors/track, 38761 cylinders, total 625142448 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1               1  4294967295  2147483647+  ee  GPT
Well that's screwy. But at least the PMBR is protecting all sectors (and quite a bit more that don't exist) of this disk.

Last time we checked, the array state was dirty. Is that still the case? Report the results from these two:

mdadm -D /dev/md126

mdadm -Ds


Chris Murphy


--
To unsubscribe from this list: send the line "unsubscribe linux-raid" i=
n
the body of a message to ***@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Günther J. Niederwimmer
2012-09-24 21:06:32 UTC
Permalink
Hello Chris,
Post by Chris Murphy
Post by Günther J. Niederwimmer
Disk /dev/sda: 320.1 GB, 320072933376 bytes
256 heads, 63 sectors/track, 38761 cylinders, total 625142448 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1               1  4294967295  2147483647+  ee  GPT
Well that's screwy. But at least the PMBR is protecting all sectors (and quite a bit more that don't exist) of this disk.

Last time we checked, the array state was dirty. Is that still the case?
Yes Sir ;).
Post by Chris Murphy
mdadm -D /dev/md126
/dev/md126:
Container : /dev/md/imsm0, member 0
Raid Level : raid1
Array Size : 312568832 (298.09 GiB 320.07 GB)
Used Dev Size : 312568964 (298.09 GiB 320.07 GB)
Raid Devices : 2
Total Devices : 2

State : active
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0


UUID : ec120401:b6ed52e6:3814fac4:48fcf4fc
Number Major Minor RaidDevice State
1 8 0 0 active sync /dev/sda
0 8 16 1 active sync /dev/sdb

Post by Chris Murphy
mdadm -Ds
ARRAY /dev/md/imsm0 metadata=imsm UUID=363f146f:e7f29dc8:f05996c3:577ead6a
ARRAY /dev/md/Volume0 container=/dev/md/imsm0 member=0 UUID=ec120401:b6ed52e6:3814fac4:48fcf4fc


Question: why does mdadm also create an md127? I only just found this.

/dev/md127:
Version : imsm
Raid Level : container
Total Devices : 2

Working Devices : 2


UUID : 363f146f:e7f29dc8:f05996c3:577ead6a
Member Arrays : /dev/md/Volume0

Number Major Minor RaidDevice

0 8 16 - /dev/sdb
1 8 0 - /dev/sda



--
mit freundlichen Grüßen / best regards.

Günther J. Niederwimmer
Chris Murphy
2012-09-24 21:12:22 UTC
Post by Günther J. Niederwimmer
Post by Chris Murphy
mdadm -D /dev/md126
Container : /dev/md/imsm0, member 0
Raid Level : raid1
Array Size : 312568832 (298.09 GiB 320.07 GB)
Used Dev Size : 312568964 (298.09 GiB 320.07 GB)
Raid Devices : 2
Total Devices : 2

State : active
Probably clean. Try:

mdadm -E /dev/sda
Post by Günther J. Niederwimmer
ARRAY /dev/md/imsm0 metadata=imsm UUID=363f146f:e7f29dc8:f05996c3:577ead6a
ARRAY /dev/md/Volume0 container=/dev/md/imsm0 member=0 UUID=ec120401:b6ed52e6:3814fac4:48fcf4fc

Question: why does mdadm also create an md127? I only just found this.
The kernel is mapping /dev/md/Volume0 to /dev/md126. It's doing the same on my system, although it's using md126. Not sure if that's controllable or not. Maybe someone else can answer it.


Chris Murphy

Günther J. Niederwimmer
2012-09-25 08:27:52 UTC
Hello Chris,
Post by Chris Murphy
Post by Günther J. Niederwimmer
Post by Chris Murphy
mdadm -D /dev/md126
Container : /dev/md/imsm0, member 0
Raid Level : raid1
Array Size : 312568832 (298.09 GiB 320.07 GB)
Used Dev Size : 312568964 (298.09 GiB 320.07 GB)
Raid Devices : 2
Total Devices : 2
State : active

Post by Chris Murphy
mdadm -E /dev/sda
/dev/sda:
Magic : Intel Raid ISM Cfg Sig.
Version : 1.1.00
Orig Family : e3958f4b
Family : e3958f4b
Generation : 00017ffe
Attributes : All supported
UUID : 363f146f:e7f29dc8:f05996c3:577ead6a
Checksum : 3e9e9e6b correct
MPB Sectors : 1
Disks : 2
RAID Devices : 1

Disk00 Serial : 6QF4WDE3
State : active
Id : 00000000
Usable Size : 625137928 (298.09 GiB 320.07 GB)

[Volume0]:
UUID : ec120401:b6ed52e6:3814fac4:48fcf4fc
RAID Level : 1
Members : 2
Slots : [UU]
Failed disk : none
This Slot : 0
Array Size : 625137664 (298.09 GiB 320.07 GB)
Per Dev Size : 625137928 (298.09 GiB 320.07 GB)
Sector Offset : 0
Num Stripes : 2441944
Chunk Size : 64 KiB
Reserved : 0
Migrate State : idle
Map State : normal
Dirty State : dirty

Disk01 Serial : 6QF4WF5Z
State : active
Id : 00000001
Usable Size : 625137928 (298.09 GiB 320.07 GB)


Post by Chris Murphy
Post by Günther J. Niederwimmer
ARRAY /dev/md/imsm0 metadata=imsm UUID=363f146f:e7f29dc8:f05996c3:577ead6a
ARRAY /dev/md/Volume0 container=/dev/md/imsm0 member=0 UUID=ec120401:b6ed52e6:3814fac4:48fcf4fc

Question: why does mdadm also create an md127? I only just found this.
Post by Chris Murphy
The kernel is mapping /dev/md/Volume0 to /dev/md126. It's doing the same on my system, although it's using md126. Not sure if that's controllable or not. Maybe someone else can answer it.
No problem, but I mention it because to me it is a mystery. ;)
--
mit freundlichen Grüßen / best regards.

Günther J. Niederwimmer
John Robinson
2012-09-25 09:28:34 UTC
[...]
Post by Günther J. Niederwimmer
Post by Chris Murphy
Post by Günther J. Niederwimmer
Question: why does mdadm also create an md127? I only just found this.
The kernel is mapping /dev/md/Volume0 to /dev/md126. It's doing the same on my system, although it's using md126. Not sure if that's controllable or not. Maybe someone else can answer it.
No problem, but I mention it because to me it is a mystery. ;)
It's the way IMSM works. You have md127, which is a "container", essentially spanning all the discs concerned. That container holds one or more RAID sets. You have md126, aka Volume0, as a RAID-1 set that fills the container, but you could have more than one RAID set, e.g. you could have a RAID-1 as md126 and a RAID-10 as md125.
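As a rough sketch of what multiple RAID sets in one container look like with mdadm (names and the -z size are only examples, and this assumes the first volume leaves free space in the container):

mdadm -C /dev/md/Volume0 /dev/md/imsm0 -n 2 -l 1 -z 100G   # first RAID set, using part of the container
mdadm -C /dev/md/Volume1 /dev/md/imsm0 -n 2 -l 1           # second RAID set in the remaining space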

Cheers,

John.

Chris Murphy
2012-09-25 16:55:39 UTC
Post by Günther J. Niederwimmer
Dirty State : dirty
Basically this is my only remaining concern, and while I don't know why it's dirty, I think it needs to be resolved if you care about having RAID 1 in the first place. My best guess is that neither md on Linux nor the IMSM driver on Windows has an unambiguous way to determine which disk is correct, which is why it hasn't just sync'd them. So you kinda have to pick one (?) and force a sync - again I'll have to defer to someone else to answer that question, but it's probably not ideal to just leave it in a dirty state.
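For a native md array a consistency check can be started through sysfs, e.g.:

echo check > /sys/block/md126/md/sync_action   # read-only compare of the two mirrors
cat /proc/mdstat                               # watch progress

Whether that is the right thing to do here, with IMSM external metadata managed by mdmon, is exactly the part I'd want someone else to confirm.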


Chris Murphy

Günther J. Niederwimmer
2012-09-26 12:17:42 UTC
Hello Chris,
Post by Chris Murphy
Post by Günther J. Niederwimmer
Dirty State : dirty
Basically this is my only remaining concern, and while I don't know why it's dirty, I think it needs to be resolved if you care about having RAID 1 in the first place. My best guess is that neither md on Linux nor the IMSM driver on Windows has an unambiguous way to determine which disk is correct, which is why it hasn't just sync'd them. So you kinda have to pick one (?) and force a sync - again I'll have to defer to someone else to answer that question, but it's probably not ideal to just leave it in a dirty state.

OK, I ran the Intel tool in Windows two times with the latest tool I found.

The tool doesn't find any problem (?) and doesn't repair anything, but mdadm....

/dev/sda:
Magic : Intel Raid ISM Cfg Sig.
Version : 1.1.00
Orig Family : e3958f4b
Family : e3958f4b
Generation : 00019ef2
Attributes : All supported
UUID : 363f146f:e7f29dc8:f05996c3:577ead6a
Checksum : 3e8b1bfe correct
MPB Sectors : 1
Disks : 2
RAID Devices : 1

Disk00 Serial : 6QF4WDE3
State : active
Id : 00000000
Usable Size : 625137928 (298.09 GiB 320.07 GB)

[Volume0]:
UUID : ec120401:b6ed52e6:3814fac4:48fcf4fc
RAID Level : 1
Members : 2
Slots : [UU]
Failed disk : none
This Slot : 0
Array Size : 625137664 (298.09 GiB 320.07 GB)
Per Dev Size : 625137928 (298.09 GiB 320.07 GB)
Sector Offset : 0
Num Stripes : 2441944
Chunk Size : 64 KiB
Reserved : 0
Migrate State : idle
Map State : normal
Dirty State : clean

Disk01 Serial : 6QF4WF5Z
State : active
Id : 00000001
Usable Size : 625137928 (298.09 GiB 320.07 GB)

/dev/sdb:
Magic : Intel Raid ISM Cfg Sig.
Version : 1.1.00
Orig Family : e3958f4b
Family : e3958f4b
Generation : 00019f47
Attributes : All supported
UUID : 363f146f:e7f29dc8:f05996c3:577ead6a
Checksum : 3e8c1c53 correct
MPB Sectors : 1
Disks : 2
RAID Devices : 1

Disk01 Serial : 6QF4WF5Z
State : active
Id : 00000001
Usable Size : 625137928 (298.09 GiB 320.07 GB)

[Volume0]:
UUID : ec120401:b6ed52e6:3814fac4:48fcf4fc
RAID Level : 1
Members : 2
Slots : [UU]
Failed disk : none
This Slot : 1
Array Size : 625137664 (298.09 GiB 320.07 GB)
Per Dev Size : 625137928 (298.09 GiB 320.07 GB)
Sector Offset : 0
Num Stripes : 2441944
Chunk Size : 64 KiB
Reserved : 0
Migrate State : idle
Map State : normal
Dirty State : dirty
°°°°°°°

Disk00 Serial : 6QF4WDE3
State : active
Id : 00000000
Usable Size : 625137928 (298.09 GiB 320.07 GB)

--
mit freundlichen Grüßen / best regards.

Günther J. Niederwimmer
Chris Murphy
2012-09-26 19:33:49 UTC
Post by Chris Murphy
dirty state.
Post by Günther J. Niederwimmer
OK, I ran the Intel tool in Windows two times with the latest tool I found.

The tool doesn't find any problem (?) and doesn't repair anything, but mdadm…

UUID : ec120401:b6ed52e6:3814fac4:48fcf4fc
Map State : normal
Dirty State : clean


UUID : ec120401:b6ed52e6:3814fac4:48fcf4fc
Map State : normal
Dirty State : dirty
°°°°°°°
I don't understand this UI. Are there two Volume0's?

I can see how the dirty state would apply independently among physical devices /dev/sda and /dev/sdb. But the virtual device, the array volume, "Volume0", seems like it would have only one instance. So I don't understand how it can be clean in one case and dirty in another.



Chris Murphy

NeilBrown
2012-09-27 02:43:04 UTC
Post by Chris Murphy
Post by Günther J. Niederwimmer
OK, I ran the Intel tool in Windows two times with the latest tool I found.
The tool doesn't find any problem (?) and doesn't repair anything, but mdadm….
UUID : ec120401:b6ed52e6:3814fac4:48fcf4fc
Map State : normal
Dirty State : clean
UUID : ec120401:b6ed52e6:3814fac4:48fcf4fc
Map State : normal
Dirty State : dirty
°°°°°°°
I don't understand this UI. Are there two Volume0's?
I can see how the dirty state would apply independently among physical devices /dev/sda and /dev/sdb. But the virtual device, the array volume, "Volume0", seems like it would have only one instance. So I don't understand how it can be clean in one case and dirty in another.
It just means that when looking at the metadata on /dev/sda, we see it marked
'clean', and when looking at the metadata on /dev/sdb, we see that it is
marked 'dirty'.
Possibly something wrote to Volume0 between these two events, so the volume
got marked 'dirty' so the write could happen. Wait a few seconds and it
should get marked 'clean' again.

Or possibly there is a bug somewhere.
I would open two windows. In one run
watch -d mdadm -E /dev/sda
and in the other run
watch -d mdadm -E /dev/sdb

then access the array, or maybe leave it alone, and see how the metadata
changes with time.

NeilBrown

John Robinson
2012-09-22 15:31:46 UTC
Post by Chris Murphy
If you're making the RAID with that, it defaults to metadata version 1.2. But to be sure:
mdadm -E /dev/mdX
mdadm --detail-platform
mdadm -D /dev/md/imsm
mdadm -E /dev/sdX
I don't think there's anything wrong here.

The kernel sees the whole discs, sda and sdb, and complains that the GPT partition table looks wrong because the second copy isn't at the end of the discs. That's correct: at the end of the raw discs is the IMSM metadata. Once you've assembled the IMSM array with mdadm, the partition table inside /dev/md/Volume0 is correct.
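For what it's worth, the numbers line up: the raw discs are 625142448 sectors, so a backup GPT header on the bare disc would be expected at sector 625142447 (the last sector), while the IMSM volume is 625137664 sectors, putting its backup header at sector 625137663 - hence the kernel's "GPT:625137663 != 625142447". The difference, 625142448 - 625137664 = 4784 sectors, is the area at the end of each disc that the IMSM volume doesn't expose.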

You'd see the same thing with a native md device with metadata 0.90 or 1.0 made from whole discs and with a GPT partition table inside.

Don't try to change the partition tables on /dev/sda and sdb or you will damage the IMSM metadata.

Cheers,

John.

Chris Murphy
2012-09-22 18:45:39 UTC
Post by John Robinson
I don't think there's anything wrong here.
The kernel sees the whole discs, sda and sdb, and complains that the GPT partition table looks wrong because the second copy isn't at the end of the discs. That's correct: at the end of the raw discs is the IMSM metadata.
OK so sda and sdb shouldn't have been partitioned in the first place, is what that tells me.
Post by John Robinson
Don't try to change the partition tables on /dev/sda and sdb or you will damage the IMSM metadata.
Sounds like IMSM metadata is either not well designed for GPT, or it was intended to be placed on an unpartitioned disk in the first place.


Chris Murphy

NeilBrown
2012-09-22 21:58:43 UTC
Post by Chris Murphy
Post by John Robinson
I don't think there's anything wrong here.
The kernel sees the whole discs, sda and sdb, and complains that the GPT partition table looks wrong because the second copy isn't at the end of the discs. That's correct: at the end of the raw discs is the IMSM metadata.
OK so sda and sdb shouldn't have been partitioned in the first place, is what that tells me.
Post by John Robinson
Don't try to change the partition tables on /dev/sda and sdb or you will damage the IMSM metadata.
Sounds like IMSM metadata is either not well designed for GPT, or it was intended to be placed on an unpartitioned disk in the first place.
IMSM metadata is definitely intended to be placed on an un-partitioned disk.

The only real point of IMSM is to provide interoperability with other
implementations, whether the one for Windows (allowing dual-boot) or the one
in the bios (allowing boot-from-RAID5 etc).
Those other implementations use IMSM on the whole device, so it would be
pointless using mdadm/IMSM on partitions.

Note that I haven't really been following this thread, so I might have missed
the point. I'm really just responding to that last sentence.

NeilBrown
John Robinson
2012-09-22 22:07:06 UTC
Post by Chris Murphy
Post by John Robinson
I don't think there's anything wrong here.
The kernel sees the whole discs, sda and sdb, and complains that the GPT partition table looks wrong because the second copy isn't at the end of the discs. That's correct: at the end of the raw discs is the IMSM metadata.
OK so sda and sdb shouldn't have been partitioned in the first place, is what that tells me.

That's not what I'm saying. I'm saying that sda and sdb weren't partitioned in the first place. I'm saying that when the Linux kernel boots, and the AHCI driver starts, it sees the IMSM member discs as raw discs, which get probed for partitions. The GPT partition probe spots one of the copies of the GPT partition, but can't find the other one because the IMSM metadata's there. Then later on, md starts, reads the IMSM metadata, and presents the RAID set, including the GPT partition table that the raw-disc probe already whinged about, but which in the RAID set is correct.

The error messages that Günther saw are a false positive and harmless - or at least, harmless until someone starts telling him to go messing around rewriting the contents of the raw drives underneath md by deleting the misidentified partition tables, the effect of which is likely to be to damage his partitions inside the IMSM array and/or destroy the IMSM array metadata.
Post by Chris Murphy
Post by John Robinson
Don't try to change the partition tables on /dev/sda and sdb or you will damage the IMSM metadata.
Sounds like IMSM metadata is either not well designed for GPT
I think I'd describe it as being designed so that individual discs from RAID-1 mirrors can be read independently.

IMSM predates GPT anyway.
Post by Chris Murphy
or it was intended to be placed on an unpartitioned disk in the first place.

It can't be anything else.

Cheers,

John.
Chris Murphy
2012-09-22 22:30:28 UTC
Post by John Robinson
I don't think there's anything wrong here.
The kernel sees the whole discs, sda and sdb, and complains that the GPT partition table looks wrong because the second copy isn't at the end of the discs. That's correct: at the end of the raw discs is the IMSM metadata.
OK so sda and sdb shouldn't have been partitioned in the first place, is what that tells me.
That's not what I'm saying. I'm saying that sda and sdb weren't partitioned in the first place.

I disagree, it appears they were partitioned, but we'll see when the OP posts back the results from gdisk. The kernel messages in the very first post seem to imply it was partitioned (at least, at one time) or there wouldn't be an sdb1, sdb2, sdb3, etc.
Sep 17 09:54:27 techz kernel: [    1.204710]  sdb: sdb1 sdb2 sdb3 sdb4 sdb5 sdb6
Post by John Robinson
I'm saying that when the Linux kernel boots, and the AHCI driver starts, it sees the IMSM member discs as raw discs, which get probed for partitions. The GPT partition probe spots one of the copies of the GPT partition, but can't find the other one because the IMSM metadata's there.

The IMSM metadata will go at the end of the PHYSICAL disk. If you partition the virtual disk, i.e. the array, /dev/md/imsm0, then the GPT alternate header goes at the end of the virtual disk, NOT the end of the physical disk.

Besides, if what you say is true, as soon as he GPT partitioned the array, if the secondary GPT header stepped on IMSM metadata, then the array would instantly break. That's not what happened.
Post by John Robinson
The error messages that Günther saw are a false positive and harmless

I think so too, but they are the result of the raw disks being previously partitioned. Possibly even they are stale GPTs that should have been nuked before getting started with IMSM RAID.
Post by John Robinson
- or at least, harmless until someone starts telling him to go messing around rewriting the contents of the raw drives underneath md by deleting the misidentified partition tables, the effect of which is likely to be to damage his partitions inside the IMSM array and/or destroy the IMSM array metadata.

There's every reason to believe the primary GPT header on /dev/sda and /dev/sdb is a.) intact, b.) in LBA 1, and c.) cannot also include IMSM metadata. So removing that header would obliterate the GPT on the physical disks and he'd stop getting the error message. If he's really bothered by what is in effect a harmless error message, because that (stale) GPT doesn't matter anyway.

Chris Murphy

John Robinson
2012-09-23 12:00:51 UTC
[...]
I'm saying that when the Linux kernel boots, and the AHCI driver starts, it sees the IMSM member discs as raw discs, which get probed for partitions. The GPT partition probe spots one of the copies of the GPT partition, but can't find the other one because the IMSM metadata's there.
The IMSM metadata will go at the end of the PHYSICAL disk. If you partition the virtual disk, i.e. the array, /dev/md/imsm0, then the GPT alternate header goes at the end of the virtual disk, NOT the end of the physical disk.
That's right. But the main GPT partition table will go at LBA=1 of the
virtual disc, which maps to LBA=1 of the physical disc. So when the
physical disc gets probed for partitions, the main GPT partition table
is visible at LBA=1, but there isn't a secondary GPT partition table at
the end of the disc, hence the error.

Cheers,

John.

Chris Murphy
2012-09-23 17:44:32 UTC
That's right. But the main GPT partition table will go at LBA=1 of the virtual disc, which maps to LBA=1 of the physical disc. So when the physical disc gets probed for partitions, the main GPT partition table is visible at LBA=1, but there isn't a secondary GPT partition table at the end of the disc, hence the error.
The primary header contains the location of the alternate header. So the kernel wouldn't be looking at the end of the physical disk if the GPT were created for the virtual disk, and not the physical disk.

If IMSM only applies an offset to the end of the disk, such that it merely changes the last usable LBA and therefore there is a 1:1 correlation between array LBA's and physical disk LBA's, the kernel would find the alternate GPT header. Yet it isn't, so still something isn't right.
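For what it's worth, the field the kernel reads is easy to inspect by hand: on a 512-byte-sector disk the primary header lives in LBA 1 and its AlternateLBA field is the 8 bytes at offset 32, so something like this (on a little-endian machine) prints the sector where the backup header is claimed to be:

dd if=/dev/sda bs=512 skip=1 count=1 2>/dev/null | od -A d -t u8 -j 32 -N 8

On Günther's disks that should come out as 625137663, i.e. the end of the IMSM volume rather than the end of the raw disk.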

Chris Murphy


Chris Murphy
2012-09-23 18:53:07 UTC
Post by Chris Murphy
If IMSM only applies an offset to the end of the disk, such that it merely changes the last usable LBA and therefore there is a 1:1 correlation between array LBAs and physical disk LBAs, the kernel would find the alternate GPT header. Yet it isn't, so still something isn't right.

1.
IMSM metadata starts at LBA -32 from the end of the physical disk after issuing this command:
mdadm -C /dev/md/imsm /dev/sd[bc] -n 2 -e imsm

At this point nothing else is on the disk, per hexdump.
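A rough way to reproduce that check (sdb is just the example member device): dump the last 32 sectors and look for the IMSM signature text in the ASCII column, e.g.

dd if=/dev/sdb bs=512 skip=$(( $(blockdev --getsz /dev/sdb) - 32 )) count=32 2>/dev/null | hexdump -C | grep -B1 -A2 'Intel Raid'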

2.
A small amount of additional data is added in those reserved 32 sectors at the end of the physical disk after issuing this command:
mdadm -C /dev/md/vol0 /dev/md/imsm -n 2 -l 1

3.
gdisk sees /dev/sdb as having 16777216 sectors.
gdisk sees /dev/md/vol0 as having 16769024 sectors.


4.
Upon creating a GPT on /dev/md/vol0, identical structures at identical byte offsets are created on /dev/sd[bc] and /dev/md/vol0.

The primary GPT header says the alternate header is at 0xffdfff.
The alternate GPT header is at 0xffdfff, or sector 16769023, right where it should be.

Regardless of whether the kernel sees the disk as Intel RAID or a bare disk, it finds the alternate GPT header, and doesn't complain about anything.


Conclusions:
A. There is a 1:1 correlation between physical disk LBA and array LBA (at least for RAID 1); there is merely an offset at the end of the disk for IMSM metadata.

B. Günther's disks had GPTs made on the physical disk devices themselves, prior to the creation of IMSM metadata. When IMSM metadata was created, the alternate GPT header and table data were squished because they were in the wrong location.

C. The fact that Günther's "mdadm -E" command on the physical disks reveals both are in a dirty state indicates to me the array is not assembled, and is not presently mirroring. So I think he's actually not booted from or using the array at all, at least not from within Linux.

Chris Murphy

John Robinson
2012-09-24 09:37:31 UTC
On 23/09/2012 19:53, Chris Murphy wrote:
[...]
Upon creating a GPT on /dev/md/vol0, identical structures at identical byte offsets are created on /dev/sd[bc] and /dev/md/vol0.
The primary GPT header says the alternate header is at 0xffdfff.
The alternate GPT header is at 0xffdfff, or sector 16769023, right where it should be.
Regardless of whether the kernel sees the disk as Intel RAID or a bare disk, it finds the alternate GPT header, and doesn't complain about anything.
Yes, it does. It finds the alternate header just fine, according to the
offset in the primary header, but warns that it's not at the end of the
disc, which is where it expected to find it:

Sep 17 09:54:27 techz kernel: [    1.204681] GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 17 09:54:27 techz kernel: [    1.204685] GPT:625137663 != 625142447
Sep 17 09:54:27 techz kernel: [    1.204687] GPT:Alternate GPT header not at the end of the disk.
Sep 17 09:54:27 techz kernel: [    1.204689] GPT:625137663 != 625142447

I know that's not what I said earlier, so my apologies for that, but it
remains true that this is not a problem and there are no more GPT
headers anywhere else on the disc than those written inside the IMSM volume.
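
One way to check that claim directly (slow on a big disk, but fine as a sketch): scan the raw device for the GPT header signature, which sits at the start of a sector and so lands at the start of a hexdump line:

  hexdump -C /dev/sdb | grep 'EFI PART'   # prints the byte offset of every GPT header signature on the raw disk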

Cheers,

John.

Chris Murphy
2012-09-24 17:35:57 UTC
Permalink
[...]
Upon creating a GPT on /dev/md/vol0, identical structures at identical byte offsets are created on /dev/sd[bc] and /dev/md/vol0.

The primary GPT header says the alternate header is at 0xffdfff.
The alternate GPT header is at 0xffdfff, or sector 16769023, right where it should be.

Regardless of whether the kernel sees the disk as Intel RAID or a bare disk, it finds the alternate GPT header, and doesn't complain about anything.

Yes, it does. It finds the alternate header just fine, according to the offset in the primary header, but warns that it's not at the end of the disc, which is where it expected to find it:

Not for me, is what I meant. Linux 3.5.3-1.fc17 does not complain about the alternate GPT header not being at the end of the disk. However, parted 3.0 does. And gdisk 0.8.5 does not.

Further, it seems increasingly clear as I'm reading the UEFI spec on GPT that IMSM is incompatible with GPT. The GPT alternate header is, by spec, expected at the end of the disk, but IMSM also demands to be in basically the exact same location. And yet Intel is shipping UEFI hardware, which requires GPT disks, with IMSM on board, which also requires metadata in the same location? How is this not a WTF moment?

I'm still wondering why parted reports Günther's disks have hybrid MBRs. That's weird, even if unrelated. And I wonder why his kernel complains about the GPTs not being at the end of the disk, but my kernel doesn't. Both are 3.5 kernels.
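
Something like these read-only commands shows how each tool reports the same layout (a sketch; /dev/sdb here is a placeholder for whichever member disk you want to look at):

  dmesg | grep -i gpt            # what the kernel said when it scanned the disk
  parted /dev/sdb unit s print   # parted 3.0 warns about the backup table location
  gdisk -l /dev/sdb              # gdisk 0.8.5 prints the table without complaint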


Chris Murphy
Roberto Spadim
2012-09-24 18:17:47 UTC
Permalink
I used some 1.5TB disks a year or two ago; when I booted a disk for the first time, the BIOS recognized it as a 1TB disk. I changed some BIOS parameters (took them off automatic) and could then use the full 1.5TB. Could you check whether it's a BIOS problem? In my case the disk appeared as 1TB when it booted wrong, and after changing the BIOS options it worked fine at 1.5TB.
[...]
Upon creating a GPT on /dev/md/vol0, identical structures at identical byte offsets are created on /dev/sd[bc] and /dev/md/vol0.
The primary GPT header says the alternate header is at 0xffdfff.
The alternate GPT header is at 0xffdfff, or sector 16769023, right where it should be.
Regardless of whether the kernel sees the disk as Intel RAID or a bare disk, it finds the alternate GPT header, and doesn't complain about anything.
Yes, it does. It finds the alternate header just fine, according to the offset in the primary header, but warns that it's not at the end of the disc, which is where it expected to find it.
Not for me, is what I meant. Linux 3.5.3-1.fc17 does not complain about the alternate GPT header not being at the end of the disk. However, parted 3.0 does. And gdisk 0.8.5 does not.
Further, it seems increasingly clear as I'm reading the UEFI spec on GPT that IMSM is incompatible with GPT. The GPT alternate header is, by spec, expected at the end of the disk, but IMSM also demands to be in basically the exact same location. And yet Intel is shipping UEFI hardware, which requires GPT disks, with IMSM on board, which also requires metadata in the same location? How is this not a WTF moment?
I'm still wondering why parted reports Günther's disks have hybrid MBRs. That's weird, even if unrelated. And I wonder why his kernel complains about the GPTs not being at the end of the disk, but my kernel doesn't. Both are 3.5 kernels.
--
Roberto Spadim
Spadim Technology / SPAEmpresarial
Chris Murphy
2012-09-24 18:55:13 UTC
Permalink
I used some 1.5TB disks a year or two ago; when I booted a disk for the first time, the BIOS recognized it as a 1TB disk. I changed some BIOS parameters (took them off automatic) and could then use the full 1.5TB. Could you check whether it's a BIOS problem? In my case the disk appeared as 1TB when it booted wrong, and after changing the BIOS options it worked fine at 1.5TB.

I don't think that's related. The disk sizes aren't being misreported. And it's also a UEFI [1] computer, not BIOS [2], therefore again there shouldn't be any concern about firmware-induced disk size misinterpretation. But then, I'd also not expect a UEFI computer to offer a GPT-incompatible RAID implementation, so I'm still hoping I've got this wrong somehow.


Chris Murphy


[1] UEFI has no bugs. Not a single one.

[2] I can't be the only one who finds it irritating that even the manufacturers persist in conflating UEFI and BIOS: all of Intel's firmware downloads for Günther's/OP's motherboard are referred to as BIOS. I really wish they wouldn't do that.
Miquel van Smoorenburg
2012-09-25 07:33:45 UTC
Permalink
Further, it seems increasingly clear as I'm reading the UEFI spec on
GPT, that IMSM is incompatible with GPT. The GPT alternate header by
spec is expected at the end of the disk, but then IMSM also demands
to be in basically the exact same location. And yet Intel is shipping
UEFI hardware, which require GPT disks, with IMSM on board that also
requires metadata in the same location? How is this not a WTF
moment?
It isn't. It's just a RAID setup with the superblock at the end of the disk: as soon as RAID1 is activated, the RAID volume you see is just a bit smaller than the raw disk size, and the GPT alternate header is at the right place, at the end of the RAID volume.

The thing is that the Linux kernel detects the GPT partition table on the raw disks before the RAID1 volume is assembled. More of a cosmetic bug.

People have argued before that the kernel should do no partition-table discovery at all, and just leave it to userspace. That's what kpartx is for, for example. In that case, with a correctly ordered and configured stack, the disks would get detected, any whole-disk RAID volumes would get assembled, and only then would partition detection be done. But I think that never got popular because of all the "I want to boot a kernel without an initramfs" people.
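
As a rough sketch of that ordering with the tools mentioned here (device names are examples, and the partition mapping names kpartx produces depend on the device name):

  mdadm --assemble --scan        # bring up the IMSM container and the RAID1 volume first
  kpartx -a /dev/md/vol0         # then create partition mappings under /dev/mapper/ for the array, not the raw disks
  blockdev --getsz /dev/sdb      # raw disk size, in sectors
  blockdev --getsz /dev/md/vol0  # the array is slightly smaller, so its backup GPT sits short of the raw disk's end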

Mike.
Chris Murphy
2012-09-21 21:30:03 UTC
Permalink
And more, since you mentioned mdadm and dmraid:
http://forums.gentoo.org/viewtopic-t-888520-start-0.html

It sounds like you need to pick one, and the one to pick is mdraid; expressly disable dmraid.

Personally, I would suggest you go into the BIOS (firmware setup), blow away this RAID, and recreate it from scratch. Then partition with gdisk from a LiveCD (e.g. gdisk is on the Fedora 17 LiveCD). Then install Windows. Then go back to the LiveCD and confirm that the GPT is still OK. Then install Linux.
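
For the LiveCD check, something like this should be enough (a sketch; substitute the real disk node, and note that sgdisk ships in the same gptfdisk package as gdisk):

  gdisk -l /dev/sda    # print the partition table, reading both GPT headers
  sgdisk -v /dev/sda   # verify pass; reports problems with either header or the table CRCs

For the "expressly disable dmraid" part, the Fedora installer at least accepts a nodmraid boot option, if I remember right; other distros have their own equivalent.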


Chris Murphy
Chris Murphy
2012-09-21 21:49:53 UTC
Permalink
Another thing to check is whether your SATA controller has its own RAID, with RAID vs AHCI modes. If so, make sure it's in AHCI mode. If you have two RAIDs configured and don't know about it, that'll also cause problems.
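
You can usually see which mode the controller is in from a running Linux system, without rebooting into the setup screen; a sketch (the exact strings vary by chipset):

  lspci -nn | grep -i -E 'sata|raid'        # class string shows e.g. "SATA controller ... AHCI" or "RAID bus controller"
  lspci -k | grep -i -A 3 -E 'sata|raid'    # which kernel driver is bound (ahci vs a vendor RAID driver)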

Chris Murphy