Ian Young
2014-10-16 19:59:18 UTC
I've been trying to fix a degraded array for a couple of months now,
and it's getting frustrating enough that I'm willing to put a bounty
on the correct solution. The array can start in a degraded state and
the data is accessible, so I know this is possible to fix. Any
takers? I'll bet someone could use some beer money or a contribution
to their web hosting costs.
Here's how the system is set up: there are six 3 TB drives. Each
drive has a BIOS boot partition, and the rest of the space on each
drive is a large GPT partition that is combined into a RAID 10 array.
On top of the array there are four LVM volumes: /boot, root (/), swap, and /srv.
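For reference, the stack is layered roughly like this (the volume group
name and LV sizes below are illustrative, not my exact creation
commands; the structure is what matters):

  # six GPT disks, each with a small BIOS boot partition (sdX1)
  # and one large RAID member partition (sdX2)
  mdadm --create /dev/md0 --level=10 --raid-devices=6 /dev/sd[abcdef]2

  # LVM sits on top of the md device
  pvcreate /dev/md0
  vgcreate vg0 /dev/md0
  lvcreate -L 1G  -n boot vg0
  lvcreate -L 50G -n root vg0
  lvcreate -L 8G  -n swap vg0
  lvcreate -l 100%FREE -n srv vg0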
Here's the problem: /dev/sdf failed. I replaced it, but as the array
was resyncing, read errors on /dev/sde kicked the new sdf out and made
it a spare. The array is now in a precarious degraded state: all it
would take for the entire array to fail is for /dev/sde to die, and
it's already showing signs that it will. I have tried forcing the
array to assemble from /dev/sd[abcde]2 and then forcing it to add
/dev/sdf2, but that still adds sdf2 as a spare (roughly the commands
sketched below). I've also tried "echo check >
/sys/block/md0/md/sync_action", but that finishes immediately and
changes nothing.
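Concretely, the attempts looked like this (reconstructed from memory,
so the exact invocations may differ slightly):

  # stop the array, then force-assemble from the five members that still hold data
  mdadm --stop /dev/md0
  mdadm --assemble --force /dev/md0 /dev/sd[abcde]2

  # add the replacement drive back -- it only ever comes back as a spare
  mdadm /dev/md0 --add /dev/sdf2

  # ask md to scrub the array; this returns immediately and does nothing
  echo check > /sys/block/md0/md/sync_action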
Can anyone solve this? I'd be happy to pay you for your knowledge.