
Recover Orphaned RAID 0 from Silicon Image controller with Linux Software RAID?

By Daniel Rodriguez

I had an old XP machine running RAID 0 with 2 x 120 GB drives on a Sil3112 controller. The drives are intact (as far as I know), but the motherboard or P4 CPU is toast.

I'd love to recover the precious photos from the drives, or dump their entire contents onto an external drive. I'm beginning to understand that Ubuntu might be able to rebuild the array on a modern system so I can recover the data using the live CD. Is this correct?

If so, do I need a machine with existing RAID ports/controllers, etc., or do I just need a board with regular SATA connectors? Many thanks in advance for any assistance on this topic.


2 Answers

Had pretty much the same problem with an IT8212 "fake RAID" controller and a couple of disks in RAID 0. Surprisingly, mdadm could solve the problem, with a little trial and error, though. If you know/remember the chunk size (the quantum of each parallel read/write on each disk), you can try something like this (don't try it just yet):

 sudo mdadm --build --verbose --run --chunk=64 /dev/md0 --level=raid0 --raid-devices=2 /dev/sde /dev/sdd

If /dev/md0 is already occupied, feel free to use incremental numbers instead of 0. Of course, /dev/sde and /dev/sdd were the disks in my case, and 64 KiB was my chunk size, but you can easily determine your own configuration with: sudo fdisk -l

This lists all partitions of all your physical disks. The two that interest us should be exactly identical in terms of size, cylinders, sectors, etc. Furthermore, since parameter ordering definitely matters, the first one (in place of my /dev/sde) should appear to have a partition table (or part of one, if you had used extended DOS partitions), and the second one (my /dev/sdd) will appear to be totally corrupt. Be careful not to touch your other, non-RAID disks, though :-) OTOH, if both appear invalid, you can stop reading this answer; it won't work for you :-(

Using --build (instead of --create) has the advantage of skipping the creation of superblock information, avoiding the risk of erasing data and, most importantly, starting the real data from sector 0, thus creating a so-called "legacy" assembly, which is what most "fake RAID" chipset vendors do. (Educated guess: their superblock info is written to the chipset's NV memory, so they don't really need to write any metadata to the disks.)

Now, if you don't remember the original chunk size, trial and error is required. Try building with a different chunk size each time, then run fdisk -l on /dev/md0 explicitly and see if the partition info is displayed correctly. If so, try mounting one of the discovered partitions read-only and verify some data (preferably text) to be sure. If not, undo. To undo the mdadm build and make the disks available again, use sudo mdadm --stop /dev/md0 and sudo mdadm --remove /dev/md0. Then try again with a different chunk size.
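The trial-and-error loop can be sketched as a small script. This is a hedged sketch, not something to run blindly: the device names /dev/sde and /dev/sdd and the candidate chunk sizes are assumptions from my setup, and the commands are echoed rather than executed so you can review each step before running it for real.

```shell
#!/bin/sh
# Sketch of the chunk-size trial loop (devices and sizes are assumptions;
# substitute your own). Commands are printed, not executed.
try_chunks() {
    for chunk in 16 32 64 128 256; do
        echo "sudo mdadm --build --run --chunk=$chunk /dev/md0 --level=raid0 --raid-devices=2 /dev/sde /dev/sdd"
        echo "sudo fdisk -l /dev/md0       # does the partition table look sane?"
        echo "sudo mdadm --stop /dev/md0   # undo before the next attempt"
    done
}

try_chunks
```

Once one chunk size yields a sane partition table, stop the loop there and move on to the read-only mount check.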

Although mounting would suffice (the ability to mount a corrupt partition is extremely rare), here's a somewhat more advanced trick to check for the correct chunk size, by searching for a partition identifier string. Hopefully, a valid partition table describes the starting sectors of one or more recognizable partitions. Again, in my case:

 /dev/sde1  *         63   47118644   23559291   7  HPFS/NTFS/exFAT
 /dev/sde2      47118645  102896324   27888840   7  HPFS/NTFS/exFAT
 /dev/sde4     102896325  980463014  438783345   5  Extended

In almost every case, you can find identifier strings close to the beginning of every partition type, e.g. the string 'NTFS' itself in the case of NTFS. Using dd, copy the first few sectors of the disk (/dev/md0) to a file.

 sudo dd if=/dev/md0 of=testfile bs=1024 count=256 skip=0

This copies the first (skip=0) 256 KiB of disk data to 'testfile'. Now, using something like:

strings -a -t d testfile | grep NTFS

or x instead of d if you prefer hex, or, more simply,

hexdump -C testfile | less

then, search with '/'

and you can find the position of the string within the disk. Compare that with the calculated partition offset. For example, in my case, the first NTFS partition started at sector 63, which gives an offset of 63 * 512 = 32256. The 'NTFS' string was found at offset 32259, so we'd consider this a match (3 bytes past the start of the sector). (Don't forget to add skip * bs to your calculation if a non-zero skip was used in dd.) Unfortunately, a match doesn't necessarily mean the chunk size is correct, whereas a mismatch means it surely isn't.
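The offset arithmetic above can be written out as a short check. This is a sketch using my numbers (sector 63, string found at 32259, dd run with skip=0 and bs=1024); plug in your own values from fdisk and strings.

```shell
#!/bin/sh
# Offset check from the text above. All numbers are from my case and
# are assumptions for your setup.
start_sector=63     # partition start, from fdisk -l
found_at=32259      # byte offset of 'NTFS', from strings -t d
skip=0              # dd skip= value used when copying testfile
bs=1024             # dd bs= value

expected=$((start_sector * 512 + skip * bs))   # 63 * 512 = 32256
delta=$((found_at - expected))

# The identifier string sits a few bytes into the boot sector, so a
# small non-negative delta counts as a match; anything else does not.
if [ "$delta" -ge 0 ] && [ "$delta" -lt 512 ]; then
    echo "match: string is $delta bytes into the sector"
else
    echo "mismatch"
fi
```

Remember the caveat from above: a match only means this chunk size is a candidate; a mismatch rules it out.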


As far as I know, you can't do that with mdadm/Linux software RAID, since the only foreign metadata formats it supports are Intel(R) Matrix Storage Manager and DDF.

You would have to get an exact or similar motherboard, or a controller with the same chip (and probably the same firmware), and be extremely cautious when "importing" the drives, or otherwise your data is lost.

I've been there a long time ago with Silicon Image chips and realized how bad this solution is (you should always have an exact spare controller ready in case something bad happens).

Fake RAID support

The articles on dmraid/Fake RAID from the Ubuntu Community Documentation and the Arch Linux Wiki might be of help to you.

Linux Software RAID vs Fake RAID

Excerpt from the Linux Raid Wiki on fake RAID:

Proper hardware RAID systems are presented to linux as a block device [...]

BIOS / firmware RAID aka fake raid cards:

  • [...]
  • if the 'raid' card or motherboard dies then you often have to find an exact replacement and this can be tricky for older cards
  • if drives move to other machines the data can't easily be read
  • there is usually no monitoring or reporting on the array - if a problem occurs then it may not show up unless the machine is rebooted and someone is actually watching the BIOS boot screen (or until multiple errors occur and your data is lost)
  • you are entrusting your data to unpatchable software written into a BIOS that has probably not been tested, has no support mechanism and almost no community.
  • [...]

Given the point of RAID is usually to reduce risk it is fair to say that using fakeraid is a terrible idea and it's better to focus energy on either true HW raid or in-kernel SW raid [...]

