If you are looking for the epic motorcycle journey blog that I've written, please see the Miles By Motorcycle site I put together. 
  • Adventures in Virtualizing a Linux/CentOS 5 server that has three RAID1 volumes
    04/04/2016 3:41PM

    This is here mostly so I have something to refer back to: 

    A server my buddy Duncan runs is getting long in the tooth, but we are not ready to retire it. It's a Linux CentOS 5 install running RAID1 across two physical drives.

    It seemed to make sense to turn it into a virtual machine running under VirtualBox.

    We rebooted the server using a Live CD distribution, connected a large USB 3 drive, and proceeded to copy each of the physical drives to files on the USB drive simply using dd. In our case the commands were:

    # dd if=/dev/sda of=/mnt/usbdrive/sda.dd bs=1M

    # dd if=/dev/sdb of=/mnt/usbdrive/sdb.dd bs=1M

    (Do not just blindly copy these commands. For your installation the parameters are likely to be very different.)
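    A dd run like this won't necessarily fail loudly on a short copy, so it's cheap insurance to verify each image against its source before moving on. A minimal sketch, assuming the same paths as above:

    ```shell
    # Paths from the dd commands above -- adjust for your system.
    SRC=/dev/sda
    IMG=/mnt/usbdrive/sda.dd

    # A truncated image shows up immediately as a size mismatch.
    src_bytes=$(blockdev --getsize64 "$SRC")
    img_bytes=$(stat -c%s "$IMG")
    if [ "$src_bytes" -ne "$img_bytes" ]; then
        echo "MISMATCH: source is $src_bytes bytes, image is $img_bytes" >&2
    fi

    # Checksums catch corruption a size check misses (slow on big drives).
    md5sum "$SRC" "$IMG"
    ```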

    I then went about trying to get these images to boot up under VirtualBox. 

    The first step is to use vboxmanage to convert the raw dump files into a format VirtualBox supports. The documentation seemed to imply the VDI format was the correct one to use, so after I plugged in the USB drive with the two large disk dumps, I ran:

    # vboxmanage convertfromraw /mnt/usbdrive/sda.dd sda.vdi --format=VDI 

    # vboxmanage convertfromraw /mnt/usbdrive/sdb.dd sdb.vdi --format=VDI

    It took quite a while but eventually I had two shiny new virtual drives that mirrored the physical drives on the old server. 

    Using the VirtualBox UI, I created a 32-bit Linux/RedHat server instance with 2048 MB of RAM and added the two drives to it.
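    The same setup can also be scripted; the VM name, OS type string, and IDE controller layout below are illustrative choices on my part, not necessarily what the UI produced:

    ```shell
    # Illustrative CLI equivalent of the UI steps; the name "oldserver"
    # and the IDE controller layout are assumptions.
    vboxmanage createvm --name oldserver --ostype RedHat --register
    vboxmanage modifyvm oldserver --memory 2048
    vboxmanage storagectl oldserver --name "IDE" --add ide
    vboxmanage storageattach oldserver --storagectl "IDE" \
        --port 0 --device 0 --type hdd --medium sda.vdi
    vboxmanage storageattach oldserver --storagectl "IDE" \
        --port 1 --device 0 --type hdd --medium sdb.vdi
    ```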

    I then tried to boot it and received an immediate "No Boot Device Found" error. 

    I thought it was just a matter of re-running grub-install in the image, but it turned out to be much more involved.

    Key points that I leave here mostly for my own reference:

    The CentOS 5 install DVD includes the familiar "linux rescue" mode. Interestingly enough, it refused to see the RAID array (i.e. cat /proc/mdstat showed nothing).

    It should also be noted that in CentOS 5 rescue mode, mdadm must refer to the arrays by UUID rather than by member device file name. That took quite a while to realize.
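    A rough sketch of what UUID-based assembly looks like; the UUID in the comment is made up for illustration:

    ```shell
    # Print the arrays recorded in the member superblocks, e.g.:
    #   ARRAY /dev/md2 level=raid1 num-devices=2 UUID=0a1b2c3d:4e5f6071:8293a4b5:c6d7e8f9
    mdadm --examine --scan

    # Pull the UUID out of that output and assemble by it rather than
    # by device file name:
    uuid=$(mdadm --examine --scan | sed -n 's/.*UUID=\([0-9a-f:]*\).*/\1/p' | head -n1)
    mdadm --assemble --scan --uuid="$uuid"
    ```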

    Interestingly, booting the CentOS Live DVD immediately saw the RAID array and mounted it under /mnt/lvm/VolGroup...

    But /boot was not mounted and I noticed that the RAID array for the filesystem partition (/dev/md2 in this case) was in degraded mode.

    I attempted to mount /boot (/dev/md0 in my case) under /boot and chroot to it to run grub-install /dev/sda to no avail.
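    Reconstructed roughly, that attempt under the Live DVD looked something like this (the mount point is a hypothetical placeholder, not the actual path):

    ```shell
    # Hypothetical mount point -- the Live DVD actually had the LVM root
    # mounted under /mnt/lvm/VolGroup... ; substitute the real path.
    ROOT=/mnt/sysroot

    mount /dev/md0 "$ROOT/boot"     # /boot lives on its own RAID1 array
    mount --bind /dev  "$ROOT/dev"
    mount --bind /proc "$ROOT/proc"
    chroot "$ROOT" grub-install /dev/sda
    ```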

    However, booting back into the linux rescue mode from the install DVD, the rescue environment suddenly did recognize and correctly mount the RAID array despite /dev/md2 being in degraded mode. I chrooted into the mounted filesystem, ran grub-install /dev/sda, and rebooted to try to fire it up. The filesystem spewed out countless errors.

    So I started over.  

    It was then I noticed, using vboxmanage showhdinfo, that the sdb.vdi file was truncated. Apparently the copy of the data from that drive had failed without my realizing it; the volume was missing a few gigs. sda.vdi was 76319 MB, but sdb.vdi was only a bit over 71000 MB.

    So I used vboxmanage modifyhd --resize 76319 sdb.vdi to make the two drives the same size and booted back into the Live DVD.

    I made sure none of the partitions were mounted, then used dd to copy the presumably good /dev/sda3 to /dev/sdb3 using

    # dd if=/dev/sda3 of=/dev/sdb3 bs=1M

    Once that completed, I attempted to add the failed partition back into the /dev/md2 RAID array using:

    # mdadm --re-add /dev/md2 /dev/sdb3

    Which, to my shock, worked.

    Running

    # mdadm -D /dev/md2

    I watched as the /dev/sdb3 partition was synchronized with /dev/sda3, and waited until that process finished and the array was back to normal.
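    The resync can be watched from another terminal; for example:

    ```shell
    # Overall progress, refreshed every few seconds:
    watch -n 5 cat /proc/mdstat

    # Or grep the detail view for the state and rebuild fields:
    mdadm -D /dev/md2 | grep -E 'State :|Rebuild Status'
    ```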

    Then I booted back into the rescue mode from the install DVD (since running grub-install with the live DVD was failing). This time I noticed it identified and correctly auto-mounted the entire array.

    I chrooted into the mounted filesystem and ran

    # grub-install /dev/sda

    It succeeded without error. Then I simply rebooted and the virtual machine came up without complaint.