If you are looking for the epic motorcycle journey blog that I've written, please see the Miles By Motorcycle site I put together. 
  • Adventures in Virtualizing a Linux/CentOS 5 server that has three RAID1 volumes
    04/04/2016 3:41PM

    This is here mostly so I have something to refer back to: 

    A server my buddy Duncan runs is growing long in the tooth, but we are not ready to retire it. It's a CentOS 5 Linux install running RAID1 across two physical drives.

    It seemed to make sense to turn it into a virtual machine running under VirtualBox.

    We rebooted the server using a Live CD distribution, connected a large USB 3 drive, and proceeded to copy each of the physical drives to files on the USB drive simply using dd. In our case the commands were:

    # dd if=/dev/sda of=/mnt/usbdrive/sda.dd bs=1M

    # dd if=/dev/sdb of=/mnt/usbdrive/sdb.dd bs=1M

    (Do not just blindly copy these commands. For your installation the parameters are likely to be very different.)
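
    In hindsight, it would also have been worth verifying the copies before moving on, since (as described below) one of mine turned out to be truncated. A quick sanity check, assuming the same device and mount point names as above, is to compare the device size against the image size, or to checksum both if you can spare the hours:

    # blockdev --getsize64 /dev/sda
    # ls -l /mnt/usbdrive/sda.dd
    # sha1sum /dev/sda /mnt/usbdrive/sda.dd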

    I then went about trying to get these images to boot up under VirtualBox. 

    The first step is to use vboxmanage to convert the raw dump files into a format VirtualBox supports. The documentation seemed to imply that VDI was the correct format to use, so after I plugged in the USB drive with the two large disk dumps, I ran:

    # vboxmanage convertfromraw /mnt/usbdrive/sda.dd sda.vdi --format=VDI 

    # vboxmanage convertfromraw /mnt/usbdrive/sdb.dd sdb.vdi --format=VDI

    It took quite a while but eventually I had two shiny new virtual drives that mirrored the physical drives on the old server. 
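
    In hindsight, this would also have been the moment to check the converted images with vboxmanage showhdinfo, which would have caught the problem I only discovered much later:

    # vboxmanage showhdinfo sda.vdi
    # vboxmanage showhdinfo sdb.vdi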

    Using the VirtualBox UI, I created a 32-bit Linux/Red Hat server instance with 2048 MB of RAM and attached the two drives to it.
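
    For reference, roughly the same setup can be scripted with vboxmanage instead of the UI. This is only a sketch; the VM name "centos5" and the controller name "SATA" are placeholders of my own choosing:

    # vboxmanage createvm --name centos5 --ostype RedHat --register
    # vboxmanage modifyvm centos5 --memory 2048
    # vboxmanage storagectl centos5 --name SATA --add sata
    # vboxmanage storageattach centos5 --storagectl SATA --port 0 --device 0 --type hdd --medium sda.vdi
    # vboxmanage storageattach centos5 --storagectl SATA --port 1 --device 0 --type hdd --medium sdb.vdi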

    I then tried to boot it and received an immediate "No Boot Device Found" error. 

    I thought it would just be a matter of re-running grub-install in the image, but it turned out to be much more involved.

    Key points that I leave here mostly for my own reference:

    The CentOS 5 install DVD includes the familiar "linux rescue" mode. Interestingly enough, it refused to see the RAID array (cat /proc/mdstat produced nothing).

    It should also be noted that in CentOS 5 rescue mode, mdadm must refer to the drives by UUID rather than by device file name. That took quite a while to realize.
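
    For the record, the UUID of an array can be read off any member partition with mdadm --examine, and the array can then be assembled by that UUID. The device names here are just examples:

    # mdadm --examine /dev/sda1 | grep UUID
    # mdadm --assemble /dev/md0 --uuid=<UUID from the previous command>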

    Interestingly, the CentOS Live DVD immediately saw the RAID array on boot and mounted it under /mnt/lvm/VolGroup...

    But /boot was not mounted, and I noticed that the RAID array for the filesystem partition (/dev/md2 in this case) was running in degraded mode.

    I attempted to mount /boot (/dev/md0 in my case), chroot into the system, and run grub-install /dev/sda, to no avail.
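
    For reference, the manual mount-and-chroot sequence from a live environment looks roughly like this. Note that /dev/VolGroup00/LogVol00 is an assumption on my part (the stock CentOS name for a root logical volume living inside the /dev/md2 PV), and /mnt/sysimage is just a convenient mount point:

    # vgchange -ay
    # mkdir -p /mnt/sysimage
    # mount /dev/VolGroup00/LogVol00 /mnt/sysimage
    # mount /dev/md0 /mnt/sysimage/boot
    # mount --bind /dev /mnt/sysimage/dev
    # mount --bind /proc /mnt/sysimage/proc
    # chroot /mnt/sysimage
    # grub-install /dev/sda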

    However, booting back into the linux rescue mode from the install DVD, it suddenly recognized and correctly mounted the RAID array despite /dev/md2 being in degraded mode. I chrooted into the mounted filesystem, ran grub-install /dev/sda, and rebooted to fire it up. The filesystem spewed out countless errors.

    So I started over.  

    It was then I noticed, using vboxmanage showhdinfo, that the sdb.vdi file was truncated. Apparently the copy of the data from that drive had failed without me realizing it. The volume was missing a few gigs: sda.vdi was 76319 MB but sdb.vdi was only a bit over 71000 MB.

    So I ran vboxmanage modifyhd sdb.vdi --resize 76319 to make the two drives identical in size and booted back into the Live DVD.

    I made sure none of the partitions were mounted, then used dd to copy the presumably good /dev/sda3 over the truncated /dev/sdb3:

    # dd if=/dev/sda3 of=/dev/sdb3 bs=1M

    Once that completed, I attempted to add the failed partition back into the /dev/md2 RAID array using:

    # mdadm --re-add /dev/md2 /dev/sdb3

    Which, to my shock, worked.
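
    Had --re-add refused, my understanding is the partition could still have been added as a brand new member, at the cost of a full rebuild instead of an incremental resync:

    # mdadm --add /dev/md2 /dev/sdb3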

    Using

    # mdadm -D /dev/md2

    I watched as the /dev/sdb3 partition was synchronized with /dev/sda3. I waited until that process was finished and the array was back to normal. 
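
    The resync progress can also be followed straight from /proc/mdstat, assuming watch is available in the environment:

    # watch cat /proc/mdstat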

    Then I booted back into the rescue mode from the install DVD (since running grub-install with the live DVD was failing). This time I noticed it identified and correctly auto-mounted the entire array.

    I chrooted into the mounted filesystem and ran

    # grub-install /dev/sda

    It succeeded without error. Then I simply rebooted and the virtual machine came up without complaint.

  • Who'da thunk
    07/08/2015 11:10PM
    Ian

    As a fan of OpenBSD and open source in general, I find this most interesting. Microsoft just became the first-ever "Gold" donor to the OpenBSD Foundation. Of every company that uses OpenSSH - from Google and Facebook to Oracle, IBM, and HP to Red Hat and Cisco - none has ever contributed to the support of OpenSSH as much as Microsoft now has. And really, we're talking peanuts here. Microsoft is the first to donate more than $25,000; Google and Facebook land in the $10,000 to $25,000 range. You think the open source community supports itself? Come on, Google...

    It's kinda lame to post Slashdot links, but this is a good one.

    http://undeadly.org/cgi?action=article&sid=20150708134520

    http://www.openbsdfoundation.org/contributors.html

    http://bsd.slashdot.org/story/15/07/08/2220235/microsoft-thanked-for-its-significant-financial-donation-to-openbsd-foundation

  • Ubuntu 14.04 LTS nodejs npm install generates sudo unable to resolve host error
    12/21/2014 12:38PM

    In attempting to install grunt under Ubuntu 14.04, I kept getting the infamous error:

    grunt@0.4.5 /home/yml/sudo: unable to resolve host cylon 

    However, in my case I had resolved the sudo error itself some time ago by setting /etc/hosts and /etc/hostname correctly. It turned out that while installing Titanium Appcelerator, /etc/npmrc had been updated with trash entries: directory names that were the full text of the sudo error.

    Setting prefix="" in /etc/npmrc resolved the problem for me and I was able to install grunt and other node packages. 
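
    To track down this sort of thing, it helps to see what npm actually thinks its configuration is and to look at the config files directly:

    npm config get prefix
    npm config list
    cat /etc/npmrc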

  • Lubuntu disregarding disabled light-locker screen locker settings
    11/01/2014 2:27PM

    It turns out the system-wide light-locker setting (the screen locker password prompt after inactivity) incorrectly overrides the per-user one.

    Move the system wide one to a backup file:

    sudo mv /etc/xdg/autostart/light-locker.desktop /etc/xdg/autostart/light-locker.desktop.bak

    Then make sure the Exec= line is empty in:

    ~/.config/autostart/light-locker.desktop
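
    After the edit, the relevant part of that file looks something like this (the real file carries more keys, which can be left as they are; the point is only that Exec= is empty):

    [Desktop Entry]
    Exec=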

    See this question: http://askubuntu.com/questions/502942/lubuntu-enforces-screen-lock 

  • Getting WiFi wireless to work under Lubuntu 14.04 on a Thinkpad X100E
    10/30/2014 1:02PM

    This represented over an hour of my life.

    After upgrading to Lubuntu 14.04 LTS on my Lenovo Thinkpad X100E, I was unable to enable the wifi adapter. 

    In a terminal window, "rfkill list" showed that wlan0 was soft-blocked. Unblocking it with rfkill did not solve the problem, as the UI continued to show wifi as disabled.
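
    For reference, the rfkill incantations in question, which cleared the soft block but did not convince the UI:

    rfkill list
    sudo rfkill unblock wifi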

    Clicking on nm-applet would continue to show "Wifi disabled". 

    After over an hour of searching around, the solution presented itself.

    Right click on nm-applet and click the "Enable Wifi" item on /that/ menu. If you left click, "Enable Wifi" is greyed out; if you right click, a different menu is presented with a working "Enable Wifi" item.