If you are looking for the epic motorcycle journey blog that I've written, please see the Miles By Motorcycle site I put together. 
  • File Transfers without SSH
    11/16/2016 9:03PM
    Ian

    I had an interesting problem at work related to copying files from one Linux server to another.  The problem was this: I had login access to both servers and on each server I had sudo access to a service account, which does not have login access.  The files that I had to copy were owned by the service account and many of these were not readable by my own account.  Normally this wouldn't be a problem - I would sudo to the service account, tar up the files, copy that over using my login account, then untar it on the other side using the service account.  However, there were many gigs of data and not enough disk space on the server to create a tar file, even compressed.

    Netcat to the rescue!  On the receiving end, I started a netcat listener using the service account, piping the input to the tar extract command.

    nc -l 9999 | tar -xvf -

     On the sending side, I piped the output of tar's create command to the listener on the other server.

    tar c dir_to_copy | nc <receiver_ip_address> 9999

    On the receiving side, because of the 'v' (verbose) option to tar, the file and directory names go flying by as the data gets transferred.  When the transfer is complete, nc closes automatically on the receiving side.
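
    I didn't need it here, but if the network is slower than the CPU, the same trick should work with gzip compression in the pipe (a variation I haven't tested on these servers). On the receiver:

    nc -l 9999 | tar -xzvf -

    And on the sender:

    tar cz dir_to_copy | nc <receiver_ip_address> 9999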

    So that worked for the initial transfer, but I also wanted to be able to do an rsync-style update later to pick up any files that had been modified, without re-copying everything. Using a slightly trickier version of the commands above, I restarted the listener on the receiver and ran the following on the sender (find's -mtime 0 matches files modified within the last 24 hours):

    find dir_to_copy -mtime 0 -type f | tar c -T - | nc <receiver_ip_address> 9999

    I've also seen it recommended to use -print0 with the find and then --null in the tar command.

    find dir_to_copy -mtime 0 -type f -print0 | tar c --null -T - | nc <receiver_ip_address> 9999

    which is what I actually ran yesterday, but it seems to work both ways.  -print0 prints the full path of each file terminated by a NUL character instead of a newline, making the output look like one long string; the --null option tells tar to expect that delimiter.  This protects against file names that contain spaces or newlines.
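
    One caveat worth noting: nc gives you no integrity check beyond what TCP provides, so for anything important I would compare checksums on both sides after the transfer. A sketch (the manifest file name is arbitrary):

    find dir_to_copy -type f -print0 | xargs -0 md5sum | sort -k 2 > manifest.md5

    Run the same command on both servers, copy one manifest over (it's tiny, so any method works), and diff the two files; any difference points at a file that needs to be re-sent.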


  • Adventures in Virtualizing a Linux/CentOS 5 server that has three RAID1 volumes
    04/04/2016 3:41PM

    This is here mostly so I have something to refer back to: 

    A server my buddy Duncan runs is getting long in the tooth, but we are not ready to retire it. It's a CentOS 5 Linux install running RAID1 across two physical drives.

    It seemed to make sense to turn it into a virtual machine running under VirtualBox.

    We rebooted the server using a Live CD distribution, connected a large USB 3 drive, and proceeded to copy each of the physical drives to files on the USB drive simply using dd. In our case the commands were:

    # dd if=/dev/sda of=/mnt/usbdrive/sda.dd bs=1M

    # dd if=/dev/sdb of=/mnt/usbdrive/sdb.dd bs=1M

    (Do not just blindly copy these commands. For your installation the parameters are likely to be very different.)
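
    In hindsight (see the truncated image problem below), it would have been worth verifying each dump against its source drive before moving on. Something like this should do it - the two byte counts need to match exactly:

    # blockdev --getsize64 /dev/sda

    # ls -l /mnt/usbdrive/sda.dd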

    I then went about trying to get these images to boot up under VirtualBox. 

    The first step is to use vboxmanage to convert the raw dump files into a format that VirtualBox supports. The documentation seemed to imply that VDI was the correct format to use, so after I plugged in the USB drive with the two large disk dumps, I ran:

    # vboxmanage convertfromraw /mnt/usbdrive/sda.dd sda.vdi --format=VDI 

    # vboxmanage convertfromraw /mnt/usbdrive/sdb.dd sdb.vdi --format=VDI

    It took quite a while but eventually I had two shiny new virtual drives that mirrored the physical drives on the old server. 

    Using the VirtualBox UI I created a 32-bit Linux/Red Hat server instance with 2048 MB of RAM and added the two drives to it.

    I then tried to boot it and received an immediate "No Boot Device Found" error. 

    I thought it was just a matter of re-running grub-install in the image, but it turned out to be much more involved.

    Key points that I leave here mostly for my own reference:

    The CentOS 5 install DVD includes the familiar "linux rescue" mode. Interestingly enough, it refused to see the RAID array (i.e., cat /proc/mdstat produced nothing).

    It should also be noted that under rescue mode in CentOS 5, mdadm requires you to refer to the arrays by UUID rather than by device file name.  That took quite a while to realize.
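
    From memory, the incantation looked roughly like this - pull the array UUID off one of the member partitions, then assemble with it (md2 and the sdX3 partitions are specific to this box):

    # mdadm --examine /dev/sda3

    # mdadm --assemble /dev/md2 --uuid=<array_uuid> /dev/sda3 /dev/sdb3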

    Interestingly, when I booted the CentOS Live DVD, it immediately saw the RAID array and mounted it under /mnt/lvm/VolGroup...

    But /boot was not mounted and I noticed that the RAID array for the filesystem partition (/dev/md2 in this case) was in degraded mode.

    I attempted to mount /boot (/dev/md0 in my case), chroot into the filesystem, and run grub-install /dev/sda, to no avail.

    However, when I booted back into linux rescue mode from the install DVD, it suddenly recognized and correctly mounted the RAID array despite /dev/md2 being in degraded mode. I chrooted into the mounted filesystem, ran grub-install /dev/sda, and rebooted to fire it up. The filesystem spewed out countless errors.

    So I started over.  

    It was then I noticed, using vboxmanage showhdinfo, that the sdb.vdi file was truncated. Apparently the copy of the data from that drive had failed without my realizing it. The volume was missing a few gigs: sda.vdi was 76319 MB, but sdb.vdi was only a bit over 71000 MB.

    So I used vboxmanage modifyhd sdb.vdi --resize 76319 to make the two drives an identical size and booted back into the live DVD.

    I made sure none of the partitions were mounted, then used dd to copy the presumably good /dev/sda3 to /dev/sdb3:

    # dd if=/dev/sda3 of=/dev/sdb3 bs=1M

    Once that completed, I attempted to add the failed partition back into the /dev/md2 raid array using:

    # mdadm --re-add /dev/md2 /dev/sdb3

    Which, to my shock, worked.

    Using

    # mdadm -D /dev/md2

    I watched as the /dev/sdb3 partition was synchronized with /dev/sda3. I waited until that process was finished and the array was back to normal. 
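
    The rebuild progress also shows up in /proc/mdstat, which is easier to keep an eye on during a long resync (assuming the live environment has watch; otherwise just cat it periodically):

    # watch cat /proc/mdstat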

    Then I booted back into the rescue mode from the install DVD (since running grub-install with the live DVD was failing). This time I noticed it identified and correctly auto-mounted the entire array.

    I chrooted into the mounted filesystem and ran

    # grub-install /dev/sda

    It succeeded without error. Then I simply rebooted and the virtual machine came up without complaint.
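
    For my own future reference, the working sequence in rescue mode was roughly this, assuming the rescue environment mounts the found installation under /mnt/sysimage as the CentOS installer does:

    # chroot /mnt/sysimage

    # grub-install /dev/sda

    # exit

    # reboot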

  • Who'da thunk
    07/08/2015 11:10PM
    Ian

    As a fan of OpenBSD and open source in general, I find this most interesting.  Microsoft just became the first ever "Gold" donor to the OpenBSD Foundation.  Of all the companies that use OpenSSH - Google and Facebook, Oracle, IBM, and HP, Red Hat and Cisco - not one has ever contributed as much to its support as Microsoft just did.  Really, we're talking peanuts here: Microsoft is the first to donate more than $25,000, and Google and Facebook come in in the $10,000 to $25,000 range.  You think the open source community supports itself?  Come on, Google...

    It's kinda lame to post Slashdot links, but this is a good one.

    http://undeadly.org/cgi?action=article&sid=20150708134520

    http://www.openbsdfoundation.org/contributors.html

    http://bsd.slashdot.org/story/15/07/08/2220235/microsoft-thanked-for-its-significant-financial-donation-to-openbsd-foundation

  • Ubuntu 14.04 LTS nodejs npm install generates sudo unable to resolve host error
    12/21/2014 12:38PM

    In attempting to install grunt under Ubuntu 14.04 I kept getting the infamous error:

    grunt@0.4.5 /home/yml/sudo: unable to resolve host cylon 

    However, in my case I had resolved the sudo error some time ago by setting /etc/hosts and /etc/hostname correctly. It turns out that while installing Titanium Appcelerator, /etc/npmrc had been updated with garbage entries: directory names containing the full text of the sudo error.

    Setting prefix="" in /etc/npmrc resolved the problem for me and I was able to install grunt and other node packages. 
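
    If you suspect the same problem, the damage is easy to spot without reading the whole file (assuming npm's config lives in /etc/npmrc, as it did on my machine):

    npm config get prefix

    grep prefix /etc/npmrc

    If the prefix comes back looking like an error message rather than a directory, you've found it.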

  • Lubuntu disregarding disabling light locker screen locker settings
    11/01/2014 2:27PM

    It turns out the system-wide light-locker setting (the screen locker password prompt after inactivity) incorrectly overrides the per-user one.

    Move the system wide one to a backup file:

    sudo mv /etc/xdg/autostart/light-locker.desktop /etc/xdg/autostart/light-locker.desktop.bak

    Then make sure the Exec= line is empty in:

    ~/.config/autostart/light-locker.desktop
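
    For reference, the per-user file ended up looking roughly like this - the empty Exec= line is the part that matters, the rest is ordinary .desktop boilerplate:

    [Desktop Entry]
    Type=Application
    Name=Light Locker
    Exec=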

    See this question: http://askubuntu.com/questions/502942/lubuntu-enforces-screen-lock