Automated Backups of OPNsense

January 31st, 2017

I use rsnapshot, on Centos 7, to manage the vast majority of my backups across a myriad of Linux servers, both within my network and across the globe. I’ve never blogged about the entirety of that process, but I wanted to post a quick note about how I use rsnapshot to also back up the configuration of my router.

Until recently, I had been using this process to back up my pfSense routers. With my switch to OPNsense, I was pleased to see the process is the same.

Basically, we just need to make a copy of a single folder on the machine, ‘/conf’, which is located right off of the root on both pfSense and OPNsense.

Step one is to make sure you have a user set up on the router that can connect over SSH. For OPNsense, log into the web portal and navigate to System -> Settings -> Administration. Check the boxes to ‘Enable Secure Shell’ and ‘Permit Password login’. (We’ll disable password logins in a moment.)

Next, go to System -> Access -> Users and create a new user that you will use for SSH access. Once created, you then need to give the user the ‘effective permission’ of ‘System: Shell Account Access’, which is just below the ‘Group Membership’ section of the user’s account details.

Now, to allow rsnapshot to log in as the user without prompting for a password, you need to copy the public SSH key from the machine and user that runs rsnapshot over to the router. I prefer to use the ‘ssh-copy-id’ tool for that. On your rsnapshot server, logged in as the user who runs rsnapshot, run ‘ssh-copy-id username@router_ip’. Enter your password and it will copy the required keys. The username is the one you created in the previous step on the router, and the IP address is the router’s. I don’t allow external SSH access to my router, so I use the internal IP.

You can verify that ssh-copy-id worked by trying to SSH into the router: ‘ssh username@router_ip’. It should take a moment, and then drop you at a terminal prompt on the router without asking for a password. You can now take a look around and see what the system has to offer. You might ‘cd /conf’ and then ‘ls’ to see the config file and its history.

You can now disable ‘Permit Password Login’, since the user required for running the backup is set up to log in without interactively entering a password.

With that part done, you now need only update your rsnapshot.conf file. Don’t forget that the file uses tabs to delineate options, not spaces or commas.

I added a line to my rsnapshot.conf that says the following:

backup username@router_ip:/conf/ WDR-Router

The ‘WDR-Router’ piece at the end is the folder the resulting files are kept in within the rsnapshot folder structure on my server.
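For context, the relevant part of my rsnapshot.conf ends up looking roughly like this. The snapshot_root and retain lines are illustrative values rather than my exact config, and remember that every field separator must be a literal tab:

```
# /etc/rsnapshot.conf (fragment) — fields are separated by TABs, not spaces
snapshot_root	/backups/rsnapshot/
retain	daily	7
retain	weekly	4

# Pull the router's /conf folder into <snapshot_root>/<interval>/WDR-Router
backup	username@router_ip:/conf/	WDR-Router
```

With that in place, each rsnapshot run pulls the router’s config over SSH along with everything else it backs up.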

With that done, your router config files will be included in your backup schedule. Should something happen to your router, like a hardware failure or a bad config being pushed out, you can quickly recover.

Worst case scenario, you can set up a new, clean OPNsense install and load your most recent config.xml file through the web portal via System -> Configuration -> Backups.

Good luck!

Cleaning up old Logs on Centos 7

January 16th, 2017

As often happens with computers of all types, log files build up over time. Generally speaking, the operating system will rotate these logs, which means it breaks them up into chunks, but it isn’t usually set to remove the old chunks. Over time, those log files can start to add up.

I am giving some thought to changing the provider of the VPS that hosts this web page, since its performance is inconsistent and it isn’t very reliable, and I was curious how much disk space I needed for the system to run well. After doing some light clean-up on the system, I did a check to see where on the disk I was using the most space.

Using the following command, I did a search to find out which folders contained the most data:

du -a / | sort -n -r | head -n 10

This command reported that the folders using the most storage on my system were the folders containing log files, ‘/var/log/’. I cleaned a few of them up by hand, since I like to hold onto certain items longer than others, but one folder in particular, /var/log/journal/, contained several GBs of data. This is the folder that holds the log files systemd creates, and it contains basically every log entry for the system since its initial boot. I considered cleaning it up manually, but I wanted to know if I could keep this data in check going forward, which would help me run the webserver on a smaller disk. What I discovered was a simple command to clear out logs older than a certain date, or larger than a certain size.

journalctl --vacuum-time=14d
journalctl --vacuum-size=1G

The above commands look through all of the journal files and purge the oldest ones: either those older than the time given, or enough of them to bring the total under the size limit.

I ran the command above and told it to keep the most recent 14 days worth of logs, which deleted 5.9GB of old log data on my system. I then added the command to crontab so it would maintain the logs at that level:

2 1 * * * root journalctl --vacuum-time=14d

This command runs the vacuum command each day at 2 minutes past 1AM.
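An alternative I didn’t use here, but which achieves the same cap without a cron job, is to let journald enforce the limit itself. Setting SystemMaxUse in its config file (the 1G below is just an example value) makes journald rotate and delete old journal files on its own:

```
# /etc/systemd/journald.conf (fragment)
[Journal]
SystemMaxUse=1G
```

After editing the file, restart journald (systemctl restart systemd-journald) for the change to take effect.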

My server now uses 2GB of storage, instead of 8GB. Not a bad result…

Interesting Failure Modes – SD Card Arching

August 22nd, 2016

I’ve been using Raspberry Pi computers for several years. A few months ago, the Raspberry Pi B I had hooked up to my TV, running OpenElec, stopped working. The Pi was locked up and then wouldn’t reboot when I pulled the power cord.

It’s an original B model, with just two USB ports, so I didn’t think much of it. I prepped and put in place a replacement Pi, also running OpenElec, and threw the old Pi into a drawer for further inspection… later. I got 4 years of faithful service from a $35 computer, so I wasn’t complaining.

Some weeks later, I had an idea that I wanted to use a Raspberry Pi for so I grabbed the questionable unit and got to work.

My standard way to prep a Pi is to use raspbian-ua-netinst. It doesn’t require a huge download, and it doesn’t automatically install a GUI and a bunch of software I don’t want.

I knew that this Pi, and this same SD card, were from the failed system so I started by running some tests on the SD card before attempting the install. I didn’t have any trouble with the SD card, so I proceeded. Much to my surprise, the Pi booted the new SD card and completed its installation without any issue.

Bit of a head scratcher but it was working.

Since then the Raspberry Pi has been handling duties as a simple Stratum 2 NTP server (More on that in a later post…). It’s not seeing heavy load but it’s been working fine.

This morning, while logged into the Pi and monitoring NTP, the system just stopped being able to see the SD card. Since I was already logged in via SSH, my session stayed active, but I couldn’t run any programs and my simple NTP-watching command (watch -n 1 ntpq -p) started to return a blank screen. I quit watch and tried ‘ls’ and ‘cd’, but the system just said command not found. I wasn’t even able to shut down normally, since the shutdown program wasn’t available either.

I pulled the SD from the Pi and plugged it into another SD card reader on a different system. Everything looked Ok and I was able to run fsck without any errors. I plugged the SD card back into the Pi and it didn’t even try to boot.

I went through a couple other little tasks, extra fsck runs, etc., and then, while holding the SD card, I noticed that the interface side was slightly arched. When placed on a flat surface, contacts down, the card was raised in the middle. Obviously, this minor elevation change had stopped the contacts from making a good connection to the Pi.

I inserted the SD card back into the Pi, and while holding the center of the card down flat against the contacts, I applied power and was greeted with a normally booting system.


I pulled the card out and flexed it, just a little, a few times until it sat more flat and the Pi is now working again normally.

The Raspberry Pi is working great, but the SD card has started to lose its shape after 4 years of near-constant use. I’ve been toying with the idea of using an old 1U server case to rack-mount a couple of Raspberry Pis, using the SD card just for booting and an external USB disk to host the OS. That idea is starting to make even more sense to me now. That being said, I suspect the SD card arching issue would be less likely to occur on the newer Pi models, which use MicroSD cards instead. They aren’t made from multiple pieces of plastic clipped together, like most full-size SD cards.

Just an interesting failure mode that I thought ought to be shared…


How To: A Clean install of Windows 7 in 2016

April 25th, 2016

Maybe I’m alone here, but I still have occasion to install Windows 7 from time to time. For a while, that was no big deal: install the OS, update, update, update and you’re good to go. Lately though, the process has become far more cumbersome. Not only does it take ages to complete, but when you’re done it’s always nagging you to update to Windows 10. What follows is my process for getting this done without too much hassle.

Firstly, I’ve not created anything new here. I’ve simply found a series of tools and bits of information online that helped me along the way, so I’m compiling it here for my future benefit.

Step 1 – Get the OS installed. After that, if your copy of Windows 7 doesn’t include Service Pack 1, download it from Microsoft and install it, then reboot. Service packs include a large number of updates, so it’s much faster to install the service pack first thing after the install than it is to have Windows Update find and download it.

Step 2 – There is a known issue with Windows 7 that causes the search for updates to take hours, sometimes days. You have the option of opening ‘Windows Update’ and telling it to check for updates manually, but I find it far easier to let the system check for updates on its own and then leave the machine alone for 24 hours. It seems a bizarre step to take, and I often do waste a day waiting for a manual check to complete before seeing sense, rebooting, and leaving it alone. In my experience, doing the manual check in a VM takes far longer than on physical hardware, regardless of the specs of the two machines.

Step 3 – Log in the next day and open Windows Update. It should have hundreds of updates ready for you to install. The next wrinkle I run into is that clicking ‘Install Updates’ sits there and thinks forever without actually doing anything. It just says ‘Preparing to install updates’. If this happens, use the Windows 7 shutdown option that includes installing updates. For whatever reason, this seems to work.

Step 4 – After the shutdown, power the computer back on and do a check for updates. At this point a manual check usually completes in a reasonable amount of time and you’re able to install the updates as normal. It’s also around this time that the Windows 10 upgrade nagging begins. Push forward though and install all of the available updates. The only thing I do at this point is ‘hide’ all of the language pack updates. Click on the top one and then shift-click the bottom one to select them all. Right click any one of them and select ‘Hide Update’. They will grey out and won’t appear at your next check.

Step 5 – Time to clean up. Check again for updates and make sure you got them all. If not, install whatever is available. When you’re sure all updates have been installed, head over to this link (Sorry for the language…), and save the .bat file somewhere on your machine.

Be careful here. Running scripts from the web is pretty dangerous. Take a few minutes to read over it and make sure you understand what it’s doing. The script should basically be a series of wmic and wusa commands to remove updates, identified by their KB numbers. It will print out information about each update it removes as it goes along.
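If you’re not familiar with those tools, the removal lines in such a script generally look something like the following. This is just an illustrative sketch, not a line from the actual script; the KB number here is the Windows 10 upgrade nag update discussed further down.

```batch
:: List installed hotfixes so you can see what's present.
wmic qfe get HotFixID

:: Silently uninstall one update by KB number, without forcing a reboot.
wusa /uninstall /kb:3035583 /quiet /norestart
```

The real script simply repeats that wusa pattern for each KB it targets.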

The batch file is a script someone else wrote that will uninstall all of the Windows 10 nagging updates, as well as the new telemetry ‘features’ added to Windows 7, similar to the tracking features built into Windows 10. I didn’t write the script and I don’t have any control over it…

Right click on the script and run it as an admin. Read what it says and then let it do its thing. It will pause when it’s complete. When it’s done, reboot.

Alternatively, if you don’t mind the new tracking features in Windows 7 and only want to get rid of the Windows 10 update nag, you can remove KB3035583, hide it from Windows Update and skip all of the script related procedures. Updates can be manually removed from the ‘Programs and Features’ section of Control Panel. Click ‘View Installed Updates’ on the left hand side, find the update that says ‘KB3035583’ and uninstall it. After it’s removed, reboot, check for updates and then hide the KB3035583 update in Windows Update.

Step 6 – Check for updates! Since we just uninstalled some Windows updates, there should be ‘new’ updates listed for installation. Obviously, we don’t want to install them so we’ll need to tell Windows to hide them. After the update search is complete, compare the updates listed for installation against the updates removed in the script.

Right click the script and choose ‘Edit’. This will open it in Notepad and you can compare the KB numbers. You’ll want to ‘hide’ any available updates that are listed in the script. Right click the update and select ‘Hide Update’. This will grey it out and it won’t show up when you search for updates next time.

Ideally, once you’re done hiding the updates corresponding to the script, none should be left for installation. If some updates remain, get them installed and reboot.

Step 7 – Run the script one more time. This time, confirm that after each item it says ‘No Instance(s) Available’. This means it wasn’t able to remove the update, because it wasn’t installed. When each update shows that, you’re good to go. You won’t be nagged about Windows 10, and you won’t have any of the new tracking ‘features’. You’ll (finally) be ready to set your machine up for whatever task you needed.

Enjoy Windows 7!


Install Centos 6 on a Non-PAE Machine

March 12th, 2015

I have a Thecus N5200 that was modified to have a VGA port. Though the machine will run a variety of current Linux distributions, I wanted it to run Centos 6. Unfortunately, the N5200 doesn’t support PAE, which Centos 6 requires.

The first major problem is that a non-PAE machine won’t even boot the Centos installer CD/DVD, so you have to find some way around that. There are several ways to do it, but they are all quite complex and time-consuming, and as time goes on they work less and less; the old software needed is harder and harder to find. I instead chose to simply install Stella. It’s a desktop-focused Linux distribution that is based on Centos 6, and the 32bit version includes a non-PAE kernel. If you’re looking to install a Centos 6 desktop, install Stella and you’re all set. It’s great. The developer did a great job keeping things compatible with Centos 6 while also adding in things like video codecs and the like.

For my little project with the N5200 though, I wanted to go a little further: install Stella to get around the non-PAE requirement, and then convert it back to a standard Centos 6 installation… or close to it.

I booted the Stella DVD and installed it to disk. Once I booted into it from my hard drive I set about searching for and removing packages not part of the normal Centos install.

The first thing I did was search for installed packages from repos outside of a normal Centos installation. I used this command initially

yum list installed

to get a look at everything installed and where it was installed from. From there I started to use grep as well to filter things down.

yum list installed|grep dex

Many of the packages were from a repo that included the word ‘dextop’ so I filtered the list to show me those by piping the output to ‘grep dex’.
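If you want just the package names out of that listing, you can cut the first column with awk. I’m demonstrating against canned sample output here; in practice you would pipe ‘yum list installed’ straight into the awk filter. One real caveat: yum wraps long lines in its output, which can break simple column parsing, so treat this as a rough sketch.

```shell
# Filter a 'yum list installed' style listing down to the names of packages
# that came from a repo matching 'dextop' (third column), printing column one.
printf '%s\n' \
  'a52dec.i686        0.7.4-8.el6     @nux-dextop' \
  'bash.x86_64        4.1.2-15.el6    @base' \
  'ffmpeg-libs.i686   0.10.2-1.el6    @nux-dextop' \
| awk '$3 ~ /dextop/ {print $1}'
# prints:
# a52dec.i686
# ffmpeg-libs.i686
```

The resulting names can be fed straight to ‘yum remove’.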

With a list of packages in hand, I used the ‘yum remove’ command to remove them:

yum remove a52dec amrwb b43-firmware bluecurve-metacity-theme faac ffmpeg-libs gnome-web-photo goocanvas faad2-libs gstreamer-plugins-bad gtkimageview lame-libs libdca libdvdcss libfribidi libmad libmimic libmms libmpeg2 libsidplay mjpegtools-libs nux-dextop-release opencore-amr opencore-amr perl-Gnome2-Canvas perl-Gnome2-VFS perl-Gnome2-Wnck perl-Gtk2-Unique perl-Proc-Simple perl-X11-Protocol-Other twolame-libs unrar flash-plugin xulrunner19 xvidcore

(If you copy that command, be sure to remove any line breaks your browser may have put in.)

From there I used the yum list command again to search for ‘stella’. It turned up a package called ‘stella-release’. The _____-release package tells the system the version and name of the OS. If you remove it, as I did, it will take a lot of packages with it.

yum remove stella-release

Now.. don’t reboot!

The next step is to install a new _____-release package. In this case, I needed to install the Centos 6.6 release package, which I was able to find in the Centos mirrors. Just find the package and use yum to install it. The file location/name may change slightly over time, but I used this command:

yum install

If yum complains and won’t run, download the package and install it using the rpm command:

wget && rpm -i centos-release-6-6.el6.centos.12.2.i686.rpm

I then also installed the EPEL package, which I almost always use with Centos. (Again, the link may be slightly different for you.)

yum install

From here, you need to start rebuilding the OS. Some of the work we’ve done will have broken the system, so before a reboot we need to replace many of those packages with official Centos packages or, in most cases, the same packages we just removed but without the stella-release dependency.

I wanted to make sure I kept the Gnome 2 GUI so I used the yum groupinstall command to rebuild most of those packages.

yum groupinstall "Desktop" "Desktop Platform" "X Window System" "Fonts" "Internet Browser"

From here on, the system is basically a clean Centos 6 install, but with Stella branding and their non-PAE kernel. The kernel I want to keep, but if possible I’d like to remove the branding. This part can be a bit messy: if you try to pull the branding with yum, you basically uninstall the whole OS.

You have to remove the branding packages without checking for dependencies, as yum would, and then replace those Stella packages with the regular Centos ones.

Firstly, remove the stella branding:

rpm -e --nodeps stella-logos && rpm -e --nodeps stella-bookmarks && rpm -e --nodeps stella-indexhtml

Next, add back in the Centos versions of those packages:

yum install redhat-logos centos-indexhtml redhat-bookmarks

Yum will complain that the rpmdb has been edited and there are missing packages, etc. But we know that already, and running that command will fix the problems.

Following that, it’s probably a good idea to run a ‘yum check’ to be sure everything is resolved. It will take a look at your rpmdb and let you know if it sees any problems with unresolved dependencies, etc. It took forever to run on my N5200, but it all came back clear.

If, like me, you’ve gone this route to get around the PAE issue in Centos you’re going to want to update your yum.conf file to tell the system not to install any kernel updates. Any kernel you get from Centos will not work on your machine.

vi /etc/yum.conf

Just find a spot under the [main] header and add the line:

exclude=kernel*

This will just exclude kernel updates. It’s not ideal for security but, at least for me, it’s worth it to have a working Centos 6 system.

Now, we’re very close. openssh-server was removed along the way and I wanted it back, so I reinstalled it and set it to run on boot.

yum install openssh-server && chkconfig --add sshd

Next, if you want to boot into the GUI, instead of to the terminal, the inittab file needs to be updated.

vi /etc/inittab

The final line says ‘id:3:initdefault:’, but to boot to a GUI it needs to be ‘id:5:initdefault:’ instead. Update that 3 to a 5 and save the file.
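If you’d rather script that edit, the sed equivalent is below. I’m demonstrating it against a scratch copy of the relevant line; on the real system the target file is /etc/inittab.

```shell
# Make a demo file containing the relevant line, then flip runlevel 3 -> 5.
printf 'id:3:initdefault:\n' > /tmp/inittab.demo
sed -i 's/^id:3:initdefault:$/id:5:initdefault:/' /tmp/inittab.demo
cat /tmp/inittab.demo   # now reads: id:5:initdefault:
```

The same sed expression, pointed at /etc/inittab, makes the change in place.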

The very final detail would be to look around for unneeded but still installed packages. You can do a quick search for unneeded libraries:

package-cleanup --leaves

And then uninstall those rpms, which serve no purpose.

I’m sure there are other remnants of Stella lurking around the system but these steps cleaned up my install and brought it back to a pretty good state. I’m happy with it and it’s been very stable.

Sure, I could have just kept the normal Stella install but where’s the fun in that?

Thecus N5200 as a Server

March 12th, 2015

Back in 2007, I bought a Thecus N5200 to use as bulk storage on my network. I’ve spent many years using and fighting with the unit, but it wasn’t until my storage needs outgrew the 5 SATA disks it could hold that I considered sending it off to be recycled.

It’s a pretty basic unit. From Thecus it arrived with 5 hot-swap SATA disk trays and a simple web-based management interface. The interface wasn’t great at telling you exactly what the state of your disks was, so I lost my data with this unit more than once. Over time though, Thecus added a method to install add-ons, and little by little people started to write add-ons for the unit and actually increased its feature set. You could get an add-on to access the unit over SSH, or to set up rsync file copies to a second location. Over time, it became very useful and it’s remained in almost constant use.

Recently though, I needed 6 disks to hold my data and it was time to retire the N5200. On a whim, I did a quick Google search to see if anyone had had any success ‘hacking’ the unit and doing something neat with it. Several people had soldered a VGA connector to the motherboard. The back of the unit is basically a metal plate with covered-up ports for things like PS/2 keyboard and mouse ports and a VGA port. As it happens, the motherboard inside the unit has all of the components needed to use these devices, except the actual connectors. I harvested a VGA connector from an old, dead graphics card and soldered it to the motherboard of the N5200. (Well, truth be told, I had a friend do the soldering. He was working on a pile of other soldering that day and it was just easier than getting my own stuff set up…)

Much to my amazement, when I connected a display to the unit I was immediately greeted with the BIOS boot up process. From here though, things got a bit more complicated.

The graphics chip on the system is, as you would expect, pretty basic. Beyond that, the unit supports a maximum of 1GB of memory and has an Intel Celeron M processor running at 600MHz. This is not a speed machine. It’s well equipped to handle the tasks it was designed for, but it will struggle to be much use as a desktop.

That being said, with the right software and the right tasks, the unit has life in it. Full-screen video and Flash are out, but it’s a fine email/web-surfing machine, and the right display manager works quite well. I’ve had very good success with both Gnome 2 and MATE.

Installing an OS is a bit tricky. The unit will boot from a USB device (a thumb drive or CD-ROM) and it will also boot from its internal IDE interface. It uses a mini-IDE port that’s a bit hard to get to. It’s workable, but getting the right length of cable and finding a place to store the disk is tough.

Instead, I use a combination of a thumb drive and a SATA disk. When installing Linux, I format and use a thumb drive for the /boot partition and the boot loader, but I use one of the 5 SATA trays for the root partition, swap, and anything else needed. It works well for me, but it can be a little complex to set up.

I’ve had good success installing current versions of Fedora (to date, up to version 21 works fine) as well as Debian. The only limitation this machine has in terms of what Linux distro you install is the processor’s lack of PAE support. PAE, or Physical Address Extension, is a way for 32bit processors to address RAM beyond 4GB (It’s more complicated than that…). When this machine was built though, there was no reason for Intel to add PAE support to a low power processor designed for mobile devices where speed was not top priority.

My goal for this machine is to install Centos 6, which I think is ideally suited to its hardware and use case. Sadly, Centos 6 requires PAE support, so a workaround needed to be devised. I’ll post about that in the coming days/weeks. Here’s that post: Install Centos 6 on a Non-PAE Machine

So, how do I use this slow and limited machine? With 4 free hot-swap SATA trays, I use it for quite a lot of disk testing and verification. Linux has great tools for checking a disk’s health and, when it’s time for a disk to be recycled, wiping it securely. Running 4 disks at once is well within what the unit can handle.

Further, the machine has two working NICs, and I often use it as a go-between when I’m testing and setting up routers or switches. I can easily connect the device to the N5200 and do whatever tasks I need without having to take something offline. It’s also very useful when you need to put something online while still keeping it separated from your main network. I can sit in between the problem device and the internet, watching the data go by and keeping an eye on things.

Add to that mix light email work, or RSS feeds and it’s a pretty reliable unit to handle tasks you need to offload from another machine for whatever reason.

The Thecus N5200 is small, quiet and more useful the older it gets… or so it would seem.


Portable Pi – Raspberry Pi with a Motorola Lapdock 100

December 17th, 2014

Having read countless accounts online from people who have taken various lapdocks and paired them with a Raspberry Pi, I thought I would give it a try myself.

I went to eBay and purchased a Motorola Lapdock 100. It’s a small unit with a cable that comes out the back. This cable has a header with a Micro-USB and a Micro-HDMI connector on it. The general wisdom seems to be to buy a series of adapters and cables and then sort of whittle them down until they fit. I dutifully bought the necessary bits, but when the Lapdock arrived it seemed much easier to just take apart that cable header instead.

What I discovered is that inside that header are simply the two cables, held in place with some soft plastic. After removing the plastic and cutting back the cable, I had two loose cables that I can use freely.

Lapdock Cable Crop

With the addition of two adapters to bring the two micro connectors up to full size, which the Raspberry Pi needs, I was off and running.

Lapdock with Adapters

With that complete, the Lapdock becomes pretty useful. You can use it not just for the Raspberry Pi, but as a keyboard and display unit for any computer, provided you have a couple adapters.

The next hurdle is getting the Lapdock to turn on and off when you want it to. It’s pretty picky about how it works. The first thing to know is that once the screen turns on, you have to allow it time before it will turn on again. For instance, if you power it up and the screen turns on and then back off, it won’t turn on again until you’ve waited a couple of minutes. Just be patient.

I have a little system that works for me to reliably power it on but it’s a bit more complex than would be ideal.

Firstly, I power on the Pi via external power. That is to say, I use a USB port or wall adapter to power on the Pi. Once the Pi is on and I can see disk activity, I then plug in the HDMI cable for the lapdock and the USB cable. Every time I power it up this way, it works.

If I power on the Pi via the USB port on the Lapdock, the screen never shows anything and it just turns back off.

The good news is, once the Lapdock is showing the Pi on its screen, you can disconnect the Micro-USB power cable. The Pi will continue to get power via the USB cable from the lapdock, even though it’s not plugged into the ‘power’ USB jack.


Step 0: Unplug everything from the Lapdock, including the power adapter and wait for at least a minute.

Step 1: Power on the Pi using a micro-USB cable into another computer or a wall adapter and count to 5.

Step 2: Plug in the HDMI cable from the lapdock and the USB cable. You don’t need to rush but you should plug in the USB cable within about 10 seconds of the HDMI cable, or the screen will turn off and you have to wait again.

Step 3: Confirm that the Raspberry Pi is displaying on the screen of the lapdock and then disconnect the external power from the Pi.

Lapdock - No External Power

For me, this works every time without issue but there is a little more to it.

As you may know, the Raspberry Pi checks for an HDMI connection when it turns on. If one isn’t detected, it will disable the HDMI port. Since we are powering on the Pi without the HDMI connection, we have to force the Pi to use that connection even though nothing is detected. To that end, a file called ‘config.txt’ needs to be edited.

When you are setting up your Pi from your Mac or PC, you have to copy an image to an SD or MicroSD card. As part of that process, a couple of partitions are made on the card and files are copied into them. Do some Googling if you’re curious; others explain it much better than I can.

One of those created partitions is compatible with Windows and Mac machines and will appear as writable when you insert the memory card into your computer. It’s usually called boot, BOOT, NO NAME, etc. If you see a config.txt file in there, you can add these lines to it, or you can simply create the file yourself. Be careful though: it has to be called exactly config.txt, and both Windows and Mac computers will try to add other extensions, which they will likely hide from you. I would double check the file name from the command prompt or terminal. (Or from the Pi, assuming you have a different monitor you can hook up to it; the file is located in /boot.)

#This line forces the Pi to enable the HDMI connection even if nothing is detected.

#These lines tell the Pi about the display on the Lapdock so that the display looks as good as it can.

#These last two lines are probably optional but I added them when testing.
# Should enable sound via the HDMI connection
#Boosts the signal on the HDMI cable
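My exact values aren’t important, but a config.txt along these lines is typical for a Lapdock setup. Treat the numbers as example values rather than my settings; in particular, the hdmi_mode below assumes the Lapdock 100’s 1366x768 panel, so double check it against your own display:

```
# /boot/config.txt — example values, tune for your display
hdmi_force_hotplug=1   # enable HDMI even if no display is detected at boot
hdmi_group=2           # use DMT (computer monitor) timings
hdmi_mode=81           # 1366x768 @ 60Hz — assumed for the Lapdock 100 panel
hdmi_drive=2           # HDMI mode (with sound) rather than DVI mode
config_hdmi_boost=4    # strengthen the HDMI signal
```

The first line is the critical one for the Lapdock; the rest just make the picture and sound behave.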

For me, the config.txt file gets erased from time to time. When I do a big OS update or a firmware update on the Pi, the file will have all of my settings removed, which causes my Lapdock to not work properly. Without those lines, the Pi won’t enable the HDMI port on boot and the Lapdock just shows a blank screen for a few seconds and turns off.

If that happens, just pop the memory card back into your computer and make sure those lines are present. Put the card back into the Pi and get back to happy computing.

Happy Computing!


Installing X-Plane 10 on Fedora 20

July 24th, 2014

I’m a bit of an aviation buff and have been known to spend a fair few hours playing flight sims. My favorite sim these days is X-Plane, and thankfully it runs on Windows, Mac and Linux. My current host for X-Plane is Fedora 20 and since there is a bit of extra effort to make it run properly, I thought I’d outline my process here.

First and foremost – This post assumes you have installed the appropriate graphics drivers and are not running whatever drivers were just included ‘in the box’, as it were. My system has an NVidia GeForce GT 640 and I use the akmod-nvidia drivers from RPM Fusion, which work great. A quick Google search should turn up an appropriate process for your computer.

Secondly, I am running the 64bit version of Fedora 20 which brings with it some headaches since the installer for X-Plane is 32bit, though they offer game executables in both 32bit and 64bit. Without several 32bit packages installed on the 64bit system, X-Plane is unable to run and complains about graphics issues. By default, Fedora will install the software and drivers for your graphics card in whatever your default architecture is, which for me is 64bit. Since the installer is 32bit, I need to add several 32bit graphics packages so that the 32bit installer can talk to my 64bit hardware.

Confused yet? Don’t worry about it. Follow along and you’ll be all set.

The X-Plane 10 release DVDs I have were the very first batch after the game’s release and there is a known problem with the installer. You can grab an updated copy from the X-Plane website or use their free demo.

The first thing to do with the installer is open up a terminal and run it from there, but before it will run you’ll need to install some 32bit libraries so the system can execute the program.

yum install mesa-libGL.i686 glibc-2.18.i686 mesa-dri-drivers.i686 xorg-x11-drv-nvidia-libs.i686 sqlite.i686 libXinerama-1.1.3-2.fc20.i686 openal-soft.i686 -y

These are, for me, all of the packages I need to install. The system will grab other packages to satisfy their dependencies, which is fine. Once they’re installed, it still won’t work and complains that it can’t find ‘’. For whatever reason, installing ‘mesa-libGL.i686’ installs a file called ‘’ and X-Plane wants that extra ‘U’ in the file name, so a simple symlink will take care of it.

ln -s /usr/lib/ /usr/lib/

It’s a simple fix that takes care of the issue, though not an easy one to figure out since it’s only a one character difference in the name. Those sorts of issues tend to make me a little cross-eyed before I solve them.

There is one further similar issue that turns up when you try to run the installed application. The X-Plane program has slightly different needs than the installer. For the 64bit application to run you’ll need to create this symlink as well:

ln -s /usr/lib64/ /usr/lib64/

With that done you should be able to just run the executable. If you still have that command prompt open, which is how I run it, you can check whether you’ve satisfied all of the installer’s dependencies with the ldd command. My executable is called ‘X-Plane 10 Installer Linux’ so I run:

ldd "./X-Plane 10 Installer Linux"

in the terminal from the folder containing the installer (the quotes are needed because the file name contains spaces), which gives me a list of needed files and their locations. If you see any lines that say ‘not found’ you can use ‘whatprovides’ to find them:

yum whatprovides */

for instance, and yum will output the package name you need to install to satisfy the requirement. Keep an eye out for i686 versus x86_64 packages. On a 64bit system you have to specify .i686 at the end of the package name so that it will install the non-default 32bit version, as we did in our first ‘yum install’ command.
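As a concrete sketch of that check, here is the same ldd-plus-grep pattern run against /bin/sh, which stands in for the installer binary (any ELF executable works the same way); lines ending in ‘not found’ are the ones to chase down with whatprovides:

```shell
# List the binary's shared-library dependencies and show only the
# unresolved ones. /bin/sh is a stand-in here; substitute your installer
# binary, quoted, since its name contains spaces.
ldd /bin/sh | grep -i "not found" || echo "no missing libraries"
```

On a healthy system the grep matches nothing and the fallback message prints instead.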

With all of that done, just run the command to launch the program and you’ll be all set. For me:

"./X-Plane 10 Installer Linux"

and the ‘X-System’ installer comes up as expected. Make sure you have your DVD in the drive, or mount the ISO from a network share like I do, and you’re off and running.

Once installed, you can choose to run either X-Plane-i386 or X-Plane-x86_64. Both will work, though the x86_64 version can take advantage of more than 4GB of RAM, a limitation of the i386 build.

Happy Flying!

Camera Equipment Change… Again.

December 4th, 2013

It seems like every year or so I rethink my photography equipment and make some changes. I’ve gone through many variations of my day-to-day photography kit but I seem to keep reducing the amount of automation present in the system.

My plan for this coming year is to spend as much time with a manual focus, mechanical camera system as I can. I am keeping my Nikon F80s, Nikon F5 and Nikon D7000 but I’ve sold off my spare Nikon N80s, and Nikon FE bodies. When the situation requires digital or highly automated film shooting I will have those bodies available, but my day-to-day system will be based on the Nikon FM.

The Nikon FM is a fully mechanical camera that Nikon built and sold between 1977 and 1982. The camera does take a battery but it powers only the metering system and does not affect the shutter operation. If you have no batteries, the camera still works normally, you’ll just need to meter your scene in some other way. In use, the camera looks and functions very similarly to the Nikon FE, but the FE uses the batteries both for metering and for shutter operation; when your batteries start to get low, the camera stops working properly.

I’ve been using a Nikon FM with a 28mm F/2.8 lens as my day-to-day camera for at least a year and I’ve spent some time over the past few weeks researching and expanding my compatible lenses. I am keeping my Nikon 28mm F/2.8 lens, which has become my favorite focal length, but I’ve also added some ‘new’ glass as well.

In my research I discovered that Nikon made a series of lenses, the E Series, that were sold as a more budget friendly line. In many cases Nikon took the same glass that was in their higher end lenses and packaged it in less robust bodies. In an ironic twist, these lower end lenses are built as well as, if not better than, the new stuff Nikon is selling today. Over time, to reduce weight and cost, lenses have become mostly plastic, but these are almost totally made of metal. They feel quite solid in the hand and are very smooth to use.

The E Series glass is still seen as ‘lower quality’ today than its counterparts and sells for very low prices. I’ve purchased a 50mm F/1.8 lens, a 135mm F/2.8 lens and a 70-210 F/4 lens for just barely over $100, combined. Together with my 28mm F/2.8 lens and my TC-200 2x tele-converter I should have a fairly flexible kit. I am looking around for something 24mm or wider, as well as a macro lens, and a 300mm F/4.5 lens but they are low priorities. The kit I have assembled shouldn’t have any trouble getting me through this year.

In addition, I’ve also purchased a second Nikon FM body. I usually shoot Kodak Portra 400 @ ISO3200 in the Nikon FM so with the addition of a second body I’ll be able to keep available a slower film for shooting outdoors. The Portra 400 Pushed to 3200 looks great and gives me the speed I need to shoot indoors and in very low light but with the FM’s fastest shutter speed being 1/1000 of a second, slower speed films are a must outdoors.

The next step for this year will be to learn more about lens filters and how they can, and should, be used. Obviously, with the slower shutter speed of these cameras, I’ll be looking into ND filters to reduce the light hitting the film but I also want to look more into the various other correction filters that can be used for many different purposes like color balancing for different lighting situations.

I may do some short post about the various lenses I’ve picked up if I find them interesting in and of themselves. We’ll see…


Building an Inexpensive iSCSI SAN on Centos 6

August 22nd, 2013

I was recently given an HP DL360 G4 server with two 36GB SCSI disks. At first blush the system is of little use in today’s tech landscape due to its low storage capacity and limited processor feature set. My unit has two Intel Xeon processors at 3.0Ghz without Intel’s VT technology and 4GB of RAM. It would make a good single use server for many functions and would grow in usefulness as its storage capacity increased. SCSI hardware has fallen out of general use in recent years and has been superseded by SAS, or Serial Attached SCSI, in much the same way that SATA has replaced PATA in desktop machines. We’ve gotten to a point where moving one stream of data really fast is more efficient than moving several streams together at lower speed.

With this in mind, I started doing a little research on Ebay for larger capacity SCSI drives that I could swap into this machine. I was able to find mountains of SCSI disks for pretty reasonable prices but again and again the HP Storage Works 30, better known as the HP MSA30, kept popping up. There are loads of them on Ebay these days fully loaded with 36GB, 73GB, or 146GB SCSI disks. I did a little research and discovered that the MSA30 is a 14 disk SCSI enclosure with no real onboard intelligence; It’s just a box with 14 hot-swap SCSI disks and a SCSI port or two on the back. The unit supports the final SCSI protocol that saw any real success in the market, SCSI Ultra-320, meaning it can move data at 320MB/s. The MSA30 had the option of being configured with a single SCSI bus for all 14 disks or two buses, each handling 7 drives. Depending on your needs you would pick the configuration that would suit you best. It is possible to fully saturate a SCSI bus, so if you were concerned about the bus being a bottleneck, you could improve your theoretical throughput by splitting the disks across two buses and having 640MB/s of throughput available. My intention is to build a SAN that operates over a Gigabit network, though, which means the theoretical maximum speed for me is 120MB/s. Bus saturation is not a concern.
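The arithmetic behind those numbers is simple enough to sanity-check in a shell. These are raw line rates; real-world throughput, like the 120MB/s Gigabit figure above, comes in a bit lower once protocol overhead is paid:

```shell
# Convert the bus and link speeds discussed above into comparable MB/s.
echo "Ultra-320, single bus: 320 MB/s"
echo "Ultra-320, dual bus:   $(( 2 * 320 )) MB/s"
echo "Gigabit Ethernet:      $(( 1000 / 8 )) MB/s raw line rate"
```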

With the MSA30 in mind I began to do some research on the supported SCSI RAID cards available. The DL360 G4 server has two full height PCI-X slots, one is full length and the other is half length. If you’ve never seen a full length card before, they are more than 12″ long and not the size you’re likely to see in a desktop computer. HP was selling the Smart Array 6400 series cards with the MSA30 units. The cards with external ports, the 6402 and 6404, are the ones I was interested in. The 6402 has two external SCSI VHDCI ports and the 6404 has four; both are full length cards, so the DL360 G4 can only house one of these cards.

With my research complete I took to Ebay and started buying up parts. When all was said and done, two MSA30s were purchased plus a Smart Array 6402. I also planned to use an Adaptec 29320 card I had lying around. It’s a PCI-X card but it’s not full length and will comfortably fit into the server alongside the 6402. After a quick stop over at for 3′ SCSI VHDCI cables to connect the MSA30s to the SCSI cards, it was time to begin the build.

I began by setting up the DL360 with Centos 6, my current favorite linux distribution. I did a minimal installation and added in the necessary bits to install and set up the HP tools I needed for management. I wanted to be able to use the web based RAID configuration tool as well as the other web based system management tools they make available. It wasn’t too hard to do, but it’s likely worth its own post. From there I started to provision all of this new storage.

One of the MSA30s I purchased was loaded with 14 146GB 10k disks and a single SCSI bus. Since this was to be the lower performance unit I hooked it up to the Adaptec card and disabled all of its onboard RAID tools. The card supports RAID0, RAID1 and RAID10 but it wasn’t able to utilize all 14 disks in a single RAID10, which was my goal, so I used the linux based mdadm software RAID tools instead. I created one large RAID10 which totals just shy of 1TB of storage. Performance testing tells me that I am able to write to the enclosure at, on average, 130MB/s and I can read at 222MB/s. It’s likely that I could do some performance tuning to increase that a bit but since it’s already faster than the 1Gb link the iSCSI SAN will run over, there is no reason to.
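A minimal sketch of that mdadm step, assuming the 14 disks show up as /dev/sdb through /dev/sdo (device names will vary with your controller); the array-building commands are left commented so the capacity check below can run harmlessly:

```shell
# Build one 14-disk RAID10 from the MSA30's drives (device names assumed):
#   mdadm --create /dev/md0 --level=10 --raid-devices=14 /dev/sd[b-o]
#   mkfs.ext4 /dev/md0
# RAID10 halves raw capacity, so 14 x 146GB disks yield:
echo "$(( 14 * 146 / 2 ))GB usable"   # just shy of 1TB, as noted above
```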

The second of the MSA30s came with 14 146GB 15k disks and two SCSI buses. This was to be the high performance unit, so I wired up each 7 disk group to a port on the Smart Array 6402 and used the HP RAID config tools to build the RAID10 array. The card I bought has a working battery backup so I enabled write caching, which in concert with the faster disks and dual SCSI buses provided higher performance than the first array. When tested identically to the first array, the second MSA30 gives an average write speed of 169MB/s. Its read speed, though, is twice as fast as the other array and comes in at just about 460MB/s. Had I been using a single SCSI bus that could have been no faster than 320MB/s, so there is certainly a performance improvement for the dual-bus enclosure. Depending on the work load you plan to put on the MSA30, getting the dual bus enclosure could certainly make sense.

From there it was a simple matter to configure the NICs on the server and then set up the iSCSI server. I have one NIC on my main network and the second is on a separate network dedicated to just iSCSI traffic. I used this guide ( to set up the iSCSI server, though I didn’t use logical volumes.
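On CentOS 6 the stock iSCSI target is scsi-target-utils (tgtd), configured through /etc/tgt/targets.conf; a minimal export of the two arrays might look like the sketch below. The IQN, device paths, and subnet here are assumptions for illustration, not taken from my actual config:

```
# /etc/tgt/targets.conf (sketch; names, paths, and addresses are assumptions)
<target>
    backing-store /dev/md0          # the mdadm RAID10 on the Adaptec card
    backing-store /dev/cciss/c1d0   # the Smart Array volume (path varies)
    initiator-address     # only the dedicated iSCSI subnet
</target>
```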

The HP DL360 G4 server I used was given to me so it cost me nothing, but they sell on Ebay these days for $40-$60. I bought two MSA30 units and paid $140 apiece for them with free shipping. I paid $18 for the Smart Array 6402 and just shy of $70 for the three SCSI VHDCI cables I needed to wire everything up. In total, the whole system cost me less than $400, which is incredibly cheap for a high performance iSCSI SAN, even if I do say so myself!
