Setting Up Nagios – Working with Cisco’s CIMC

September 7th, 2017

This was far more challenging than it needed to be. Cisco makes some SDKs available for use with Nagios, but I was totally unable to make the system see them. They are just Python packages, so I didn't expect much trouble, but I was at a loss. I thought, initially, that the issue was that the installer dumped the files into /usr/lib/python/site-packages/ instead of the 64-bit path /usr/lib64/python/site-packages/, but no amount of copying and permission changes made the system able to see the dependencies. For those interested, here is a link to the Cisco Nagios tools (from 09/2017): Nagios Plug-Ins for Cisco UCS. I am not using that package, or its dependencies.

Instead, I found a script written in Go that worked a treat and required no other dependencies to work. You can find it here on Github: check_cisco_ucs

Having never used anything written in Go before, I had a few things to learn, but it turned out to be very simple. I downloaded the 'check_cisco_ucs.go' script to my Nagios server. I tried to run it, and installed a few bits of software, before I understood that I needed to compile the script into an executable. That turned out to be exceedingly simple.

First, I installed the ‘golang-bin’ package, which is in EPEL.

yum install golang-bin

I then navigated to the folder containing the check_cisco_ucs.go script and ran the following command:

go build check_cisco_ucs.go

With that, I was able to run the script. The Github page has a series of example commands which I started firing at some of the Cisco C220 M4 servers I needed to monitor. The script was last updated in 2014 and while much of it worked, not all of the items the author used as examples still work with the updated BIOS on my systems.

This one worked fine, once I updated the IP address and user/password as required. It takes 10-15 seconds to run and then reports back accurate information about the RAID setup and status of the system. The only caveat: since we run CIMC over https, I needed to add the '-M 1.2' flag so that the script would use TLS 1.2, which CIMC was running.
./check_cisco_ucs -H -M 1.2 -t class -q storageVirtualDrive -a "raidLevel vdStatus health" -e Optimal -u admin -p

With this, I knew the solution would work and I started to go through the steps to make the required checks appear in Nagios. I moved the check_cisco_ucs file into the folder Nagios expects to find the command files, which for me on Centos 7 is /usr/lib64/nagios/plugins.

I then created a new file, called check_cisco_ucs.cfg, and put it into the folder I have configured my Nagios install to look for commands. Within that file I’ve listed my commands, which are a bit messy:

define command {
command_name check_cisco_ucs_storage
command_line $USER1$/check_cisco_ucs -H $HOSTADDRESS$ -M 1.2 -t class -q storageVirtualDrive -a "raidLevel vdStatus health" -e Optimal -u admin -p yourpass
}

I then added the relevant check to the server object I wanted to check:

define service{
use generic-service
host_name NameInNagios
service_description RAID Controller Status
check_command check_cisco_ucs_storage
}

Do a quick check of your Nagios config to make sure it’s sane and working:

nagios -v /path/to/nagios.cfg

I was able to do the same for the next example check on the GitHub page, for information about local disks.

From here though, the checks I wanted didn't work as expected. Reading the status of the lights was problematic, and the power supply command needed a small tweak.

The author suggested a powersupply command as:
./check_cisco_ucs -H -t class -q equipmentPsu -a "id model operState serial" -e operable -u admin -p pls_change

But, in the 3.0 version of CIMC I’m running, the ‘operState’ option is no longer present and instead I needed to use ‘operability’. My Nagios command is this tweaked version:
$USER1$/check_cisco_ucs -H $HOSTADDRESS$ -M 1.2 -u admin -p yourpass -t class -q equipmentPsu -a "id model operability serial" -e operable

The final check I wanted was to let me know if any of the status lights were amber, instead of green. The author suggested a check to watch one LED to ensure it was green but this reported no information. I instead rewrote the command to look at all of the lights and tell me if any are amber, instead of confirming one light was green.

$USER1$/check_cisco_ucs -H $HOSTADDRESS$ -M 1.2 -u admin -p password -t class -q equipmentIndicatorLed -a "id color name operState" -z -e amber

This command looks at the lights, which gives me this output:
0,green,OVERALL_DIMM_STATUS,on (0 of 6 ok)

The key here is that -z means that if the state is NOT found, i.e. there is no amber light, the check is considered OK. If there is an amber light, the check will fail and alert me. Perfect.

The final hurdle between finding the plugin and making my checks work was a couple hours' effort with the Cisco UCS Rack-Mount Servers Cisco IMC XML API Programmer's Guide. Using that guide, and some curl commands, I was finally able to get out the data I wanted and transform it into commands the plugin could use.

First step was to authenticate to the CIMC:
curl -d "<aaaLogin inName='admin' inPassword='password'/>" https://ServerIP/nuova -k

That final ‘-k’ is required to make curl ignore the SSL cert on the system, which is just a basic self-signed.

That command outputs a cookie, which you then pass in future commands to remain authenticated. For example, to query a class:

curl -d "<configResolveClass cookie='your_cookie' inHierarchical='false' classId='storageVirtualDrive'/>" https://ServerIP/nuova -k

Be careful not to mix ' and ". The " is used to quote the whole XML document being sent via curl; the ' is used to quote the attribute values within it. Mixing them will cause the commands to fail.
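The cookie handling can be scripted. Below is a minimal sketch of pulling the outCookie value out of an aaaLogin response with sed; the sample XML is a made-up stand-in for what the CIMC actually returns, and ServerIP, admin, and password are placeholders.

```shell
# Stand-in for the XML the CIMC returns from an aaaLogin request, e.g.:
#   curl -d "<aaaLogin inName='admin' inPassword='password'/>" https://ServerIP/nuova -k
response='<aaaLogin cookie="" response="yes" outCookie="1504820400/abcd-1234" outRefreshPeriod="600" outPriv="admin"> </aaaLogin>'

# Extract the outCookie attribute so it can be reused in later requests
cookie=$(printf '%s' "$response" | sed -n 's/.*outCookie="\([^"]*\)".*/\1/p')
echo "$cookie"

# The cookie then rides along in subsequent queries, e.g.:
#   curl -d "<configResolveClass cookie='$cookie' inHierarchical='false' classId='equipmentPsu'/>" https://ServerIP/nuova -k
```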

That should be enough to get you going to find any other commands you might be interested in. I'll try to fill in some more detail once I'm finished rolling out the Nagios install.

Setting Up Nagios – Installing on Centos 7

September 6th, 2017

Long story short, I changed jobs about 6 months ago and found myself in a new position without any appreciable monitoring of the hardware for which I’m responsible. That needed to change as soon as I started to have time to put some hours into it. I did a little testing and poking around with some other monitoring tools but kept coming back to Nagios as the right choice, despite the mountain of work involved in setting it up. There is such a large community of help and plugins available for it, plus it’s a tool I’ve used at other jobs for many years.

The actual process of standing up a Nagios server is incredibly simple, but that initial server does almost nothing. Let’s go over these initial steps today and I’ll post little updates of the settings and configs I’ve rolled into Nagios over time to monitor different hardware and software platforms.

Step 1 for me was to install a minimal Centos 7 install. I’m a long time user of Centos and this is a default starting point for so many of my projects. After that initial install I usually install a few tools I like to have around and lower some of the security on the system to prevent myself getting stuck solving an issue that isn’t within the application.

For me, I run this command on basically all of my Centos 7 installs:

yum update -y && yum install epel-release -y && yum install htop wget mlocate -y && updatedb && systemctl disable firewalld && vi /etc/selinux/config && reboot

This command does the following:
1.) Updates all packages on the system
2.) Installs the EPEL repository, for additional packages I like to have
3.) Installs several packages I like to have, some of which are in the EPEL repo, which is why this comes third in the list.
4.) updatedb builds a database of files on the system for easy/fast searching. Once the mlocate package is installed you can run it, and then use the 'locate' command to find files.
5.) Disables the firewall, which I leave off for the initial config of the machine. Once the machine is working, I switch it back on and open only the ports required. This, for me, is faster than guessing back and forth when something isn't working: is it the software? The firewall? I'm never sure. I would only do this on a machine not on the public internet.
6.) Launches vi so I can edit the SELinux level on the system. I change it to permissive while I set up the server, then make the necessary changes to SELinux policy once the system has stabilized, and finally move it back to 'enforcing'. This isn't a step to take lightly and is very important.
7.) Once you save and quit vi, the system will process the final command and reboot the machine.
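As an aside, the interactive vi step in that one-liner can be replaced with a non-interactive sed edit. This is a sketch, shown against a scratch copy of the file so it can be tried safely; point it at /etc/selinux/config for real use.

```shell
# Work on a scratch copy; on a real system this would target /etc/selinux/config
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > /tmp/selinux-config

# Flip the system from enforcing to permissive
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /tmp/selinux-config

grep '^SELINUX=' /tmp/selinux-config
```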

When the system returns to service you're ready to start installing and setting up Nagios. With EPEL already set up, it's just a couple of quick commands.

Firstly, you need to install the required packages. This can be done with one command:

yum install nagios nagios-plugins-all

Once done, you need to enable and start the two required services, httpd (Apache) and nagios.

systemctl enable nagios httpd
systemctl start nagios httpd

Once done, it's best to set up a password for the default nagiosadmin user, and set up any other users you might want. To set the nagiosadmin user's password:

htpasswd /etc/nagios/passwd nagiosadmin

To create a new user:

htpasswd /etc/nagios/passwd (**User Name**)
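If you'd rather script user creation than type the password interactively, an htpasswd-compatible entry can also be generated with openssl. This is a sketch: the file path and password are placeholders, and it assumes the apr1/MD5 scheme that htpasswd uses by default.

```shell
# Generate an apr1 (htpasswd-style) hash for the user and write it to a
# scratch passwd file; use /etc/nagios/passwd on the real system
printf 'nagiosadmin:%s\n' "$(openssl passwd -apr1 yourpass)" > /tmp/nagios-passwd
cat /tmp/nagios-passwd
```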

With this complete you're all set. Nagios is online and working. You can access it on your server at:

http://(**server IP Address**)/nagios

You'll see a couple of standard checks running on 'localhost', one of which is in an error state due to how Centos configures httpd. Nothing to be concerned about; I actually delete that check. Next time, we'll talk more about making Nagios do something useful, like monitoring something beyond itself.

Libresonic on Centos 7 with SSL

June 7th, 2017

I’ve been a happy iTunes Match user since the service was introduced. I have a large music collection and I don’t always want to dedicate the required amount of iPhone and computer storage to keeping it available all the time on all my devices. iTunes Match lets Apple deal with storing the whole thing and allows me to just download what I want on a given device or stream music I own to any device I’d like. It’s been $25/year well spent.

That being said, with streaming music plans taking over the market, I can’t imagine Apple’s going to want to offer this service forever, plus I prefer to self-host as much of my digital needs as possible. Finding a way to replicate these features though has proven a bit tough. There were a few projects that would allow me to stream my music but I don’t want to use up my whole data plan streaming songs over and over again or have my music drop out every time I’m in a dead zone. Thankfully, I found Libresonic, which in concert with an app on my iPhone checks a lot of these boxes.

I installed Libresonic several times before I was happy with the install, and I wanted to write up a quick post so I'll be able to do it again when the server I'm hosting it on gets replaced.

My final install has Libresonic running on the same web server I use for a few other tasks and with the music library linked via a samba share, instead of copied locally. Additionally, I am proxying the tomcat web server on port 8080 through my apache config, which is setup for SSL.

Step one is to install OpenJDK and Tomcat. This is a pretty simple command:
yum install java-1.8.0-openjdk tomcat
Once done, it's basically good to go. You can enable tomcat, which is a Java-based web server, to start on boot.
systemctl enable tomcat

Now, you need to download the 'war' file for Libresonic. Head over to the Libresonic download page and download the 'WAR Version', not the Standalone Version.

Move the file to where Tomcat is expecting it, which is /var/lib/tomcat/webapps/ on my Centos 7 server. Be sure to rename the file to just ‘libresonic.war’ instead of ‘libresonic-vXX.war’ as the name of the file, before the extension, is the location it will be served from. If you leave the file as ‘libresonic-vXX’ it will be available at http://your.server:8080/libresonic-vXX.

Once that is in place, create a folder and make it owned by the tomcat user for libresonic to use for config data.

mkdir /var/libresonic
chown -Rfv tomcat:tomcat /var/libresonic

Now, we can finally start tomcat, which will bring up Libresonic.

systemctl start tomcat

Now, keep an eye on the logs your system is generating to know when it’s online. Even though tomcat will start, Libresonic will take a minute or more to fully be ready.

journalctl -f

Keep watching until you see something in the log that says how long the app took to start.

Once done, head over to localhost, or your server’s IP, followed by the port and sub-location. Of course, make sure firewall rules are set properly, etc. For instance: http://localhost:8080/libresonic.

With that working, I turned to make it work via my apache config. I am running Apache with a cert from Let’s Encrypt. I edited my ssl.conf file and added the following two lines under the applicable ‘VirtualHost’:

ProxyPass /libresonic http://localhost:8080/libresonic
ProxyPassReverse /libresonic http://localhost:8080/libresonic

This bit of config takes the internal tomcat server on the localhost and makes it available via Apache, over SSL. It allows the server to respond when I try to access Libresonic. For instance, if I try to access https://my.awesome.server/libresonic, Apache is able to proxy that request through to the tomcat server, which is listening at http://localhost:8080/libresonic.
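Put together, the relevant piece of my ssl.conf looks roughly like the sketch below. The hostname is a placeholder, the certificate directives are omitted, and it assumes tomcat is listening on localhost port 8080 as described above.

```
<VirtualHost *:443>
    ServerName my.awesome.server
    SSLEngine on
    # Let's Encrypt certificate directives omitted here

    # Hand /libresonic requests to the local tomcat instance
    ProxyPass /libresonic http://localhost:8080/libresonic
    ProxyPassReverse /libresonic http://localhost:8080/libresonic
</VirtualHost>
```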

With that done, I was able to get to the Libresonic page, login, and setup the system.

To give it access to my music collection, I mounted a SMB share from my main storage server and then symlink’d the full path down to a simpler location, to get rid of any issues with folder names with spaces, permissions, etc.

My only recommendation in this area would be to make sure you mount the SMB share, via fstab, with permissions for the tomcat user. In my case I added a few extra options to the mount line in the server's fstab file, uid=tomcat,gid=tomcat, to make sure the mounted share can be read by the user that will be accessing it. Then, when I pointed the Libresonic server at the relevant folder, it was able to read the data there.
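For illustration, the fstab line ends up looking something like this. The server name, share, mount point, and credentials file are all placeholders; only the uid/gid options come from my actual setup.

```
# /etc/fstab -- hypothetical SMB mount readable by the tomcat user
//storage.server/music  /mnt/music  cifs  credentials=/root/.smbcreds,uid=tomcat,gid=tomcat  0 0
```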

With that all done, I was all set. I can access my server, via SSL, and use apps on my smart phone to talk to the server as well as download music for offline listening. I am, right now, using SubStreamer on my iPhone and on Android.

When the Photo Bug Bites

May 24th, 2017

I remember the first time I saw a moment that I knew I had to photograph. I wonder if other photographers are the same.

I was in Maine, spending a weekend with a friend at his lake side cabin. I woke up on a cool late autumn morning and the whole lake was covered in rising mist. The water was much warmer than the cool dry air and up came the moisture.

My only camera was a 3 megapixel ‘Kodak Easy Share’ I had gotten for Christmas a few years before. This was turn of the century, inexpensive, pocket camera technology at its most average. It was a fine snapshot camera, and I was glad to have received it as a gift, but I never expected that misty lake would impact my life to such a degree.

After that trip, I did a bit of research and bought myself a little Canon point and shoot camera. It was 8 mega pixels and had a bit of zoom ability built-in. I fully expected to love that camera and never need to upgrade again. In the end though, I found it very frustrating.

I didn’t know the massive amount of information I didn’t know at that point and was very disappointed. I never took any pictures with that camera that I really liked. The zoom and the image quality were fine, in good lighting, but the lack of image stabilization was a real issue; I didn’t even know it existed as a technology at that time. I found the controls, and the general slowness of the camera, just as painful as the Kodak I had been using. I carried it around with me, but I never really enjoyed it. My interest waned.

Then, Christmas Eve came along. One of the pastors of my church had bought a Nikon D80 but couldn't use it during the service since he needed to be up front. He said I could use it if I wanted to. I was on sound and projection duty up in the balcony so I was in a pretty good spot to take some pictures of the service and the traditional candle lighting.

I’d never even seen an SLR camera up close so he gave me a few pointers and showed me how to zoom, the focus points, etc., and then went to start the service.

I put my eye to the eye piece, zoomed in a bit and pressed the shutter button. Much to my surprise, the camera actually fired when I told it to, not a few seconds later, and the image that popped up on the back of the camera was exactly what I was expecting.

It was immediately clear to me that my foray into point and shoot cameras was over and that a DSLR would need to be purchased. Finally, I found a camera that got out of my way and let me capture the images I could see in my mind, instead of the blurry, delayed mess from my point and shoots.

And so began what is now 12 years and counting of shooting Nikon. First, the D40. Then the D300. What a beast! I loved that thing! I sold it though and replaced it with a D7000, in preparation for lending it to my dad for a trip to Alaska. Sending him on a once in a lifetime trip with a camera lacking any auto-modes didn’t seem wise. I lent the D40 to a friend and it was unfortunately stolen, but I kept the D7000 for several years. I never bonded with it like I had my D40 and D300, and finally, due in no small part to the persistence of my wife, pulled the trigger on the D500.

What a delight that machine is. I am happy to invest the time in configuring the camera, and learning how to quickly reconfigure it as my subjects change, but once I’ve set it up to suit the situation I just want it to get out of the way. The D500 does that for me, without the full weight and expense of full frame bodies and lenses.

Automated Backups of OpnSense

January 31st, 2017

I use rsnapshot, on Centos 7, to manage the vast majority of my backups across a myriad of linux servers both within my network and across the globe. I’ve never blogged about the entirety of that process, but I wanted to post a quick note about how I use rsnapshot to also backup the configuration of my router.

Until recently, I had been using this process to backup my pfSense routers. With my switch to OPNsense though, I was pleased to see the process is the same.

Basically, we just need to make a copy of a single folder on the machine, '/conf', which is located right off of the root on both pfSense and OPNsense.

Step one is to make sure you have a user setup on the router that can connect over SSH. For OpnSense you need to log into the web portal and navigate to System -> Settings -> Administration. Check the box to ‘Enable Secure Shell’ and ‘Permit Password login’. (We’ll disable password logins in a moment)

Next, go to System -> Access -> Users and create a new user that you will use for SSH access. Once created, you then need to also give the user the 'effective permission' of 'System: Shell Account Access', which is just below the 'Group Membership' section of the user's account details.

Now, to allow rsnapshot to login as the user without prompting for a password you need to copy over to the router the public SSH key from the machine and user that runs rsnapshot. I prefer to use the ‘ssh-copy-id’ tool to do that. On your rsnapshot server, logged in as the user who runs rsnapshot, run ‘ssh-copy-id username@router_ip’. Enter your password and it will copy the required keys. The username to use is the one you created in the previous step on the router. The IP address should be the IP address of the router. I don’t allow external SSH access to my router, so I use the internal IP.

You can verify that the copy-id tool worked by trying to ssh into the router ssh username@router_ip. It should take a moment, and then drop you at a terminal prompt on the router. You can now take a look around and see what the system has to offer. You might do cd /conf and then ls to see the config file and history.

You can now disable ‘Permit Password Login’, since the user required for running the backup is setup to login without interactively entering a password.

With that part done you now need only update your rsnapshot.conf file. Don't forget that the file uses tabs to delineate options, not spaces, commas, etc.

I added a line to my rsnapshot.conf that says the following:

backup username@router_ip:/conf/ WDR-Router

The 'WDR-Router' piece at the end is the folder the resulting files are kept in within the rsnapshot folder structure on my server.
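For clarity, here is what that piece of my rsnapshot.conf looks like in context. The retain lines are typical defaults and just illustrative, the whitespace between fields must be tabs, and I've shown a trailing slash on the destination, which the rsnapshot examples use.

```
# Fields in rsnapshot.conf are tab-separated
retain	daily	7
retain	weekly	4

# Router config backup; files land under WDR-Router/ in each snapshot
backup	username@router_ip:/conf/	WDR-Router/
```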

With that done, your router config files will be included in your backup schedule. Should something happen to your router, like hardware failure or a bad config gets pushed out, you can quickly recover.

Worst case scenario, you can just setup a new, clean OpnSense install and load your most recent config.xml file through the web portal via System -> Configuration -> Backups.

Good luck!

Cleaning up old Logs on Centos 7

January 16th, 2017

As often happens with computers of all types, log files build up over time. Generally speaking, the operating system will rotate these logs, which means it breaks them up into chunks, but it isn’t usually set to remove the old chunks. Over time, those log files can start to add up.

I am giving some thought to changing the provider of the VPS that hosts this web page, since performance is not consistent and the VPS is not very reliable, and I was curious to know how much disk space I needed for the system to run well. After doing some light clean up on the system I did a check to see where on the disk I was using the most space.

Using the following command, I did a search to find out which folders contained the most data:

du -a / | sort -n -r | head -n 10

This command reported that the folders using the most storage on my system were the folders containing log files, '/var/log/'. I cleaned a few of them up by hand, since I like to hold onto certain items for longer than others, but one folder in particular, /var/log/journal/, contained several GBs of data. This is the folder that holds the log files systemd creates and contains basically every log entry for the system since its initial boot. I considered cleaning it up manually, but I wanted to know if I could keep this data in check going forward, which would help me run the webserver on a smaller disk. What I discovered was a simple command to clear out logs older than a certain date, or larger than a certain size.

journalctl --vacuum-time=14d
journalctl --vacuum-size=1G

The above commands look through all the log files and purge any that are either older than the time set, or larger than the limit set.

I ran the command above and told it to keep the most recent 14 days worth of logs, which deleted 5.9GB of old log data on my system. I then added the command to crontab so it would maintain the logs at that level:

2 1 * * * root journalctl --vacuum-time=14d

This command runs the vacuum command each day at 2 minutes past 1AM.
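Worth noting: systemd can also enforce a cap like this on its own, without cron, via journald.conf. A sketch, with the 1G figure just an example:

```
# /etc/systemd/journald.conf -- cap the journal so systemd-journald trims
# old entries itself; restart systemd-journald after editing
[Journal]
SystemMaxUse=1G
```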

My server now uses 2GB of storage, instead of 8GB. Not a bad result…

Interesting Failure Modes – SD Card Arching

August 22nd, 2016

I've been using Raspberry Pi computers for several years. A few months ago, the Raspberry Pi B I had hooked up to my TV, running OpenElec, stopped working. The Pi was locked up, and then wouldn't reboot when I pulled the power cord.

It's an original B model, with just two USB ports, so I didn't think much of it. I prepped and put in place a replacement Pi, also running OpenElec, and threw the Pi into a drawer for further inspection... later. I got 4 years of faithful service from a $35 computer, so I wasn't complaining.

Some weeks later, I had an idea that I wanted to use a Raspberry Pi for so I grabbed the questionable unit and got to work.

My standard way to prep a Pi is to use raspbian-ua-netinst. It doesn't require a huge download, and it doesn't automatically install a GUI and a bunch of software I don't want.

I knew that this Pi, and this same SD card, were from the failed system so I started by running some tests on the SD card before attempting the install. I didn’t have any trouble with the SD card, so I proceeded. Much to my surprise, the Pi booted the new SD card and completed its installation without any issue.

Bit of a head scratcher but it was working.

Since then the Raspberry Pi has been handling duties as a simple Stratum 2 NTP server (More on that in a later post…). It’s not seeing heavy load but it’s been working fine.

This morning, while logged into the Pi and monitoring NTP, the system just stopped being able to see the SD card. Since I was already logged into the system via ssh, my session stayed active, but I couldn't run any programs, and my simple ntp watching command (watch -n 1 ntpq -p) started to return a blank screen. I quit the watch app and tried to 'ls' and 'cd', but the system just said command not found. I wasn't even able to shut down normally, since the shutdown app wasn't available either.

I pulled the SD from the Pi and plugged it into another SD card reader on a different system. Everything looked Ok and I was able to run fsck without any errors. I plugged the SD card back into the Pi and it didn’t even try to boot.

I went through a couple of other little tasks, extra fsck runs, etc., and then, while holding the SD card, I noticed that the interface side was slightly arched. When placed on a flat surface, contacts down, the card was raised in the middle. Obviously, this minor elevation change had stopped all of the contacts from making a good connection to the Pi.

I inserted the SD card back into the Pi, and while holding the center of the card down flat against the contacts, I applied power and was greeted with a normally booting system.


I pulled the card out and flexed it, just a little, a few times until it sat more flat and the Pi is now working again normally.

The Raspberry Pi is working great, but the SD card has started to lose its shape after 4 years of near constant use. I've been toying with the idea of using an old 1U server case to rack mount a couple of Raspberry Pis and then using the SD card just for booting, with an external USB disk hosting the OS. That idea is starting to make even more sense to me now. That being said, I suspect the SD card arching issue would be less likely to occur on all of the newer Pi models, which use MicroSD cards instead. They aren't made from multiple pieces of plastic clipped together, like most SD cards.

Just an interesting failure mode that I thought ought to be shared…


How To: A Clean install of Windows 7 in 2016

April 25th, 2016

Maybe I'm alone here, but I still have occasion to install Windows 7 from time to time. For a while, that was no big deal: install the OS, update, update, update and you're good to go. Lately though, the process has become far more cumbersome. Not only does the process take ages to complete, but when you're done it's always nagging you to update to Windows 10. What follows is my process for getting this done without too much hassle.

Firstly, I’ve not created anything new here. I’ve simply found a series of tools and bit of information online that helped me along the way so I’m compiling it here for my future benefit.

Step 1 – Get the OS installed. After that, if your copy of Windows 7 doesn't include Service Pack 1, download it from Microsoft. Install. Reboot. Service packs include a large number of updates, so it's much faster to get the service pack installed first thing after the install than it is to use Windows Update to find it and download it.

Step 2 – There is a known issue with Windows 7 that causes the search for updates to take hours and sometimes days. You have the option of opening ‘Windows Update’ and telling it to check for updates manually but I find it far easier to tell the system to check for updates on its own and then leave the machine alone for 24 hours. It seems a bizarre step to take, and I often do waste a day waiting for the manual checking for updates to complete before seeing sense, rebooting and leaving it alone. In my experience, doing the manual check in a VM takes far longer than on physical hardware, regardless of the specs of the two machines.

Step 3 – Log in the next day and open Windows Update. It should have hundreds of updates ready for you to install. The next wrinkle I seem to run into is that clicking ‘Install Updates’ seems to sit there and think forever without actually doing anything. It just says ‘Preparing to install updates’. If this happens, use the Windows 7 shutdown option that includes installing updates. For whatever reason, this seems to work.

Step 4 – After the shutdown, power the computer back on and do a check for updates. At this point a manual check usually completes in a reasonable amount of time and you're able to complete the updates as normal. It's also around this time that the update to Windows 10 nagging begins. Push forward though and install all of the available updates. The only thing I do at this point is 'hide' all of the language pack updates. Click on the top one and then shift click the bottom one to select them all. Right click any one of them and select 'Hide Update'. They will grey out and not appear at your next check.

Step 5 – Time to clean up. Check again for updates and make sure you got them all. If not, install whatever is available. When you’re sure all updates have been installed, head over to this link (Sorry for the language…), and save the .bat file somewhere on your machine.

Be careful here. Running scripts from the web is pretty dangerous. Take a few minutes to read over it and make sure you understand what it’s doing. The script should basically be a series of wmic and wusa commands to remove updates, identified by their KB numbers. It will print out information about each update it removes as it goes along.

The batch file is a script someone else wrote that will uninstall all of the Windows 10 nagging updates, as well as the new telemetry ‘features’ they added to Windows 7, similar to the tracking features added in Windows 10. I didn’t write script and I don’t have any control over it…

Right click on the script and run it as an admin. Read what it says and then let it do its thing. It will pause when it's complete. When it's done, reboot.

Alternatively, if you don’t mind the new tracking features in Windows 7 and only want to get rid of the Windows 10 update nag, you can remove KB3035583, hide it from Windows Update and skip all of the script related procedures. Updates can be manually removed from the ‘Programs and Features’ section of Control Panel. Click ‘View Installed Updates’ on the left hand side, find the update that says ‘KB3035583’ and uninstall it. After it’s removed, reboot, check for updates and then hide the KB3035583 update in Windows Update.
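For reference, that manual removal can also be done from an elevated command prompt. This is a sketch using the standard wusa syntax, not the author's script:

```
:: Uninstall the Windows 10 upgrade nag update (KB3035583) silently,
:: without an automatic reboot
wusa /uninstall /kb:3035583 /quiet /norestart
```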

Step 6 – Check for updates! Since we just uninstalled some Windows updates, there should be ‘new’ updates listed for installation. Obviously, we don’t want to install them so we’ll need to tell Windows to hide them. After the update search is complete, compare the updates listed for installation against the updates removed in the script.

Right click the script and choose 'Edit'. This will open it in Notepad and you can compare the KB numbers. You'll want to 'hide' any available updates that are listed in the script. Right click the update and select 'Hide Update'. This will grey it out and it won't show up when you search for updates next time.

Ideally, once you’re done hiding the updates correlating to the script, none should be left for installation. If some updates remain, get them installed and reboot.

Step 7 – Run the script one more time. This time, confirm that after each item it says ‘No Instance(s) Available’. This means it wasn’t able to remove the update, because it wasn’t installed. When each update shows that, you’re good to go. You won’t be nagged about Windows 10, you won’t have any of the new tracking ‘features’. You’ll (finally) be ready to set your machine up to do whatever task you needed.

Enjoy Windows 7!


Install Centos 6 on a Non-PAE Machine

March 12th, 2015

I have a Thecus N5200 that was modified to have a VGA port. Though the machine will run a variety of current Linux distributions, I wanted it to run Centos 6. Unfortunately, the N5200 doesn’t support PAE, which Centos 6 requires.

The first major problem is that a non-PAE machine won’t even boot the Centos installer CD/DVD. There are several ways to get around that but they are all quite complex and time-consuming, and as time goes on they work less and less because the old software they need is harder and harder to find. I instead chose to simply install Stella. It’s a desktop-focused Linux distribution that is based on Centos 6 and the 32bit version includes a non-PAE kernel. If you’re looking to install a Centos 6 desktop, install Stella and you’re all set. It’s great. The developer did a great job keeping things compatible with Centos 6 while also adding in things like video codecs and the like.

For my little project with the N5200 though I wanted to go a little further. I wanted to install Stella, to get around the PAE requirement, and then convert it back to a standard Centos 6 installation… or close to it.

I booted the Stella DVD and installed it to disk. Once I booted into it from my hard drive I set about searching for and removing packages not part of the normal Centos install.

The first thing I did was search for installed packages from repos outside of a normal Centos installation. I used this command initially

yum list installed

to get a look at everything installed and where it was installed from. From there I started to use grep as well to filter things down.

yum list installed | grep dex

Many of the packages were from a repo that included the word ‘dextop’ so I filtered the list to show me those by piping the output to ‘grep dex’.
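The same filtering can be pushed one step further to build the removal list automatically. This is a sketch using made-up sample lines in place of real ‘yum list installed’ output; the awk pattern keeps only rows whose repo column mentions dextop and strips the arch suffix so you’re left with bare package names:

```shell
# Simulated 'yum list installed' output (these lines are hypothetical, for illustration)
printf 'a52dec.i686 0.7.4-8.el6 @dextop\nffmpeg-libs.i686 0.10.16-1.el6 @dextop\nbash.i686 4.1.2-29.el6 @base\n' > /tmp/installed.txt

# Keep only dextop rows and strip the ".arch" suffix from the package name
awk '$3 ~ /dextop/ { sub(/\.[^.]*$/, "", $1); print $1 }' /tmp/installed.txt
# → a52dec
# → ffmpeg-libs
```

The resulting names can then be pasted onto a single ‘yum remove’ line, as below.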

With a list of packages in hand, I used the ‘yum remove’ command to remove them:

yum remove a52dec amrwb b43-firmware bluecurve-metacity-theme faac ffmpeg-libs gnome-web-photo goocanvas faad2-libs gstreamer-plugins-bad gtkimageview lame-libs libdca libdvdcss libfribidi libmad libmimic libmms libmpeg2 libsidplay mjpegtools-libs nux-dextop-release opencore-amr opencore-amr perl-Gnome2-Canvas perl-Gnome2-VFS perl-Gnome2-Wnck perl-Gtk2-Unique perl-Proc-Simple perl-X11-Protocol-Other twolame-libs unrar flash-plugin xulrunner19 xvidcore

(If you copy that command, be sure to remove any line breaks your browser may have put in.)

From there I used the yum list command again to search for ‘stella’. It turned up a package called ‘stella-release’. The _____-release package is used to tell the system what the version and name of the OS is. If you remove it, as I did, it will take with it a lot of packages.

yum remove stella-release

Now.. don’t reboot!

The next step is to install a new _____-release package. In this case, I needed to install the Centos 6.6 release package, which I was able to find in the Centos mirrors. Just find the package and use yum to install it. The file location/name may change slightly over time but I used this command:

yum install

If yum complains and won’t run, download the package and install it using the rpm command:

wget && rpm -i centos-release-6-6.el6.centos.12.2.i686.rpm

I then also installed the epel package, which I almost always use with Centos. (Again, the link could be slightly changed for you)

yum install

From here, you need to start rebuilding the OS. Some of the work we’ve done will have broken the system so before a reboot we need to replace many of those packages with official Centos packages or, in most cases, the same packages we just removed but without the stella-release package requirement.

I wanted to make sure I kept the Gnome 2 GUI so I used the yum groupinstall command to rebuild most of those packages.

yum groupinstall "Desktop" "Desktop Platform" "X Window System" "Fonts" "Internet Browser"

From here on the system is basically a clean Centos 6 install but with Stella branding and their non-PAE kernel. I want to keep the kernel but, if possible, I’d like to remove the branding. This one can be a bit messy. If you try to pull the branding you basically uninstall the whole OS.

You have to remove the branding information without checking for dependencies, as yum would, and then replace those Stella packages with the regular Centos ones.

Firstly, remove the stella branding:

rpm -e --nodeps stella-logos && rpm -e --nodeps stella-bookmarks && rpm -e --nodeps stella-indexhtml

Next, add back in the Centos versions of those packages:

yum install redhat-logos centos-indexhtml redhat-bookmarks

Yum will complain that the rpmdb has been edited and there are missing packages, etc.. But, we know that already and running that command will fix the problems.

Following that, it’s probably a good idea to run a ‘yum check’ command to be sure everything is resolved. It will take a look at your yumdb and let you know if it sees any problems with unresolved dependencies, etc.. It took forever to run on my N5200 but it all came back clear.

If, like me, you’ve gone this route to get around the PAE issue in Centos you’re going to want to update your yum.conf file to tell the system not to install any kernel updates. Any kernel you get from Centos will not work on your machine.

vi /etc/yum.conf

Just find a spot under the [main] header and add the line:

exclude=kernel*
This will just exclude kernel updates. It’s not ideal for security but, at least for me, it’s worth it to have a working Centos 6 system.
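If you’d rather make that change non-interactively, sed can drop the exclude=kernel* line in right after the [main] header. This sketch works on a stand-in copy of the file; point it at the real /etc/yum.conf once you’re happy with it:

```shell
# Stand-in for /etc/yum.conf so the real file isn't touched during the dry run
printf '[main]\ngpgcheck=1\n' > /tmp/yum.conf

# Insert the kernel exclusion immediately after the [main] header line
sed -i '/^\[main\]/a exclude=kernel*' /tmp/yum.conf

grep '^exclude' /tmp/yum.conf    # → exclude=kernel*
```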

Now, we’re very close. Openssh-server has been removed and I wanted it back so I reinstalled it and set it to run on boot.

yum install openssh-server && chkconfig --add sshd

Next, if you want to boot into the GUI, instead of to the terminal, the inittab file needs to be updated.

vi /etc/inittab

The final line says ‘id:3:initdefault:’ but to boot to a GUI it needs to be ‘id:5:initdefault:’ instead. Update that 3 to a 5 and save the file.
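That edit is also a one-liner with sed, demonstrated here on a throwaway copy of the file; apply it to the real /etc/inittab with care:

```shell
# Throwaway copy containing just the runlevel line
printf 'id:3:initdefault:\n' > /tmp/inittab

# Flip the default runlevel from 3 (text console) to 5 (graphical)
sed -i 's/^id:3:initdefault:$/id:5:initdefault:/' /tmp/inittab

cat /tmp/inittab    # → id:5:initdefault:
```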

The very final detail would be to look around for unneeded but still installed packages. You can do a quick search for unneeded libraries:

package-cleanup --leaves

And then uninstall those rpms, which serve no purpose.
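The listing and the removal can be chained with xargs. The sketch below pipes simulated ‘package-cleanup --leaves’ output (the package names are made up) into the removal command; the echo is left in so the final command is printed rather than run, and you should review the real list before dropping it:

```shell
# Simulated 'package-cleanup --leaves' output (hypothetical package names)
# 'echo' makes this a dry run; remove it to actually uninstall
printf 'libfoo-1.0-1.i686\nlibbar-2.1-3.i686\n' | xargs echo yum remove -y
# → yum remove -y libfoo-1.0-1.i686 libbar-2.1-3.i686
```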

I’m sure there are other remnants of Stella lurking around the system but these steps cleaned up my install and brought it back to a pretty good state. I’m happy with it and it’s been very stable.

Sure, I could have just kept the normal Stella install but where’s the fun in that?

Thecus N5200 as a Server

March 12th, 2015

Back in 2007, I bought a Thecus N5200 to use as bulk storage on my network. I’ve spent many years using and fighting with the unit but it wasn’t until my storage needs outgrew the 5 SATA disks it could hold that I considered sending it off to be recycled.

It’s a pretty basic unit. From Thecus it arrived with 5 hot swap SATA disk trays and a simple web-based management interface. The interface wasn’t great at telling you exactly what the state of your disks was, so I lost my data with this unit more than once. Over time though, Thecus added a method to install add-ons and little by little people started to write add-ons for the unit and actually increased its feature set. You could get an add-on to allow you to access the unit over SSH and to set up rsync file copies to a second location. Over time, it became very useful and it’s remained in almost constant use.

Recently though, I needed 6 disks to hold my data and it was time to retire the N5200. Just on a whim I did a quick google search to see if anyone had any success ‘hacking’ the unit and doing something neat with it. Several people had soldered a VGA connector to the motherboard. The back of the unit is basically a metal plate with covered up ports for things like PS2 keyboard and mouse ports and a VGA port. As it happens, the motherboard inside the unit has all of the components to use these devices, except the actual connectors. I harvested a VGA connector from an old dead graphics card and soldered it to the motherboard of the N5200. (Well, truth be told, I had a friend do the soldering. He was working on a pile of other soldering that day and it was just easier than getting my stuff set up…)

Much to my amazement, when I connected a display to the unit I was immediately greeted with the BIOS boot up process. From here though, things got a bit more complicated.

The graphics chip on the system is, as you would expect, pretty basic. Beyond that though, the unit supports a maximum of 1GB of memory and has an Intel Celeron M processor running at 600Mhz. This is not a speed machine. It’s well equipped to handle the tasks it was designed for but will struggle to be much use as a desktop.

That being said, with the right software and the right tasks, the unit has life in it. Full screen video and flash are out, but it’s a fine email/web surfing machine and the right display manager works quite well. I’ve had very good success with both Gnome 2 and Mate.

Installing an OS is a bit tricky. The unit will boot from a USB device (things like a thumb drive, or a CD-ROM) and it will also boot from its internal IDE interface. It uses a mini-IDE port and it’s a bit hard to get to. It’s workable but getting the right length of cable and finding a place to store the disk is tough.

Instead, I use a combination of thumb drive and a SATA disk. When installing linux I format and use a thumb drive for the /boot partition and the boot loader but I use one of the 5 SATA trays for the root partition, swap, and anything else you might want. It works well for me but can be a little complex to set up.

I’ve had good success installing current versions of Fedora (to date, up to version 21 works fine) as well as Debian. The only limitation this machine has in terms of what Linux distro you install is the processor’s lack of PAE support. PAE, or Physical Address Extension, is a way for 32bit processors to address RAM beyond 4GB (It’s more complicated than that…). When this machine was built though, there was no reason for Intel to add PAE support to a low power processor designed for mobile devices where speed was not top priority.

My goal for this machine is to install Centos 6, which I think is ideally suited to its hardware and use case. Sadly, Centos 6 requires PAE support and a workaround needed to be devised. I’ll post about that in the coming days/weeks. Here’s that post: Install Centos 6 on a Non-PAE Machine

So, how do I use this slow and limited machine? With 4 free hot swap SATA trays, I use it for quite a lot of disk testing and verification. Linux has great tools for both checking a disk’s health and, when it’s time to be recycled, wiping the disks securely. Running 4 disks at once is well within what the unit can handle.
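For the secure-wipe side, ‘shred’ from coreutils is one such tool. A sketch on a scratch file rather than a real disk; for an actual wipe you’d point it at the /dev/sdX device instead, after triple-checking the letter:

```shell
# Make a small scratch file standing in for a disk
dd if=/dev/urandom of=/tmp/fakedisk bs=1024 count=16 2>/dev/null

# One random overwrite pass, then a final pass of zeros to hide the shredding
shred -n 1 -z /tmp/fakedisk

# Verify: after the zero pass, nothing but null bytes should remain
tr -d '\0' < /tmp/fakedisk | wc -c    # → 0
```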

Further, the machine has two working NICs and I often use it as a go-between when I’m testing and setting up routers or switches. I can easily connect the device to the N5200 and do whatever tasks I need to without having to take something offline, etc.. It’s also very useful when you need to put something online while still keeping it separated from your main network. I can sit in between the problem device and the internet watching the data go by and keeping an eye on things.

Add to that mix light email work, or RSS feeds and it’s a pretty reliable unit to handle tasks you need to offload from another machine for whatever reason.

The Thecus N5200 is small, quiet and more useful the older it gets… or so it would seem.

