Libresonic on Centos 7 with SSL

June 7th, 2017

I’ve been a happy iTunes Match user since the service was introduced. I have a large music collection and I don’t always want to dedicate the required amount of iPhone and computer storage to keeping it available all the time on all my devices. iTunes Match lets Apple deal with storing the whole thing and allows me to just download what I want on a given device or stream music I own to any device I’d like. It’s been $25/year well spent.

That being said, with streaming music plans taking over the market, I can’t imagine Apple’s going to want to offer this service forever, plus I prefer to self-host as much of my digital needs as possible. Finding a way to replicate these features though has proven a bit tough. There were a few projects that would allow me to stream my music but I don’t want to use up my whole data plan streaming songs over and over again or have my music drop out every time I’m in a dead zone. Thankfully, I found Libresonic, which in concert with an app on my iPhone checks a lot of these boxes.

I installed Libresonic several times before I was happy with the install, and I wanted to write up a quick post so I'll be able to do it again when the server I'm hosting it on gets replaced.

My final install has Libresonic running on the same web server I use for a few other tasks, with the music library linked via a Samba share instead of copied locally. Additionally, I am proxying the Tomcat web server on port 8080 through my Apache config, which is set up for SSL.

Step one is to install OpenJDK and Tomcat. This is a pretty simple command:
yum install java-1.8.0-openjdk tomcat
Once done, it's basically good to go. You can enable Tomcat, which is a Java-based web server, to start on boot.
systemctl enable tomcat

Now, you need to download the ‘war’ file for Libresonic. Head over to this link: http://libresonic.github.io/download/, and download the ‘WAR Version’, not the Standalone Version.

Move the file to where Tomcat is expecting it, which is /var/lib/tomcat/webapps/ on my Centos 7 server. Be sure to rename the file to just ‘libresonic.war’ instead of ‘libresonic-vXX.war’ as the name of the file, before the extension, is the location it will be served from. If you leave the file as ‘libresonic-vXX’ it will be available at http://your.server:8080/libresonic-vXX.
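For example, something along these lines (the version number here is just a placeholder for whichever release you downloaded):

mv libresonic-v6.1.war /var/lib/tomcat/webapps/libresonic.war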

Once that is in place, create a folder and make it owned by the tomcat user for libresonic to use for config data.

mkdir /var/libresonic
chown -Rfv tomcat:tomcat /var/libresonic

Now, we can finally start tomcat, which will bring up Libresonic.

systemctl start tomcat

Now, keep an eye on the logs your system is generating to know when it's online. Even though Tomcat will start quickly, Libresonic will take a minute or more to be fully ready.

journalctl -f

Keep watching until you see something in the log that says how long the app took to start.

Once done, head over to localhost, or your server’s IP, followed by the port and sub-location. Of course, make sure firewall rules are set properly, etc. For instance: http://localhost:8080/libresonic.

With that working, I turned to making it available via my Apache config. I am running Apache with a cert from Let's Encrypt. I edited my ssl.conf file and added the following two lines under the applicable 'VirtualHost':


ProxyPass /libresonic http://127.0.0.1:8080/libresonic
ProxyPassReverse /libresonic http://127.0.0.1:8080/libresonic

This bit of code takes the internal tomcat server on the localhost and makes it available via Apache, over SSL. It allows the server to respond when I try to access Libresonic. For instance, if I try to access https://my.awesome.server/libresonic Apache is able to re-point that request through the tomcat server, which is hosting at http://127.0.0.1:8080/libresonic.
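For context, here's roughly how those lines sit inside the VirtualHost (the server name is a placeholder, and this assumes mod_proxy and mod_proxy_http are loaded, which the stock CentOS 7 httpd config does by default):

<VirtualHost *:443>
    ServerName my.awesome.server

    # ... existing SSL directives (SSLEngine, certificate paths, etc.) ...

    ProxyPass /libresonic http://127.0.0.1:8080/libresonic
    ProxyPassReverse /libresonic http://127.0.0.1:8080/libresonic
</VirtualHost>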

With that done, I was able to get to the Libresonic page, login, and setup the system.

To give it access to my music collection, I mounted a SMB share from my main storage server and then symlink’d the full path down to a simpler location, to get rid of any issues with folder names with spaces, permissions, etc.

My only recommendation in this area would be to make sure you mount the SMB share, via fstab, with permissions for the tomcat user. In my case I added a few extra options to the mount line in the server's fstab file, uid=tomcat,gid=tomcat, to make sure the mounted share can be read by the user that will be accessing it. Then, when I pointed the Libresonic server at the relevant folder, it was able to read the data there.
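As a rough sketch, the fstab entry and the symlink look something like this (the server name, share, and paths are placeholders, and it assumes the SMB username and password live in a separate credentials file):

# /etc/fstab entry (placeholder names and paths)
//storage.server/music  /mnt/music  cifs  credentials=/root/.smbcredentials,uid=tomcat,gid=tomcat  0 0

# symlink the mounted path down to something simpler for Libresonic to use
ln -s "/mnt/music/My Music Library" /music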

With that all done, I was all set. I can access my server, via SSL, and use apps on my smart phone to talk to the server as well as download music for offline listening. I am, right now, using SubStreamer on my iPhone and on Android.

When the Photo Bug Bites

May 24th, 2017

I remember the first time I saw a moment that I knew I had to photograph. I wonder if other photographers are the same.

I was in Maine, spending a weekend with a friend at his lake side cabin. I woke up on a cool late autumn morning and the whole lake was covered in rising mist. The water was much warmer than the cool dry air and up came the moisture.

My only camera was a 3 megapixel ‘Kodak Easy Share’ I had gotten for Christmas a few years before. This was turn of the century, inexpensive, pocket camera technology at its most average. It was a fine snapshot camera, and I was glad to have received it as a gift, but I never expected that misty lake would impact my life to such a degree.

After that trip, I did a bit of research and bought myself a little Canon point and shoot camera. It was 8 megapixels and had a bit of zoom ability built-in. I fully expected to love that camera and never need to upgrade again. In the end though, I found it very frustrating.

I didn't realize how much I didn't know at that point, and I was very disappointed. I never took any pictures with that camera that I really liked. The zoom and the image quality were fine, in good lighting, but the lack of image stabilization was a real issue; I didn't even know it existed as a technology at that time. I found the controls, and the general slowness of the camera, just as painful as the Kodak I had been using. I carried it around with me, but I never really enjoyed it. My interest waned.

Then, Christmas Eve came along. One of the pastors of my church had bought a Nikon D80 but couldn't use it during the service since he needed to be up front. He said I could use it if I wanted to. I was on sound and projection duty up in the balcony so I was in a pretty good spot to take some pictures of the service and the traditional candle lighting.

I’d never even seen an SLR camera up close so he gave me a few pointers and showed me how to zoom, the focus points, etc., and then went to start the service.

I put my eye to the eyepiece, zoomed in a bit and pressed the shutter button. Much to my surprise, the camera actually fired when I told it to, not a few seconds later, and the image that popped up on the back of the camera was exactly what I was expecting.

It was immediately clear to me that my foray into point and shoot cameras was over and that a DSLR would need to be purchased. Finally, I found a camera that got out of my way and let me capture the images I could see in my mind, instead of the blurry, delayed mess from my point and shoots.

And so began what is now 12 years and counting of shooting Nikon. First, the D40. Then the D300. What a beast! I loved that thing! I sold it though and replaced it with a D7000, in preparation for lending it to my dad for a trip to Alaska. Sending him on a once in a lifetime trip with a camera lacking any auto-modes didn’t seem wise. I lent the D40 to a friend and it was unfortunately stolen, but I kept the D7000 for several years. I never bonded with it like I had my D40 and D300, and finally, due in no small part to the persistence of my wife, pulled the trigger on the D500.

What a delight that machine is. I am happy to invest the time in configuring the camera, and learning how to quickly reconfigure it as my subjects change, but once I’ve set it up to suit the situation I just want it to get out of the way. The D500 does that for me, without the full weight and expense of full frame bodies and lenses.

Automated Backups of OPNsense

January 31st, 2017

I use rsnapshot, on Centos 7, to manage the vast majority of my backups across a myriad of Linux servers both within my network and across the globe. I've never blogged about the entirety of that process, but I wanted to post a quick note about how I use rsnapshot to also back up the configuration of my router.

Until recently, I had been using this process to backup my pfSense routers. With my switch to OPNsense though, I was pleased to see the process is the same.

Basically, we just need to make a copy of a single folder on the machine, '/conf', which is located right off of the root on both pfSense and OPNsense.

Step one is to make sure you have a user set up on the router that can connect over SSH. For OPNsense you need to log into the web portal and navigate to System -> Settings -> Administration. Check the boxes to 'Enable Secure Shell' and 'Permit Password login'. (We'll disable password logins in a moment.)

Next, go to System -> Access -> Users and create a new user that you will use for SSH access. Once created, you then need to also give the user the 'effective permission' of 'System: Shell Account Access', which is just below the 'Group Membership' section of the user's account details.

Now, to allow rsnapshot to login as the user without prompting for a password you need to copy over to the router the public SSH key from the machine and user that runs rsnapshot. I prefer to use the ‘ssh-copy-id’ tool to do that. On your rsnapshot server, logged in as the user who runs rsnapshot, run ‘ssh-copy-id username@router_ip’. Enter your password and it will copy the required keys. The username to use is the one you created in the previous step on the router. The IP address should be the IP address of the router. I don’t allow external SSH access to my router, so I use the internal IP.
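In practice that's just a couple of commands, run as the rsnapshot user (the username and IP are placeholders; the ssh-keygen step is only needed if that user doesn't already have a key pair):

# generate a key pair if the rsnapshot user doesn't have one yet
ssh-keygen -t rsa
# copy the public key to the backup user created on the router
ssh-copy-id backups@192.168.1.1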

You can verify that the copy-id tool worked by trying to SSH into the router: ssh username@router_ip. It should take a moment, and then drop you at a terminal prompt on the router. You can now take a look around and see what the system has to offer. You might do cd /conf and then ls to see the config file and history.

You can now disable 'Permit Password Login', since the user required for running the backup is set up to log in without interactively entering a password.

With that part done you now need only update your rsnapshot.conf file. Don't forget that the file uses tabs to delineate options, not spaces, commas, etc.

I added a line to my rsnapshot.conf that says the following:

backup username@router_ip:/conf/ WDR-Router

The ‘WDR-Router’ piece at the end is the folder the resulting files are kept in with the rsnapshot folder structure on my server.

With that done, your router config files will be included in your backup schedule. Should something happen to your router, like hardware failure or a bad config gets pushed out, you can quickly recover.

Worst case scenario, you can just set up a new, clean OPNsense install and load your most recent config.xml file through the web portal via System -> Configuration -> Backups.

Good luck!

Cleaning up old Logs on Centos 7

January 16th, 2017

As often happens with computers of all types, log files build up over time. Generally speaking, the operating system will rotate these logs, which means it breaks them up into chunks, but it isn’t usually set to remove the old chunks. Over time, those log files can start to add up.

I am giving some thought to changing the provider of the VPS that hosts this web page, since performance is not consistent and the VPS is not very reliable, and I was curious to know how much disk space I needed for the system to run well. After doing some light clean up on the system I did a check to see where on the disk I was using the most space.

Using the following command, I did a search to find out which folders contained the most data:

du -a / | sort -n -r | head -n 10

This command reported that the folders using the most storage on my system were the folders containing log files, '/var/log/'. I cleaned a few of them up by hand, since I like to hold onto certain items for longer than others, but one folder in particular, /var/log/journal/, contained several GBs of data. This is the folder that holds the log files systemd creates and contains basically every log entry for the system since its initial boot. I considered cleaning it up manually, but I wanted to know if I could keep this data in check going forward, which would help me run the webserver on a smaller disk. What I discovered was a simple command to clear out logs older than a certain date, or larger than a certain size.

journalctl --vacuum-time=14d
or
journalctl --vacuum-size=1G

The above commands look through all the journal files and purge anything that is either older than the time set or larger than the size limit set.
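As a side note, if you want to see how much space the journal is consuming before and after, journalctl can report it directly:

journalctl --disk-usage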

I ran the time-based command above and told it to keep the most recent 14 days' worth of logs, which deleted 5.9GB of old log data on my system. I then added the command to crontab so it would maintain the logs at that level:

2 1 * * * root journalctl --vacuum-time=14d

This command runs the vacuum command each day at 2 minutes past 1AM.

My server now uses 2GB of storage, instead of 8GB. Not a bad result…

Interesting Failure Modes – SD Card Arching

August 22nd, 2016

I've been using Raspberry Pi computers for several years. A few months ago, the Raspberry Pi B I had hooked up to my TV running OpenElec stopped working. The Pi was locked up and then wouldn't reboot when I pulled the power cord.

It's an original B model, with just two USB ports, so I didn't think much of it. I prepped and put in place a replacement Pi, also running OpenElec, and threw the old Pi into a drawer for further inspection... later. I got 4 years of faithful service from a $35 computer so I wasn't complaining.

Some weeks later, I had an idea that I wanted to use a Raspberry Pi for so I grabbed the questionable unit and got to work.

My standard way to prep a Pi is to use raspbian-ua-netinst (https://github.com/debian-pi/raspbian-ua-netinst/). It doesn’t require a huge download, and it doesn’t automatically install a GUI and a bunch of software I don’t want.

I knew that this Pi, and this same SD card, were from the failed system so I started by running some tests on the SD card before attempting the install. I didn’t have any trouble with the SD card, so I proceeded. Much to my surprise, the Pi booted the new SD card and completed its installation without any issue.

Bit of a head scratcher but it was working.

Since then the Raspberry Pi has been handling duties as a simple Stratum 2 NTP server (More on that in a later post…). It’s not seeing heavy load but it’s been working fine.

This morning, while logged into the Pi and monitoring NTP, the system just stopped being able to see the SD card. Since I was already logged into the system via SSH, my session stayed active, but I couldn't run any programs and my simple NTP watching command (watch -n 1 ntpq -p) started to return a blank screen. I quit the watch app and tried to 'ls' and 'cd', but the system just said command not found. I wasn't even able to shut down normally, since the shutdown app wasn't available either.

I pulled the SD from the Pi and plugged it into another SD card reader on a different system. Everything looked Ok and I was able to run fsck without any errors. I plugged the SD card back into the Pi and it didn’t even try to boot.

I went through a couple other little tasks, extra fsck runs, etc., and then, while holding the SD card, I noticed that the interface side was slightly arched. When placed on a flat surface, contacts down, the card was raised in the middle. Obviously, this minor elevation change had stopped all of the contacts from making a good connection to the Pi.

I inserted the SD card back into the Pi, and while holding the center of the card down flat against the contacts, I applied power and was greeted with a normally booting system.

Interesting.

I pulled the card out and flexed it, just a little, a few times until it sat more flat and the Pi is now working again normally.

The Raspberry Pi is working great but the SD card has started to lose its shape after 4 years of near constant use. I've been toying with the idea of using an old 1U server case to rack mount a couple of Raspberry Pis and then using the SD card just for booting, using an external USB disk to host the OS. That idea is starting to make even more sense to me now. That being said, I suspect the SD card arching issue would be less likely to occur on all of the newer Pi models, which use MicroSD cards instead. They aren't made from multiple pieces of plastic clipped together, like most SD cards.

Just an interesting failure mode that I thought ought to be shared…

Topslakr

How To: A Clean install of Windows 7 in 2016

April 25th, 2016

Maybe I'm alone here, but I still have occasion to install Windows 7 from time to time. For a while, that was no big deal; install the OS, update, update, update and you're good to go. Lately though, the process has become far more cumbersome. Not only does the process take ages to complete, but when you're done it's always nagging you to update to Windows 10. What follows is my process for getting this done without too much hassle.

Firstly, I've not created anything new here. I've simply found a series of tools and bits of information online that helped me along the way, so I'm compiling it here for my future benefit.

Step 1 – Get the OS installed. After that, if your copy of Windows 7 doesn't include Service Pack 1, go here and download it: https://www.microsoft.com/en-us/download/details.aspx?id=5842. Install. Reboot. Service packs include a large number of updates so it's much faster to get the service pack installed first thing after the install than it is to use Windows Update to find it and download it.

Step 2 – There is a known issue with Windows 7 that causes the search for updates to take hours and sometimes days. You have the option of opening ‘Windows Update’ and telling it to check for updates manually but I find it far easier to tell the system to check for updates on its own and then leave the machine alone for 24 hours. It seems a bizarre step to take, and I often do waste a day waiting for the manual checking for updates to complete before seeing sense, rebooting and leaving it alone. In my experience, doing the manual check in a VM takes far longer than on physical hardware, regardless of the specs of the two machines.

Step 3 – Log in the next day and open Windows Update. It should have hundreds of updates ready for you to install. The next wrinkle I seem to run into is that clicking ‘Install Updates’ seems to sit there and think forever without actually doing anything. It just says ‘Preparing to install updates’. If this happens, use the Windows 7 shutdown option that includes installing updates. For whatever reason, this seems to work.

Step 4 – After the shutdown, power the computer back on and do a check for updates. At this point a manual check usually completes in a reasonable amount of time and you're able to complete the updates as normal. It's also around this time that the Windows 10 upgrade nagging begins. Push forward though and install all of the available updates. The only thing I do at this point is 'hide' all of the language pack updates. Click on the top one and then shift click the bottom one to select them all. Right click any one of them and select 'Hide Update'. They will grey out and not appear at your next check.

Step 5 – Time to clean up. Check again for updates and make sure you got them all. If not, install whatever is available. When you’re sure all updates have been installed, head over to this link (Sorry for the language…), and save the .bat file somewhere on your machine.

Be careful here. Running scripts from the web is pretty dangerous. Take a few minutes to read over it and make sure you understand what it’s doing. The script should basically be a series of wmic and wusa commands to remove updates, identified by their KB numbers. It will print out information about each update it removes as it goes along.

The batch file is a script someone else wrote that will uninstall all of the Windows 10 nagging updates, as well as the new telemetry 'features' they added to Windows 7, similar to the tracking features added in Windows 10. I didn't write the script and I don't have any control over it…

Right click on the script and run it as an admin. Read what it says and then let it do its thing. It will pause when it's complete. When it's done, reboot.

Alternatively, if you don’t mind the new tracking features in Windows 7 and only want to get rid of the Windows 10 update nag, you can remove KB3035583, hide it from Windows Update and skip all of the script related procedures. Updates can be manually removed from the ‘Programs and Features’ section of Control Panel. Click ‘View Installed Updates’ on the left hand side, find the update that says ‘KB3035583’ and uninstall it. After it’s removed, reboot, check for updates and then hide the KB3035583 update in Windows Update.
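If you prefer the command line, the same update can be removed from an elevated Command Prompt with wusa (a sketch of the equivalent step; reboot afterwards as usual):

wusa /uninstall /kb:3035583 /quiet /norestart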

Step 6 – Check for updates! Since we just uninstalled some Windows updates, there should be ‘new’ updates listed for installation. Obviously, we don’t want to install them so we’ll need to tell Windows to hide them. After the update search is complete, compare the updates listed for installation against the updates removed in the script.

Right click the script and choose 'edit'. This will open it in Notepad and you can compare the KB numbers. You'll want to 'hide' any available updates that are listed in the script. Right click the update and select 'Hide Update'. This will grey it out and it won't show up when you search for updates next time.

Ideally, once you’re done hiding the updates correlating to the script, none should be left for installation. If some updates remain, get them installed and reboot.

Step 7 – Run the script one more time. This time, confirm that after each item it says ‘No Instance(s) Available’. This means it wasn’t able to remove the update, because it wasn’t installed. When each update shows that, you’re good to go. You won’t be nagged about Windows 10, you won’t have any of the new tracking ‘features’. You’ll (finally) be ready to set your machine up to do whatever task you needed.

Enjoy Windows 7!

Topslakr

Install Centos 6 on a Non-PAE Machine

March 12th, 2015

I have a Thecus N5200 that was modified to have a VGA port. Though the machine will run a variety of current Linux distributions, I wanted it to run Centos 6. Unfortunately, the N5200 doesn’t support PAE, which Centos 6 requires.

The first major problem is that a non-PAE machine won't even boot the Centos installer CD/DVD. There are several ways to get around that, but they are all quite complex and time-consuming, and as time goes on they work less and less; the old software needed is harder and harder to find. I instead chose to simply install Stella. It's a desktop-focused Linux distribution that is based on Centos 6, and the 32bit version includes a non-PAE kernel. If you're looking to install a Centos 6 desktop, install Stella and you're all set. It's great. The developer did a great job keeping things compatible with Centos 6 while also adding in things like video codecs and the like.

For my little project with the N5200 though I wanted to go a little further. I wanted to install Stella, to get around the PAE requirement, and then convert it back to a standard Centos 6 installation… or close to it.

I booted the Stella DVD and installed it to disk. Once I booted into it from my hard drive I set about searching for and removing packages not part of the normal Centos install.

The first thing I did was search for installed packages from repos outside of a normal Centos installation. I used this command initially

yum list installed

to get a look at everything installed and where it was installed from. From there I started to use grep as well to filter things down.

yum list installed|grep dex

Many of the packages were from a repo that included the word ‘dextop’ so I filtered the list to show me those by piping the output to ‘grep dex’.

With a list of packages, I used the 'yum remove' command to remove them:

yum remove a52dec amrwb b43-firmware bluecurve-metacity-theme faac ffmpeg-libs gnome-web-photo goocanvas faad2-libs gstreamer-plugins-bad gtkimageview lame-libs libdca libdvdcss libfribidi libmad libmimic libmms libmpeg2 libsidplay mjpegtools-libs nux-dextop-release opencore-amr opencore-amr perl-Gnome2-Canvas perl-Gnome2-VFS perl-Gnome2-Wnck perl-Gtk2-Unique perl-Proc-Simple perl-X11-Protocol-Other twolame-libs unrar flash-plugin xulrunner19 xvidcore

(If you copy that command, be sure to remove any line breaks your browser may have put in.)

From there I used the yum list command again to search for ‘stella’. It turned up a package called ‘stella-release’. The _____-release package is used to tell the system what the version and name of the OS is. If you remove it, as I did, it will take with it a lot of packages.

yum remove stella-release

Now.. don’t reboot!

The next step is to install a new _____-release package. In this case, I needed to install the Centos 6.6 release package, which I was able to find in the Centos mirrors. Just find the package and use yum to install it. The file location/name may change slightly over time but I used this command:

yum install http://mirror.centos.org/centos-6/6.6/os/i386/Packages/centos-release-6-6.el6.centos.12.2.i686.rpm

If yum complains and won't run, download the package and install it using the rpm command:

wget http://mirror.centos.org/centos-6/6.6/os/i386/Packages/centos-release-6-6.el6.centos.12.2.i686.rpm && rpm -i centos-release-6-6.el6.centos.12.2.i686.rpm

I then also installed the epel package, which I almost always use with Centos. (Again, the link could be slightly changed for you)

yum install http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm

From here, you need to start rebuilding the OS. Some of the work we’ve done will have broken the system so before a reboot we need to replace many of those packages with official Centos packages or, in most cases, the same packages we just removed but without the stella-release package requirement.

I wanted to make sure I kept the Gnome 2 GUI so I used the yum groupinstall command to rebuild most of those packages.

yum groupinstall "Desktop" "Desktop Platform" "X Window System" "Fonts" "Internet Browser"

From here on the system is basically a clean Centos 6 install, but with Stella branding and their non-PAE kernel. The kernel I want to keep, but if possible I'd like to remove the branding. This one can be a bit messy: if you try to pull the branding with yum, you basically uninstall the whole OS.

You have to remove the branding information without checking for dependencies, as yum would, and then replace those Stella packages with the regular Centos ones.

Firstly, remove the stella branding:

rpm -e --nodeps stella-logos && rpm -e --nodeps stella-bookmarks && rpm -e --nodeps stella-indexhtml

Next, add back in the Centos versions of those packages:

yum install redhat-logos centos-indexhtml redhat-bookmarks

Yum will complain that the rpmdb has been edited and there are missing packages, etc. But we know that already, and running that command will fix the problems.

Following that, it’s probably a good idea to run a ‘yum check’ command to be sure everything is resolved. It will take a look at your yumdb and let you know if it sees any problems with unresolved dependencies, etc.. It took forever to run on my N5200 but it all came back clear.

If, like me, you’ve gone this route to get around the PAE issue in Centos you’re going to want to update your yum.conf file to tell the system not to install any kernel updates. Any kernel you get from Centos will not work on your machine.

vi /etc/yum.conf

Just find a spot under the [main] header and add the line:

exclude=kernel*

This will just exclude kernel updates. It’s not ideal for security but, at least for me, it’s worth it to have a working Centos 6 system.

Now, we’re very close. Openssh-server has been removed and I wanted it back so I reinstalled it and set it to run on boot.

yum install openssh-server && chkconfig --add sshd

Next, if you want to boot into the GUI, instead of to the terminal, the inittab file needs to be updated.

vi /etc/inittab

The final line says ‘id:3:initdefault:’ but to boot to a gui it needs to be ‘id:5:initdefault:’ instead. Update that 3 to a 5 and save the file.
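If you'd rather not open an editor, a quick sed one-liner makes the same change (assuming the line is still in its stock form):

sed -i 's/id:3:initdefault:/id:5:initdefault:/' /etc/inittab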

The very final detail would be to look around for unneeded but still installed packages. You can do a quick search for unneeded libraries:

package-cleanup --leaves

And then uninstall those rpms, which serve no purpose.
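Note that package-cleanup comes from the yum-utils package, so if the command isn't found, install that first:

yum install yum-utils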

I’m sure there are other remnants of Stella lurking around the system but these steps cleaned up my install and brought it back to a pretty good state. I’m happy with it and it’s been very stable.

Sure, I could have just kept the normal Stella install but where’s the fun in that?

Thecus N5200 as a Server

March 12th, 2015

Back in 2007, I bought a Thecus N5200 to use as bulk storage on my network. I've spent many years using and fighting with the unit, but it wasn't until my storage needs outgrew the 5 SATA disks it could hold that I considered sending it off to be recycled.

It's a pretty basic unit. From Thecus it arrived with 5 hot swap SATA disk trays and a simple web-based management interface. The interface wasn't great at telling you exactly what the state of your disks was, so I lost my data with this unit more than once. Over time though, Thecus added a method to install add-ons, and little by little people started to write add-ons for the unit and actually increased its feature set. You could get an add-on to allow you to access the unit over SSH and to set up rsync file copies to a second location. Over time, it became very useful and it's remained in almost constant use.

Recently though, I needed 6 disks to hold my data and it was time to retire the N5200. Just on a whim I did a quick Google search to see if anyone had any success 'hacking' the unit and doing something neat with it. Several people had soldered a VGA connector to the motherboard. The back of the unit is basically a metal plate with covered up ports for things like PS2 keyboard and mouse ports and a VGA port. As it happens, the motherboard inside the unit has all of the components to use these devices, except the actual connector. I harvested a VGA connector from an old dead graphics card and soldered it to the motherboard of the N5200. (Well, truth be told, I had a friend do the soldering. He was working on a pile of other soldering that day and it was just easier than getting my stuff set up…)

Much to my amazement, when I connected a display to the unit I was immediately greeted with the BIOS boot up process. From here though, things got a bit more complicated.

The graphics chip on the system is, as you would expect, pretty basic. Beyond that though, the unit supports a maximum of 1GB of memory and has an Intel Celeron M processor running at 600Mhz. This is not a speed machine. It's well equipped to handle the tasks it was designed for but will struggle to be much use as a desktop.

That being said, with the right software and the right tasks, the unit has life in it. Full screen video and Flash are out, but it's a fine email/web surfing machine, and with the right desktop environment it works quite well. I've had very good success with both Gnome 2 and Mate.

Installing an OS is a bit tricky. The unit will boot from a USB device (things like a thumb drive, or a CD-ROM) and it will also boot from its internal IDE interface. It uses a mini-IDE port and it’s a bit hard to get to. It’s workable but getting the right length of cable and finding a place to store the disk is tough.

Instead, I use a combination of thumb drive and a SATA disk. When installing linux I format and use a thumb drive for the /boot partition and the boot loader but I use one of the 5 SATA trays for the root partition, swap, and anything else you might want. It works well for me but can be a little complex to set up.

I’ve had good success installing current versions of Fedora (to date, up to version 21 works fine) as well as Debian. The only limitation this machine has in terms of what Linux distro you install is the processor’s lack of PAE support. PAE, or Physical Address Extension, is a way for 32bit processors to address RAM beyond 4GB (It’s more complicated than that…). When this machine was built though, there was no reason for Intel to add PAE support to a low power processor designed for mobile devices where speed was not top priority.

My goal for this machine is to install Centos 6, which I think is ideally suited to its hardware and use case. Sadly, Centos 6 requires PAE support and a workaround needed to be devised. I'll post about that in the coming days/weeks. Here's that post: Install Centos 6 on a Non-PAE Machine

So, how do I use this slow and limited machine? With 4 free hot swap SATA trays, I use it for quite a lot of disk testing and verification. Linux has great tools for both checking a disk's health, and when it's time to be recycled, wiping the disks securely. Running 4 disks at once is well within what the unit can handle.
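For reference, these are the sorts of tools I mean (a rough sketch; it assumes smartmontools is installed and that /dev/sdb is the disk under test, so double-check the device name before wiping anything):

# check a disk's SMART health data and kick off a long self-test
smartctl -a /dev/sdb
smartctl -t long /dev/sdb

# overwrite a disk with a pass of random data before it's recycled
shred -v -n 1 /dev/sdb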

Further, the machine has two working NICs and I often use it as a go-between when I'm testing and setting up routers or switches. I can easily connect the device to the N5200 and do whatever tasks I need to without having to take something offline, etc. It's also very useful when you need to put something online while still keeping it separated from your main network. I can sit in between the problem device and the internet watching the data go by and keeping an eye on things.

Add to that mix light email work, or RSS feeds and it’s a pretty reliable unit to handle tasks you need to offload from another machine for whatever reason.

The Thecus N5200 is small, quiet and more useful the older it gets… or so it would seem.

Topslakr

Portable Pi – Raspberry Pi with a Motorola Lapdock 100

December 17th, 2014

Having read countless accounts online from people who have taken various lapdocks and paired them with a Raspberry Pi, I thought I would give it a try myself.

I went to Ebay and purchased a Motorola Lapdock 100. It's a small unit with a cable that comes out of the back. This cable has a header with a Micro-USB and Micro-HDMI connector on it. General wisdom seems to be buying a series of adapters and cables and then sort of whittling them down until they fit. I dutifully bought the necessary bits but when the Lapdock arrived it seemed much easier to just take apart that cable header instead.

What I discovered is that inside that header are simply the two cables, held in place with some soft plastic. Having removed the plastic and cut back the cable, I had two loose cables that I could use freely.

Lapdock Cable Crop

With the addition of two adapters to bring the two micro connectors up to full size, which the Raspberry Pi needs, I was off and running.

Lapdock with Adapters

With that complete, the Lapdock becomes pretty useful. You can use it not just for the Raspberry Pi, but as a keyboard and display unit for any computer, provided you have a couple adapters.

The next hurdle is getting the Lapdock to turn on and off when you want it to. It’s pretty picky about how it works. The first thing to know is that once the screen turns on, you have to allow it time before it will turn on again. For instance, if you power it up and the screen turns on and then back off, it won’t turn on again until you’ve waited a couple of minutes. Just be patient.

I have a little system that works for me to reliably power it on but it’s a bit more complex than would be ideal.

Firstly, I power on the Pi via external power. That is to say, I use a USB port or wall adapter to power on the Pi. Once the Pi is on and I can see disk activity, I then plug in the HDMI cable for the lapdock and the USB cable. Every time I power it up this way, it works.

If I power on the Pi via the USB port on the Lapdock, the screen never shows anything and it just turns back off.

The good news is though, once the Lapdock is showing the Pi on its screen you can disconnect the Micro-USB power cable. The Pi will continue to get power via the USB cable from the lapdock, even though it's not plugged into the 'power' USB jack.

So:

Step 0: Unplug everything from the Lapdock, including the power adapter and wait for at least a minute.

Step 1: Power on the Pi using a micro-USB cable into another computer or a wall adapter and count to 5.

Step 2: Plug in the HDMI cable from the lapdock and the USB cable. You don’t need to rush but you should plug in the USB cable within about 10 seconds of the HDMI cable, or the screen will turn off and you have to wait again.

Step 3: Confirm that the Raspberry Pi is displaying on the screen of the lapdock and then disconnect the external power from the Pi.

Lapdock - No External Power

For me, this works every time without issue but there is a little more to it.

As you may know, the Raspberry Pi checks for an HDMI connection when it turns on. If one isn't detected it will disable the HDMI port. Since we are powering on the Pi without the HDMI connection, we have to force the Pi to use that connection even though nothing is detected. To that end, a file called 'config.txt' needs to be edited.

When you are setting up your Pi from your Mac or PC, you have to copy an image to an SD or MicroSD card. As a part of that process a couple of partitions are made on the card and files copied into them. Do some Googling if curious; others explain it much better than I could.

One of those created partitions is compatible with Windows and Mac machines and will appear as writable when you insert the memory card into your computer. It’s usually called boot, BOOT, NO NAME, etc. If you see a config.txt file in there you can add these lines to it, or you can simply create the file yourself. Be careful though, it has to be called config.txt and both Windows and Mac computers will try to add other extensions, which they will likely hide from you. I would double check the file name from the command prompt or terminal. (Or from the Pi, assuming you have a different monitor you can hook up to it, it’s located in /boot)


#This line forces the Pi to enable the HDMI connection even if nothing is detected.
hdmi_force_hotplug=1

#These lines tell the Pi about the display on the Lapdock so that the display looks as good as it can.
hdmi_group=2
hdmi_mode=0x51

#These last two lines are probably optional but I added them when testing.
# Should enable sound via the HDMI connection
hdmi_drive=2
#Boosts the signal on the HDMI cable
config_hdmi_boost=4

For me, the config.txt file gets erased from time to time. When I do a big OS update or a firmware update on the Pi, the file will have all of my settings removed, which causes my Lapdock to not work properly. Without those lines the Pi won't enable the HDMI port on boot and the Lapdock just shows a blank screen for a few seconds and goes off.

If that happens, just pop the memory card back into your computer and make sure those lines are present. Put the card back into the Pi and get back to happy computing.

Happy Computing!

Topslakr

Installing X-Plane 10 on Fedora 20

July 24th, 2014

I’m a bit of an aviation buff and have been known to spend a fair few hours playing flight sims. My favorite sim these days is X-Plane, and thankfully it runs on Windows, Mac and Linux. My current host for X-Plane is Fedora 20 and since there is a bit of extra effort to make it run properly, I thought I’d outline my process here.

First and foremost – This post assumes you have installed the appropriate graphics drivers and are not running whatever drivers were just included 'in the box', as it were. My system has an NVidia GeForce GT 640 and I use the akmod-nvidia drivers from RPM Fusion, which work great. A quick Google search should turn up an appropriate process for your computer.

Secondly, I am running the 64bit version of Fedora 20 which brings with it some headaches since the installer for X-Plane is 32bit, though they offer game executables in both 32bit and 64bit. Without several 32bit packages installed on the 64bit system, X-Plane is unable to run and complains about graphics issues. By default, Fedora will install the software and drivers for your graphics card in whatever your default architecture is, which for me is 64bit. Since the installer is 32bit, I need to add several 32bit graphics packages so that the 32bit installer can talk to my 64bit hardware.

Confused yet? Don't worry about it. Follow along and you'll be all set.

The X-Plane 10 release DVDs I have were the very first batch after the game’s release and there is a known problem with the installer. You can grab an updated copy from the X-Plane website or use their free demo.

The first thing to do with the installer is open up a terminal and run it from there but before it will run you’ll need to install some 32bit libraries so the system can execute the program.

yum install mesa-libGL.i686 glibc-2.18.i686 mesa-dri-drivers.i686 xorg-x11-drv-nvidia-libs.i686 sqlite.i686 libXinerama-1.1.3-2.fc20.i686 openal-soft.i686 -y

This is, for me, all of the packages I need to install. The system will grab other packages to satisfy the dependencies of those packages, which is fine. Once they’re installed, it still won’t work and complains that it can’t find ‘libGLU.so.1’. For whatever reason, installing ‘mesa-libGL.i686’ installs a file called ‘libGL.so.1’ and X-Plane wants that extra ‘U’ in the file name so a simple symlink will take care of it.

ln -s /usr/lib/libGL.so.1 /usr/lib/libGLU.so.1

It’s a simple fix that takes care of the issue.. not an easy one to figure out since it’s only a one character difference in the name. Those sorts of issues tend to make me a little cross-eyed before I solve them.

There is one further similar issue that turns up when you try to run the installed application. The X-Plane program has slightly different needs than the installer. For the 64bit application to run you'll need to create this symlink as well:

ln -s /usr/lib64/libudev.so.1 /usr/lib64/libudev.so.0

With that done you should be able to just run the executable. If you still have that command prompt open though, which is how I run it, you can check to see if you've satisfied all of the installer's dependencies with the ldd command. My executable is called 'X-Plane 10 Installer Linux', so from the folder containing the installer I run:

ldd "./X-Plane 10 Installer Linux"

Note the quotes, since the file name contains spaces. This gives me a list of needed files and their locations. If you see any lines that say 'Not Found' you can use 'whatprovides' to find them:

yum whatprovides */libX11-xcb.so.1

Yum will output the package name you need to install to satisfy the requirement. Keep an eye out for i686 versus x86_64 packages. On a 64bit system you have to specify .i686 at the end of the package name so that it will install the non-default 32bit version, as we did in our first 'yum install' command.

With all of that done, just run the command to launch the program and you’ll be all set. For me:

"./X-Plane 10 Installer Linux"

The 'X-System' installer comes up as expected. Make sure you have your DVD in the drive, or mount the ISO from a network share like I do, and you're off and running.

Once installed, you can choose to run either X-Plane-i386 or X-Plane-x86_64. Both will work though the x86_64 version can take advantage of more than 4GB of RAM, a limitation of i386.

Happy Flying!
