Centos 7 – NextCloud-Client Installation Issues (September 2019)

September 16th, 2019

Quick post, which will hopefully be helpful to someone..

For the past couple of weeks I’ve been unable to update one of my Centos 7 servers. I chased it down this AM to the Nextcloud-Client software I use on that machine.

Long story short, there is a qt update in EPEL that isn’t compatible with Centos 7.6.1810. Upstream (Red Hat) has already released RHEL 7.7, but Centos has not yet caught up. EPEL is tracking upstream, and this qt update will work once Centos 7.7 is released.

In the meantime, I stumbled across this page on CentosFAQ.org: Nextcloud-client Currently Not Installable From EPEL In CentOS7

If you already have the Nextcloud-client installed, simply exclude the qt package from updates by adding ‘exclude=qt5-qtwebkit’ as a new line in your /etc/yum.conf file. If you already have an exclusion line, just put a comma after the last package you’re excluding and add qt5-qtwebkit.
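For reference, the relevant bit of /etc/yum.conf ends up looking roughly like this (your [main] section will already contain other settings; I’m only showing the line that matters here):

[main]
exclude=qt5-qtwebkit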

If you don’t have the client already installed, you will have to grab the ‘archive’ package linked in the above article.

Now that I have the updated exclusion in my yum.conf, running updates completes without issue.

Just remember to remove the exclusion when Centos 7.7 is released!

Topslakr

Switching from Nikon to Olympus: Part 1 – Isn’t Micro Four Thirds a Toy format?

February 8th, 2019

Though I’ve not been a very regular blogger, I have done a decent job of documenting my various photography gear related changes here since my first DSLR, a Nikon D40, back in 2006. From that first DSLR, through my foray into 35mm film cameras, medium format, and the various other DSLRs I moved to along the way, those milestones have been written about here. And, with the exception of the medium format gear, all of that equipment has always been from Nikon.

Perhaps to my detriment, I’ve always been 100% loyal to Nikon. SLRs, DSLRs, lenses, and accessories: I always bought Nikon gear. To a certain extent this stems from the idea that it makes sense to shoot the same brand of camera as your group of friends, which served me well; it was great to be able to share lenses and gear around as needed.

On the flip side though, Nikon has never made APS-C lenses a priority, and as they fall further behind with, to their mind, their more important full frame equipment, their DX lenses have really suffered. When Nikon released their first DSLRs, like the D1 and D2 series of cameras, they did make some inroads with professional quality DX lenses. Since those initial releases, though, they have not updated or refreshed those lenses, and major features like VR and newer glass coatings are lacking. As I was rebuilding my gear a couple of years ago around the Nikon D500, their high-end sports and wildlife camera, I had zero on-brand options for higher end glass. Since the reason I bought the D500 was to minimize weight without sacrificing camera features, buying their full frame lenses wasn’t an option. Their weight, size, and price are just ridiculous. Have you seen some of the glass they announced for their mirror-less Z Series cameras? Sure, the camera is a bit smaller, but the lenses are bigger!

Enough is enough. I want a smaller & lighter setup, access to stunning lenses, and a company committed to a system with the innovation to show for it. The hunt was on!

I did a quick pass of all the camera brands I could think of (Fuji, Canon, Sony, Pentax, Panasonic, Olympus, etc.) and scanned their current range of gear to see what I could find. Fuji has a growing business in APS-C sensor cameras, but their lenses were still very large and expensive. Canon has a LOT of product lines. They do offer a smaller line of mirror-less cameras, but they have not made it a priority to release high quality glass for it. After some deeper looking, it felt to me like Nikon’s approach to APS-C: a second class citizen. Sony is a very popular brand these days, but I’m not willing to purchase, and lug around, huge full frame lenses. No viable options worth the hassle of changing systems.

At this point, I had more or less decided that where I was with the gear I already owned was where I had to be. There were no better options. The only other option was micro four thirds, and no one serious shoots that toy format.

All of this was happening toward the end of 2018 and as family events approached, and evenings spent walking around some local cities, I found myself deciding not to take my D500 and a 35mm prime lens with me. Too much weight. Too much hassle. The resulting pictures were almost never worth the trouble of carrying around even just the D500 with a 35mm prime, let alone my full kit. I could have shot more, and better, images on my iPhone.

Ok. It’s time for drastic change. My gear is officially holding me back.

After years of being in the camp of people who looked down on micro four thirds, the attractiveness of a smaller and lighter kit of gear was enough of a draw now for me to give the system a little more consideration. The knee jerk reaction for most people, myself included, was that the tiny sensor was useless at higher ISOs and not worth considering. I did some searching around on Flickr for higher ISO images and nothing I saw was alarming, so I kept digging. While my Nikon D500 could shoot at ISO 1.6 million, even by ISO 6400 the images were pretty noisy. The images I found on Flickr for the OM-D EM-1 Mark II certainly didn’t look any worse.

Next was to look into the lens options and how they compared to Nikon’s lenses. For weight reasons, my current Nikon lens lineup included some primes but was mostly amateur zoom lenses. Image quality was fine, but AF speed is slower than with pro lenses, and they aren’t built to the same standard as pro gear, nor are they weather sealed. After some web searches I came across FourThirds.org, which has a pretty complete list of available micro four thirds lenses.

I started looking at pricing for the same quality of lens as I had with Nikon and was a bit confused by the prices. They were very, very inexpensive. I did a quick look around for lens reviews and found results in line with the comparable lenses I had on Nikon. Not perfect lenses, and they definitely had their faults, but workable.

Ok, so how about the higher end Olympus Pro lenses then?

This is where things got interesting. For relatively short money I could seriously upgrade my quality of lens without having to sacrifice by taking on a lot of weight. At this point, my thinking was that I could protect myself a bit from the noise of the camera sensor by shooting with wider apertures. Trading my F/3.5-5.6 zooms for constant aperture F/2.8 zooms would make a big impact on keeping my ISOs lower when shooting.

Ok, so I’m working my way toward starting to maybe consider micro four thirds as my next camera system.

The next hurdle, for me, is AF. My Nikon D500 was very sure-footed in terms of auto focus. My lenses weren’t the fastest at focusing, but it never hunted around and it locked on and held focus without issue. Always. I won’t get down into the weeds too much here, but DSLRs use a dedicated system for finding focus, called phase detection, and mirror-less cameras, generally speaking, use the image sensor to find focus with a system called contrast detection. DSLRs that have live view and video modes use this contrast detection system for focus in those modes. Both systems have their positive aspects, and negative aspects. (You can click through here to a site that breaks this down in detail.)

For tracking focus on moving objects though, phase detection is important because, using that system, the camera knows whether it needs to focus closer or further away, and by how much. Then it very quickly snaps the lens to that focus distance. With contrast detection the camera only knows it’s not in focus and has to hunt in and out to figure out how to get in focus. If a person is running toward, or away from, you, phase detect focus is what you need. It’s an area of very active development for the mirror-less vendors, but only Olympus offers it in micro four thirds, using some special, extra sensors embedded in their image sensor. So, that narrows down my camera choices. The other micro four thirds vendor, Panasonic, doesn’t have this feature and has announced they are not pursuing it.

Ok, so I’ve identified a camera manufacturer I need to look more deeply into. And, since Olympus and Panasonic are working in tandem on the micro four thirds format, there are two major manufacturers of lenses, as well as a myriad of other third party makers.

The next step in this journey is to get my hands on some of this gear and see for myself what it can, or can’t do. The only way to know for sure if this kind of change is going to work is to rent the gear, and use it like I normally use my gear.

We’ll dig into that in Part 2 of this series, and thankfully, that’ll even include some images instead of just a massive wall of boring text!

-Topslakr

Books Read: 2018

December 31st, 2018

Using my (now aged) Kindle, I do a fair bit of reading. For some reason, whenever I finish a book I put it into a folder on my Kindle named for the current year. These folders exist only on the Kindle itself, so I thought I might start to keep track of them here on the blog.

At the end of 2017 I was reading a lot of memoirs of people who moved into the wilderness, both in recent years and in centuries past. That continued into 2018 and the first book I read was:

Winds of Skilak, by Bonnie Ward
This was an excellent book, written by Bonnie, about her and her husband’s journey leaving Ohio and moving to an isolated island on Skilak Lake in Alaska. Her section on driving their jeep across the melting lake was a real nail biter!

Next, I read a book I received as a gift, Chernobyl 01:23:40: The Incredible True Story of the World’s Worst Nuclear Disaster. I’ve always been fascinated by the story of the nuclear disaster, but I had never read too deeply into it. I really enjoyed this book, which feels more honest and unbiased than much of the reading I had done to date. It was a fast read and did a nice job explaining what happened, both on a human and scientific level, without losing me in the finer points of nuclear power generation.

After that it was back to a few more ‘moving to the wilderness’ memoirs.

First was Our Life Off the Grid: An Urban Couple Goes Feral. This was a great, very pragmatic story about a Canadian couple that left the city to live on an island in coastal western Canada. This couple chose a harder life than most, but they had far more neighbors than many of the stories I had read. It was very well written and far less stoic than some of the others I read that were written by men.

Then I read Arctic Homestead: The True Story of One Family’s Survival and Courage in the Alaskan Wilds. This one was enjoyable, though it contained a couple of sections that I found completely unbelievable. It was written by the wife and mother of the family, and much of it read like she was trying to keep life happening as usual while her husband spent his time making rash or short-sighted decisions that had consequences for the family down the road. Without reading both sides of the story it’s hard to be sure just where the truth lies…

Next I read a book about the Appalachian Trail called A Walk in the Woods: Rediscovering America on the Appalachian Trail. I enjoyed the book, and the more diary-like approach to his writing. It was a journey in multiple parts, and it was great to hear about the situations and people. Many of the themes he touched on resurfaced in other books I read on the topic.

I then wanted to do a bit more reading about other trails here in my home state, New Hampshire. I decided to read Not Without Peril, Tenth Anniversary Edition: 150 Years of Misadventure on the Presidential Range of New Hampshire. To be frank, I did not enjoy this one. For someone who isn’t a hiker, or needs a graphic warning about the dangers of being unprepared and uninformed, it would be an important read. But, for me, it simply read like an unending string of people who made bad choices reaching the end of their life on the side of a mountain, or coming very close.

I decided to then read another story of someone walking the Appalachian trail, called Becoming Odyssa: Adventures on the Appalachian Trail. This was excellent. It’s written similarly to the previous book on the trail, but from the point of view of a young woman who chose to hike it more or less on her own. It was great to read about her journey and perseverance.

Next, it was time for a change of pace and I read Skunk Works: A Personal Memoir of My Years at Lockheed, which was an excellent book about the history of Lockheed Martin’s Skunk Works division by someone who was there during its heyday. Since I was very young I have loved the aircraft they designed, especially the SR-71, and I really enjoyed hearing about those years at the company.

Next up was quite a short book about life atop Mount Washington, written by a scientist living in the observatory for a year, called Among the Clouds: Work, Wit & Wild Weather at the Mount Washington Observatory. I read this very quickly and wished it had been much, much longer!

Having finished that book, touching on weather, I then read a book about a hurricane that struck Texas in 1900 called Isaac’s Storm: A Man, a Time, and the Deadliest Hurricane in History. This book was truly fascinating. It goes into meteorology at the turn of the century, and how that data was used (or not…). Beyond the actual story of the storm and those involved, it was incredible to see how people were interacting with each other. Husbands, in suicidal hubris, telling their wives to stop worrying about nothing and get back to baking, when in fact they should have fled the area. Impactful for sure.

After the previous book, I needed something a bit lighter and I dug into Agatha Christie’s The Mysterious Affair at Styles. Apparently this was her first novel, and it introduced Hercule Poirot to the world. I’ve long loved reading murder mysteries, and this was the first of many.

First though, I read a book given to me as a gift: To the Edges of the Earth: 1909, the Race for the Three Poles, and the Climax of the Age of Exploration. This was a real slog. It was given to me in hardcover, which is OK, though I do prefer to read on the Kindle. The book is actually pretty good, but the writing style differed greatly from my preference. I am comically bad at recalling which character is which in books (and TV shows, movies…), and this book is written by interweaving three stories about three groups trying to summit Mount Everest and reach the two poles. Had it been written in three parts, I’d have loved it, but it kept switching from group to group and I could never tell who was who. Very interesting material, and I’m glad I read it… but it was slow going for me. I was reading a book every 10 days or so, but this one took me months!

After that, it was surely time to enjoy reading again! We were going on a bit of a break and I wanted to fill up my Kindle with a few light stories to enjoy while away. I did a search for Agatha Christie’s books and snagged Dead Man’s Mirror: A Hercule Poirot Story, The Affair at the Bungalow: A Miss Marple Story, Problem at Sea: A Hercule Poirot Story, and The Witness for the Prosecution. I basically sorted by best reviews, filtered for $0.99 books, and bought the top six. I enjoyed them all, but there was an interloper.

The final book of the bunch, and no doubt an advertisement Amazon slipped past me, was a book by Faith Martin called MURDER ON THE OXFORD CANAL: a gripping crime mystery full of twists. I really enjoyed this book, which was a bit meatier than the Christies, and I then went nuts and read 11 more in the series. They all follow a DS in Oxford, England as she solves murders. They are light reading and a bit predictable in form, but I enjoyed them. The actual murder investigation is always interesting, and there is a good bit of procedural detail in them. Additionally, she weaves in extra story lines with the books’ primary characters to create a larger story arc that takes place across three or four books. Each book has one such story line coming to a close, while others are percolating away. It was these extra details that kept me reaching for the next book. I read the following:

1. A Narrow Escape (2004)
aka Murder on the Oxford Canal
2. On the Straight and Narrow (2005)
aka Murder at the University
3. Narrow Is the Way (2006)
aka Murder of the Bride
4. By a Narrow Majority (2006)
aka Murder in the Village
5. Through a Narrow Door (2007)
aka Murder in the Family
6. With a Narrow Blade (2007)
aka Murder at Home
7. Beside a Narrow Stream (2008)
aka Murder in the Meadow
8. Down a Narrow Path (2008)
aka Murder in a Mansion
9. Across the Narrow Blue Line (2009)
aka Murder in the Garden
10. A Narrow Point of View (2010)
aka Murder by Fire
11. A Narrow Exit (2011)
aka Murder at Work
12. A Narrow Return (2012)
aka Murder Never Retires

There are a few more yet in the series, which I may get to next year, but I’ve put them to one side for the moment and have started reading some more varied murder mysteries.

I read a pretty good mix of fiction and non-fiction this past year, which was a surprise to me. Until the second half of 2017, I’m not sure I had ever sat down and read a non-fiction book for pleasure. We shall see what 2019 holds. If I have the time, I’ll try to post monthly about my reading through the year. That being said, if I end up bingeing on one author’s collected murder mysteries, perhaps a broader digest is best…

Setting Up Nagios – Working with Cisco’s CIMC

September 7th, 2017

This was far more challenging than it needed to be. Cisco makes some SDKs available for use with Nagios, but I was totally unable to make the system see them. They are just Python packages so I didn’t expect much trouble, but I was totally at a loss. I thought, initially, that the issue was that the installer dumped the files into /usr/lib/python/site-packages/ instead of the 64bit path /usr/lib64/python/site-packages/, but no amount of copying and permissions changes made the system able to see the dependencies. For those interested, here is a link to the Cisco Nagios tools, which was working as of September 2017: Nagios Plug-Ins for Cisco UCS. In the end I am making use of none of that package, or its requisite dependencies.

Instead, I found a script written in Go, that worked a treat and required no other dependencies to work. You can find it here on Github: check_cisco_ucs

Having never used anything written in Go before, I had a few things to learn, but it turned out to be very simple. I downloaded the ‘check_cisco_ucs.go’ script to my Nagios server. I tried to run it, and installed a few bits to that end, before I understood I needed to compile the script into an executable. That turned out to be exceedingly simple.

First, I installed the ‘golang-bin’ package, which is in EPEL.

yum install golang-bin

I then navigated to the folder containing the check_cisco_ucs.go script and ran the following command:

go build check_cisco_ucs.go

With that, I was able to run the script. The Github page has a series of example commands, which I started firing at some of the Cisco C220 M4 servers I needed to monitor. The script was last updated in 2014 and while much of it worked, not all of the items the author used as examples still work with the updated BIOS on my systems.

This one worked fine, once you update the IP address and user/password as required. It takes 10-15 seconds to run and then reports back accurate information about the RAID setup and status of the system. The only caveat is that, since we run CIMC over https, I needed to add the ‘-M 1.2’ flag so that it would accept TLS 1.2, which CIMC was running.
./check_cisco_ucs -H 10.18.4.7 -M 1.2 -t class -q storageVirtualDrive -a "raidLevel vdStatus health" -e Optimal -u admin -p pls_change

With this I knew the solution would work and I started to go through the steps to make the required checks appear in Nagios. I moved the check_cisco_ucs file into the folder Nagios expects to find the command files, which for me on Centos 7 is /usr/lib64/nagios/plugins.

I then created a new file, called check_cisco_ucs.cfg, and put it into the folder I have configured my Nagios install to look for commands. Within that file I’ve listed my commands, which are a bit messy:

define command {
    command_name    check_cisco_ucs_storage
    command_line    $USER1$/check_cisco_ucs -H $HOSTADDRESS$ -M 1.2 -t class -q storageVirtualDrive -a "raidLevel vdStatus health" -e Optimal -u admin -p yourpass
}

I then added the relevant check to the server object I wanted to check:

define service {
    use                     generic-service
    host_name               HARVM1-cimc
    service_description     RAID Controller Status
    check_command           check_cisco_ucs_storage
}
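
For completeness, the service above attaches to a host object for the CIMC itself. Mine looks roughly like this; the template and address below are placeholders rather than my actual values:

define host {
    use         linux-server
    host_name   HARVM1-cimc
    alias       HARVM1 CIMC
    address     (**CIMC IP Address**)
}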

Do a quick check of your Nagios config to make sure it’s sane and working:

nagios -v /path/to/nagios.cfg

I was able to do the same for the next example check on the GitHub page, for information about local disks.

From here though, the checks I wanted didn’t work as expected. Reading light status was problematic and the power supply command needed a small tweak.

The author suggested a powersupply command as:
./check_cisco_ucs -H 10.18.64.10 -t class -q equipmentPsu -a "id model operState serial" -e operable -u admin -p pls_change

But, in the 3.0 version of CIMC I’m running, the ‘operState’ option is no longer present and instead I needed to use ‘operability’. My Nagios command is this tweaked version:
$USER1$/check_cisco_ucs -H $HOSTADDRESS$ -M 1.2 -u admin -p yourpass -t class -q equipmentPsu -a "id model operability serial" -e operable

The final check I wanted was to let me know if any of the status lights were amber, instead of green. The author suggested a check to watch one LED to ensure it was green but this reported no information. I instead rewrote the command to look at all of the lights and tell me if any are amber, instead of confirming one light was green.

$USER1$/check_cisco_ucs -H $HOSTADDRESS$ -M 1.2 -u admin -p password -t class -q equipmentIndicatorLed -a "id color name operState" -z -e amber

This command looks at the lights, which gives me this output:
1,green,LED_PSU_STATUS,on
2,green,LED_TEMP_STATUS,on
3,green,LED_FAN_STATUS,on
4,green,LED_HLTH_STATUS,on
5,blue,FP_ID_LED,off
0,green,OVERALL_DIMM_STATUS,on (0 of 6 ok)

The key here is that the -z flag means that if the state is NOT found, i.e. there is no amber light, the check is considered OK. If there is an amber light, the check will fail and alert me. Perfect.

The final hurdle between finding the plugin and making my checks work was a couple of hours’ effort with the Cisco UCS Rack-Mount Servers Cisco IMC XML API Programmer’s Guide. Using that guide, and some curl commands, I was finally able to get out the data I wanted and transform it into commands the plugin could use.

First step was to authenticate to the CIMC:
curl -d "" https://Server IP/nuova -k

That final ‘-k’ is required to make curl ignore the SSL cert on the system, which is just a basic self-signed.

That command outputs a cookie, which you then pass in future commands to remain authenticated.

curl -d "" https://10.18.80.101/nuova -k

Be careful not to mix ‘ and “. The “ is used to quote the command being sent via curl, and the ‘ is used to quote the values inside that command. Mixing them will cause the commands to fail.
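
To make that concrete, here is a rough sketch of what those calls look like, based on the XML API guide (the aaaLogin and configResolveClass methods; the cookie value below is obviously made up). First, log in; the session cookie comes back in the ‘outCookie’ attribute of the response:

curl -d "<aaaLogin inName='admin' inPassword='pls_change'/>" https://10.18.80.101/nuova -k

Then pass that cookie when querying a class, for example the power supplies:

curl -d "<configResolveClass cookie='1234567890/abcdef' inHierarchical='false' classId='equipmentPsu'/>" https://10.18.80.101/nuova -k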

That should be enough to get you going to find any other commands you might be interested in.. I’ll try to fill in some more detail once I’m finished rolling out the Nagios install.

Setting Up Nagios – Installing on Centos 7

September 6th, 2017

Long story short, I changed jobs about 6 months ago and found myself in a new position without any appreciable monitoring of the hardware for which I’m responsible. That needed to change as soon as I started to have time to put some hours into it. I did a little testing and poking around with some other monitoring tools but kept coming back to Nagios as the right choice, despite the mountain of work involved in setting it up. There is such a large community of help and plugins available for it, plus it’s a tool I’ve used at other jobs for many years.

The actual process of standing up a Nagios server is incredibly simple, but that initial server does almost nothing. Let’s go over these initial steps today and I’ll post little updates of the settings and configs I’ve rolled into Nagios over time to monitor different hardware and software platforms.

Step 1 for me was to do a minimal Centos 7 install. I’m a long-time user of Centos and this is the default starting point for so many of my projects. After that initial install I usually install a few tools I like to have around and lower some of the security on the system, to prevent myself getting stuck solving an issue that isn’t within the application.

For me, I run this command on basically all of my Centos 7 installs:

yum update -y && yum install epel-release -y && yum install htop wget mlocate -y && updatedb && systemctl disable firewalld && vi /etc/selinux/config && reboot

This command does the following:
1.) Updates all packages on the system.
2.) Installs the EPEL repository, for additional packages I like to have.
3.) Installs several packages I like to have, some of which are in the EPEL repo, hence why it’s third in the list.
4.) updatedb builds a database of files on the system for easy/fast searching. Once the mlocate package is installed you can run it, and then use the ‘locate’ command to find files.
5.) Disables the firewall, which I leave off for the initial config of the machine. Once the machine is working, I switch it back on and open only the ports required (see the sketch after this list). This, for me, is faster than guessing back and forth when something isn’t working… is it the software? The firewall? I’m never sure. I would only do this on a machine not exposed to the public internet.
6.) Launches vi so I can edit the selinux level on the system. I change it to permissive while I set up the server, then make the necessary changes to selinux policy once the system has stabilized and move it back to ‘enforcing’. This isn’t a step to take lightly and is very important.
7.) Once you save and quit vi, the system will process the final command and reboot the machine.
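
For reference, when I do switch the firewall back on later, it’s just a matter of re-enabling firewalld and opening the handful of ports the box actually needs. A rough sketch for a Nagios box serving its web UI over plain http:

systemctl enable firewalld && systemctl start firewalld
firewall-cmd --permanent --add-service=http
firewall-cmd --reload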

When the system returns to service you’re ready to start installing and setting up Nagios. With EPEL already setup, it’s just a couple quick commands.

Firstly, you need to install the required packages. This can be done with one command:

yum install nagios nagios-plugins-all

Once done, you need to enable and start the two required services, httpd (Apache) and nagios.

systemctl enable nagios httpd
systemctl start nagios httpd

Once done, it’s best to set a password for the default nagiosadmin user, and set up any other users you might want. To set the password for the nagiosadmin user:

htpasswd /etc/nagios/passwd nagiosadmin

To create a new user:

htpasswd /etc/nagios/passwd (**User Name**)

With this complete you're all set. Nagios is online and working. You can access it on your server at:

http://(**server IP Address**)/nagios

You'll see a couple of standard checks running on 'localhost', one of which is in an error state due to how Centos configures httpd. Nothing to be concerned about; I actually delete that check. Next time, we'll talk more about making Nagios do something useful, like monitoring something beyond itself.

Libresonic on Centos 7 with SSL

June 7th, 2017

I’ve been a happy iTunes Match user since the service was introduced. I have a large music collection and I don’t always want to dedicate the required amount of iPhone and computer storage to keeping it available all the time on all my devices. iTunes Match lets Apple deal with storing the whole thing and allows me to just download what I want on a given device or stream music I own to any device I’d like. It’s been $25/year well spent.

That being said, with streaming music plans taking over the market, I can’t imagine Apple’s going to want to offer this service forever, plus I prefer to self-host as much of my digital needs as possible. Finding a way to replicate these features though has proven a bit tough. There were a few projects that would allow me to stream my music but I don’t want to use up my whole data plan streaming songs over and over again or have my music drop out every time I’m in a dead zone. Thankfully, I found Libresonic, which in concert with an app on my iPhone checks a lot of these boxes.

I installed Libresonic several times before I was happy with the install, and I wanted to write up a quick post so I’ll be able to do it again when the server I’m hosting it on gets replaced.

My final install has Libresonic running on the same web server I use for a few other tasks, with the music library linked via a Samba share instead of copied locally. Additionally, I am proxying the Tomcat web server on port 8080 through my Apache config, which is set up for SSL.

Step one is to install OpenJDK and Tomcat (on Centos 7 the OpenJDK package is named java-1.8.0-openjdk). This is a pretty simple command:
yum install java-1.8.0-openjdk tomcat
Once done, it’s basically good to go. You can enable tomcat, which is a java based web server, to start on boot.
systemctl enable tomcat

Now, you need to download the ‘war’ file for Libresonic. Head over to this link: http://libresonic.github.io/download/, and download the ‘WAR Version’, not the Standalone Version.

Move the file to where Tomcat is expecting it, which is /var/lib/tomcat/webapps/ on my Centos 7 server. Be sure to rename the file to just ‘libresonic.war’ instead of ‘libresonic-vXX.war’ as the name of the file, before the extension, is the location it will be served from. If you leave the file as ‘libresonic-vXX’ it will be available at http://your.server:8080/libresonic-vXX.
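
Assuming a hypothetical libresonic-v6.1.war sitting in your download directory, the move and rename is a single command:

mv libresonic-v6.1.war /var/lib/tomcat/webapps/libresonic.war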

Once that is in place, create a folder and make it owned by the tomcat user for libresonic to use for config data.

mkdir /var/libresonic
chown -Rfv tomcat:tomcat /var/libresonic

Now, we can finally start tomcat, which will bring up Libresonic.

systemctl start tomcat

Now, keep an eye on the logs your system is generating to know when it’s online. Even though tomcat will start, Libresonic will take a minute or more to fully be ready.

journalctl -f

Keep watching until you see something in the log that says how long the app took to start.

Once done, head over to localhost, or your server’s IP, followed by the port and sub-location. Of course, make sure firewall rules are set properly, etc. For instance: http://localhost:8080/libresonic.

With that working, I turned to making it work via my Apache config. I am running Apache with a cert from Let’s Encrypt. I edited my ssl.conf file and added the following two lines under the applicable ‘VirtualHost’:


ProxyPass /libresonic http://127.0.0.1:8080/libresonic
ProxyPassReverse /libresonic http://127.0.0.1:8080/libresonic

This bit of code takes the internal tomcat server on the localhost and makes it available via Apache, over SSL. It allows the server to respond when I try to access Libresonic. For instance, if I try to access https://my.awesome.server/libresonic Apache is able to re-point that request through the tomcat server, which is hosting at http://127.0.0.1:8080/libresonic.

With that done, I was able to get to the Libresonic page, login, and setup the system.

To give it access to my music collection, I mounted a SMB share from my main storage server and then symlink’d the full path down to a simpler location, to get rid of any issues with folder names with spaces, permissions, etc.

My only recommendation in this area would be to make sure you mount the SMB share, via fstab, with permissions for the tomcat user. In my case I added a few extra options, uid=tomcat,gid=tomcat, to the share’s line in the server’s fstab file to make sure the mounted share can be read by the user that will be accessing it. Then, when I pointed the Libresonic server at the relevant folder, it was able to read the data there.
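
For what it’s worth, the fstab line ends up looking something like the following; the server name, share, mount point, and credentials file are placeholders for my actual values:

//(**storage server**)/music /mnt/music cifs credentials=/etc/samba/music.creds,uid=tomcat,gid=tomcat 0 0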

With that all done, I was all set. I can access my server, via SSL, and use apps on my smart phone to talk to the server as well as download music for offline listening. I am, right now, using SubStreamer on my iPhone and on Android.

When the Photo Bug Bites

May 24th, 2017

I remember the first time I saw a moment that I knew I had to photograph. I wonder if other photographers are the same.

I was in Maine, spending a weekend with a friend at his lake side cabin. I woke up on a cool late autumn morning and the whole lake was covered in rising mist. The water was much warmer than the cool dry air and up came the moisture.

My only camera was a 3 megapixel ‘Kodak Easy Share’ I had gotten for Christmas a few years before. This was turn of the century, inexpensive, pocket camera technology at its most average. It was a fine snapshot camera, and I was glad to have received it as a gift, but I never expected that misty lake would impact my life to such a degree.

After that trip, I did a bit of research and bought myself a little Canon point and shoot camera. It was 8 mega pixels and had a bit of zoom ability built-in. I fully expected to love that camera and never need to upgrade again. In the end though, I found it very frustrating.

I didn’t yet realize how much I didn’t know, and I was very disappointed. I never took any pictures with that camera that I really liked. The zoom and the image quality were fine, in good lighting, but the lack of image stabilization was a real issue; I didn’t even know it existed as a technology at that time. I found the controls, and the general slowness of the camera, just as painful as the Kodak I had been using. I carried it around with me, but I never really enjoyed it. My interest waned.

Then, Christmas Eve came along. One of the pastors of my church had bought a Nikon D80 but couldn’t use it during the service since he needed to be up front. He said I could use it if I wanted to. I was on sound and projection duty up in the balcony so I was in a pretty good spot to take some pictures of the service and the traditional candle lighting.

I’d never even seen an SLR camera up close so he gave me a few pointers and showed me how to zoom, the focus points, etc., and then went to start the service.

I put my eye to the eye piece, zoomed in a bit and pressed the shutter button. Much to my surprise, the camera actually fired when I told it to, not a few seconds later, and the image that popped up on the back of the camera was exactly what I was expecting.

It was immediately clear to me that my foray into point and shoot cameras was over and that a DSLR would need to be purchased. Finally, I found a camera that got out of my way and let me capture the images I could see in my mind, instead of the blurry, delayed mess from my point and shoots.

And so began what is now 12 years and counting of shooting Nikon. First, the D40. Then the D300. What a beast! I loved that thing! I sold it though and replaced it with a D7000, in preparation for lending it to my dad for a trip to Alaska. Sending him on a once in a lifetime trip with a camera lacking any auto-modes didn’t seem wise. I lent the D40 to a friend and it was unfortunately stolen, but I kept the D7000 for several years. I never bonded with it like I had my D40 and D300, and finally, due in no small part to the persistence of my wife, pulled the trigger on the D500.

What a delight that machine is. I am happy to invest the time in configuring the camera, and learning how to quickly reconfigure it as my subjects change, but once I’ve set it up to suit the situation I just want it to get out of the way. The D500 does that for me, without the full weight and expense of full frame bodies and lenses.

Automated Backups of OpnSense

January 31st, 2017

I use rsnapshot, on Centos 7, to manage the vast majority of my backups across a myriad of linux servers both within my network and across the globe. I’ve never blogged about the entirety of that process, but I wanted to post a quick note about how I use rsnapshot to also backup the configuration of my router.

Until recently, I had been using this process to backup my pfSense routers. With my switch to OPNsense though, I was pleased to see the process is the same.

Basically, we just need to make a copy of a single folder on the machine, ‘/conf’, which is located right off of the root on both pfSense and OpnSense.

Step one is to make sure you have a user setup on the router that can connect over SSH. For OpnSense you need to log into the web portal and navigate to System -> Settings -> Administration. Check the box to ‘Enable Secure Shell’ and ‘Permit Password login’. (We’ll disable password logins in a moment)

Next, go to System -> Access -> Users and create a new user that you will use for SSH access. Once created, you then need to also give the user the ‘effective permission’ of ‘System: Shell Account Access’, which is just below the ‘Group Membership’ section of the user’s account details.

Now, to allow rsnapshot to login as the user without prompting for a password you need to copy over to the router the public SSH key from the machine and user that runs rsnapshot. I prefer to use the ‘ssh-copy-id’ tool to do that. On your rsnapshot server, logged in as the user who runs rsnapshot, run ‘ssh-copy-id username@router_ip’. Enter your password and it will copy the required keys. The username to use is the one you created in the previous step on the router. The IP address should be the IP address of the router. I don’t allow external SSH access to my router, so I use the internal IP.

You can verify that the copy-id tool worked by trying to ssh into the router ssh username@router_ip. It should take a moment, and then drop you at a terminal prompt on the router. You can now take a look around and see what the system has to offer. You might do cd /conf and then ls to see the config file and history.

You can now disable ‘Permit Password Login’, since the user required for running the backup is setup to login without interactively entering a password.

With that part done you now need only update your rsnapshot.conf file. Don’t forget that the file uses tabs to delineate options, not spaces or commas.

I added a line to my rsnapshot.conf that says the following:

backup username@router_ip:/conf/ WDR-Router

The ‘WDR-Router’ piece at the end is the folder the resulting files are kept in within the rsnapshot folder structure on my server.
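
For context, the relevant chunk of my rsnapshot.conf looks roughly like the following (the snapshot_root and retain values are placeholders, and every field separator must be a tab, not spaces):

snapshot_root	/backups/rsnapshot/
retain	daily	7
backup	username@router_ip:/conf/	WDR-Router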

With that done, your router config files will be included in your backup schedule. Should something happen to your router, like hardware failure or a bad config gets pushed out, you can quickly recover.

Worst case scenario, you can just setup a new, clean OpnSense install and load your most recent config.xml file through the web portal via System -> Configuration -> Backups.

Good luck!

Cleaning up old Logs on Centos 7

January 16th, 2017

As often happens with computers of all types, log files build up over time. Generally speaking, the operating system will rotate these logs, which means it breaks them up into chunks, but it isn’t usually set to remove the old chunks. Over time, those log files can start to add up.

I am giving some thought to changing the provider of the VPS that hosts this web page, since performance is not consistent and the VPS is not very reliable, and I was curious to know how much disk space I needed for the system to run well. After doing some light clean up on the system I did a check to see where on the disk I was using the most space.

Using the following command, I did a search to find out which folders contained the most data:

du -a / | sort -n -r | head -n 10

This command reported that the folders using the most storage on my system were the folders containing log files, ‘/var/log/’. I cleaned a few of them up by hand, since I like to hold onto certain items for longer than others, but one folder in particular, /var/log/journal/, contained several GBs of data. This is the folder that holds the log files SystemD creates and contains basically every log entry for the system since its initial boot. I considered cleaning it up manually, but I wanted to know if I could keep this data in check going forward, which would help me run the webserver on a smaller disk. What I discovered was a simple command to clear out logs older than a certain date, or larger than a certain size.

journalctl --vacuum-time=14d
or
journalctl --vacuum-size=1G

The above commands look through all the log files and purge any that are either older than the time set or larger than the size limit set.

I ran the command above and told it to keep the most recent 14 days worth of logs, which deleted 5.9GB of old log data on my system. I then added the command to crontab so it would maintain the logs at that level:

2 1 * * * root journalctl --vacuum-time=14d

This command runs the vacuum command each day at 2 minutes past 1AM.
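
As an alternative to the cron job above, journald can also cap its own disk usage persistently. Setting a limit under the [Journal] section of /etc/systemd/journald.conf does roughly the same job:

SystemMaxUse=1G

After changing the file, restart the service with systemctl restart systemd-journald.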

My server now uses 2GB of storage, instead of 8GB. Not a bad result…

RT @BrentButt: IT’S OFFICIAL!!! CORNER GAS: ANIMATED is a real thing!!! (And I’m loving how it’s coming together!) https://t.co/1tb6ieA8…

December 19th, 2016
