As often happens with computers of all types, log files build up over time. Generally speaking, the operating system will rotate these logs, which means it breaks them up into chunks, but it isn’t usually set to remove the old chunks. Over time, those log files can start to add up.
I am giving some thought to changing the provider of the VPS that hosts this web page, since its performance is inconsistent and the machine is not very reliable, and I was curious how much disk space the system actually needs to run well. After doing some light cleanup on the system, I did a check to see where on the disk I was using the most space.
I used the following command to find out which files and folders were using the most space:
du -a / | sort -n -r | head -n 10
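To see what the pipeline is doing, you can run the sort and head stages on a few fabricated du-style lines (the sizes and paths below are made up for illustration, not real measurements):

```shell
# Fabricated du output: size-in-blocks<TAB>path
printf '120\t/var/log\n8\t/etc\n4500\t/var/log/journal\n300\t/home\n' \
  | sort -n -r \
  | head -n 2
```

sort -n -r orders the lines by their leading number, largest first, and head keeps only the top entries, so the biggest consumers of disk space surface immediately.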
This command reported that the folders using the most storage on my system were the ones containing log files, under /var/log/. I cleaned a few of them up by hand, since I like to hold onto certain items longer than others, but one folder in particular, /var/log/journal/, contained several GB of data. This folder holds the journal files systemd creates and contains essentially every log entry for the system since its initial boot. I considered cleaning it up manually, but I wanted to keep this data in check going forward, which would help me run the webserver on a smaller disk. What I discovered was a simple command to clear out logs older than a certain date, or larger than a certain size.
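For reference, the cleanup uses journalctl's built-in vacuum options; the values below are illustrative, not the ones I ran:

```shell
# Delete archived journal files older than two weeks
journalctl --vacuum-time=2weeks

# Or cap the archived journals at a total size instead
journalctl --vacuum-size=500M
```

You can use either flag on its own, or combine them to enforce both an age and a size limit.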
The above command looks through all the journal files and purges any that are either older than the time limit or larger than the size limit.
I ran the command above and told it to keep the most recent 14 days' worth of logs, which deleted 5.9GB of old log data on my system. I then added the command to crontab so it would maintain the logs at that level:
2 1 * * * root journalctl --vacuum-time=14d
This entry runs the vacuum each day at 2 minutes past 1AM.
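An alternative to the cron job, which I did not use here but which achieves the same goal, is to let journald enforce the cap itself through its configuration file; SystemMaxUse= is the real setting, the value shown is just an example:

```ini
# /etc/systemd/journald.conf
[Journal]
SystemMaxUse=500M
```

After editing the file, restarting systemd-journald (systemctl restart systemd-journald) makes the limit take effect, and journald then rotates and deletes old journal files on its own.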
My server now uses 2GB of storage, instead of 8GB. Not a bad result…