“Why doesn’t this just work?!” is something I’m apt to say about all kinds of devices or systems more complex than a mechanical lever. When problems arise and aren’t due to normal wear or obvious damage, the root cause usually amounts to my ignorance of someone else’s engineering design decision. I’d argue this isn’t necessarily always my fault. It is not easy to truly understand all the nuances of a large system, especially one you didn’t construct. Enter the art of debugging. For better or worse, complexity is what keeps many sysadmins, programmers, network engineers, and security practitioners gainfully employed. Here is a short debugging story where my expectations collided with the design of a minor, otherwise inconsequential change someone applied to a default operating system build.
On many Linux-based systems a set of system scripts will run hourly, daily, weekly, or monthly. This is accomplished with the help of a small utility called from the system crontab. According to the run-parts man page, run-parts essentially “runs all the executable files” in a specified directory. So for example, on a Debian-based system, here is part of what the system /etc/crontab might contain:
SHELL=/bin/sh
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
# Example of job definition:
# .---------------- minute (0 - 59)
# | .------------- hour (0 - 23)
# | | .---------- day of month (1 - 31)
# | | | .------- month (1 - 12) OR jan,feb,mar,apr ...
# | | | | .---- day of week (0 - 6) (Sunday=0 or 7) OR sun,mon,tue,wed,thu,fri,sat
# | | | | |
# * * * * * user-name command to be executed
17 * * * * root cd / && run-parts --report /etc/cron.hourly
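As an aside, if you want to see which scripts run-parts would pick up from a directory without actually running them, it has a --test option. The output below is only illustrative, with a made-up script name; note that by default run-parts only considers files whose names contain no dots, which is why cron scripts typically have extension-less names:
run-parts --test /etc/cron.hourly
/etc/cron.hourly/my-hourly-script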
Scripts in the /etc/cron.hourly directory will be executed at 17 minutes past the hour per this configuration. On dozens of systems with exactly this hourly schedule I put a simple, custom script in the cron.hourly directory. Sure enough, my script runs at exactly 17 minutes past the hour on almost all of them. Almost. The exceptions were VMs from one specific hosting provider we’ll call Provider X. The script ran every hour on all systems at Provider X, but none of them ran at 17 minutes past the hour. In fact, each of them ran at its own odd time every hour: on one system it ran at 23 minutes past the hour, on another at 46 minutes past the hour, and so on. What the heck is wrong with these VMs from Provider X?
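For concreteness, the custom script in question was nothing elaborate. A stand-in like the following (the filename and log path are made up for illustration) is enough to record exactly when the hourly run fires; it just needs to be executable and, per run-parts’ naming rules, have no dots in its filename:
#!/bin/sh
# Hypothetical /etc/cron.hourly/hourly-marker: log when each hourly run starts
date '+%F %T hourly cron ran' >> /var/log/hourly-marker.log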
My first assumption was that it was a clock issue. However, the system clocks and all the timestamps in the logs looked fine. I then moved on to some advanced troubleshooting tricks. Yep, power off and on again. Let’s face it, sometimes those lazy troubleshooting techniques fix a lot of problems; not necessarily permanently, but they can be satisfying enough if they let us move on and accept the occasional inconvenience. An OS is a complex system, and this little anomaly was not the kind of thing I wanted to spend a lot of time fooling with, so it was worth a shot. Unfortunately, it didn’t help. I thought maybe a software or OS package had ended up in a weird state, so I even tried re-installing some things. Still no luck. It wasn’t going to be that easy.
For months I was resigned to letting the hourly script run at odd times on these systems. I took a brief look at it again once or twice, giving up each time. I considered dropping this hosting provider altogether, thinking something about their systems was just too weird to trust. Then one day I happened to notice the hourly run-parts sitting in the process list. On all my other systems, the hourly run-parts doesn’t stick around long enough for me to even see it running. Aha, a clue. Is it stalled? Is it churning? Blocked? I made myself root and ran the crontab entry just as root would, with one small difference:
sudo su -
cd / && strace run-parts --report /etc/cron.hourly
I had installed strace so I could see exactly what would happen when this process ran. This is when I noticed something strange (output abbreviated for clarity):
stat("/usr/local/sbin/sleep", 0x7ffe1ec55a70) = -1 ENOENT (No such file or directory)
stat("/usr/local/bin/sleep", 0x7ffe1ec55a70) = -1 ENOENT (No such file or directory)
stat("/usr/sbin/sleep", 0x7ffe1ec55a70) = -1 ENOENT (No such file or directory)
stat("/usr/bin/sleep", {st_mode=S_IFREG|0755, st_size=43808, ...}) = 0
stat("/usr/bin/sleep", {st_mode=S_IFREG|0755, st_size=43808, ...}) = 0
Aha! There is a call somewhere to sleep! However, run-parts does not do this. It has no option to sleep for some period of time before running the scripts in the specified directory. I had looked at the run-parts on this system before and it seemed fine. What did I miss? It was just the stock run-parts. But it was and it wasn’t. When I looked at run-parts previously, I was looking at it from my unprivileged user’s perspective. The stock run-parts usually lives in /bin or /usr/bin. Scroll back up and take a look at the PATH environment variable in the system crontab above. A few directories precede both /bin and /usr/bin. Guess what? Yep, there was another run-parts in /sbin, and when the system crontab ran, root would see that one first.
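A quick way to catch this kind of shadowing is to search the crontab’s PATH for every match, not just the first one. Something like this should do it (the output shown is what I would expect on one of these VMs, not a capture):
env PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin which -a run-parts
/sbin/run-parts
/bin/run-parts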
The /sbin run-parts was not even a binary, but a shell script:
#!/bin/bash
parameters="$@"
ip=$(hostname -i)
RANDOM=${ip//./}
test -z "${parameters##*/etc/cron.*}" &&
sleep $((RANDOM % 3600))
exec /bin/run-parts "$@"
There’s that sleep. This script takes the parameters from the system crontab and calls the real run-parts with them, but not before it does the following:
- Gets an IP address of the system (hostname -i)
- Removes the dots in the (presumably) IPv4 address from step 1
- Checks to make sure the /etc/cron string is in the argument list
- Sleeps 0-3599 seconds, using step 2’s value to seed bash’s RANDOM generator
In a nutshell, the IPv4 address of the system determines a delay of anywhere from zero seconds up to 59 minutes and 59 seconds. One subtlety: assigning a value to RANDOM in bash seeds its pseudo-random generator rather than setting the variable directly, so the delay is not simply the IP digits modulo 3600; it is the first value drawn from a generator seeded with those digits, taken modulo 3600. The practical effect is the same, though. A system at, say, 192.0.2.1 seeds the generator with 192021 and gets back one fixed number of seconds, hour after hour, so the real run-parts always starts at the same number of minutes past the hour on that system, and at a different, but equally consistent, time on a system with a different address.
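A small experiment along these lines (a sketch, not the provider’s code) shows why the delay is stable: re-seeding RANDOM with the same digits yields the same first draw every time on a given bash build.
#!/bin/bash
# Seed bash's RANDOM with the digits of an example address, then draw once.
# Same seed on the same bash build means the same delay on every run.
ip="192.0.2.1"
for run in 1 2 3; do
    RANDOM=${ip//./}              # re-seed with 192021
    echo "run $run: delay $((RANDOM % 3600)) seconds"
done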
The next obvious question is… why? Why is this script-based run-parts here? Where did it come from? I didn’t ask the provider, but I think I know the reason. The provider, or whoever built their OS template, probably thought it would be a good idea to stagger system cron jobs in their cloud-based virtualization environment to avoid synchronized CPU, disk, and memory spikes across a number of VMs on a single host.
I’ve never seen anyone but hosting Provider X do this, so it was a unique anomaly. To me it was annoying, because I wanted the hourly crontab to run at essentially the same time across all my systems on all providers. If my hypothesis about why it got there is correct, I can understand and appreciate the reasoning, but aside from getting this blog post out of it, it was irritating to have to spend any time figuring it out. Ultimately it did just work; it just didn’t work as I expected until I spent far more time than I would have liked understanding someone else’s unusual and, to me at least, undocumented design decision.