So…containers. Why? How? What? Start here if you haven’t.

I tweeted a link today about running Ceph inside of Docker, something that I would like to give a shot (mostly because I want to learn Docker better than I currently do, and I’ve never played with Ceph, which has a lot of interesting stuff going on).

I got to thinking about it, and realized that I haven’t written much about Docker, or even containers in general.

Containers are definitely the new hotness. Kubernetes just released 1.0 today, Docker has taken the world by storm, and here I am, still impressed by my Vagrant-fu and thinking that digital watches are a pretty neat idea. What happened? Containers? But, what about virtualization? Aren’t containers virtualization? Sort of? I mean, what is a container, anyway?

Let’s start there before we get in too deep, alright?

UNIX (and by extension, Linux) has, for a long time, had a pretty cool command called ‘chroot’. The chroot command allows you to point at an arbitrary directory and say “I want that directory to be the root (/) now”. This is useful if you have a particular process or user that you want to cordon off from the rest of the system, for example.

A chroot has some really big advantages over a virtual machine. First, it’s not very resource intensive at all. You aren’t emulating any hardware, and you aren’t spinning up an entire new kernel; you’re just moving execution over to another environment (usually by executing another shell), plus starting any services that the new environment needs to have running. It’s also very fast, for the same reason. A VM may take minutes to spin up completely, but a lightweight chroot can be ready in seconds.

It’s actually pretty easy to build a workable chroot environment. You just need to install all of the things that need to exist for a system to work properly. There’s a good instruction set out there, but it’s a bit outdated (from 2010), so here’s a quick set of updated instructions.

Just go to Digital Ocean and spin up a quick CentOS 7 box (512MB is fine) and follow along:

# rpm --rebuilddb --root=/var/tmp/chroot/
# cd /var/tmp/chroot/
# wget <URL of the centos-release RPM from a CentOS 7 mirror>
# rpm -i --root=/var/tmp/chroot --nodeps ./centos-release-7-1.1503.el7.centos.2.8.x86_64.rpm
# yum --installroot=/var/tmp/chroot install -y rpm-build yum
# cp /var/tmp/chroot/etc/skel/.??* /var/tmp/chroot/root/

At this point, there’s a pretty functional CentOS install in /var/tmp/chroot/ which you can see if you do an ‘ls’ on it.

Let’s check just to make sure we are where we think we are:

# pwd
/var/tmp/chroot

Now do the magic:

# chroot /var/tmp/chroot /bin/bash -l

and voila:

# pwd
/

How much storage are we paying for this? Not too much, all things considered:

[root@dockertesting tmp]# du -hs
436M .

Not everything is going to be functional, which you’ll quickly discover. Linux expects things like /proc/ and /sys/ to be around, and when they aren’t, it gets grumpy. The obvious problem is that if you extend /proc/ and /sys/ into the chrooted environment, you expose the outside OS to the container…probably not what you were going for.

It turns out to be very hard to secure a straight chroot. BSD fixed this in the kernel by using jails, which go a lot farther in isolating the instance, but for a long time, there wasn’t an equivalent in Linux, which made escaping from a chroot jail something of a hobby for some people. Fortunately for our purposes, new tools and techniques were developed for Linux that went a long way to fixing this. Unfortunately, a lot of the old tools that you and I grew up with aren’t future compatible and are going to have to go away.

Enter the concept of cgroups. Cgroups are in-kernel walls that can be erected to limit resources for a group of processes. They also allow for prioritization, accounting, and control of processes, and they paved the way for a lot of other features that containers take advantage of, like namespaces. Think of namespaces as cgroups for things like networks or process tables: each container gets its own network interface, say, but doesn’t have the ability to spy on its neighbors running on the same machine, and each container can have its own PID 1, which isn’t possible with old chroot environments.

You can see that with containers, there are a lot of benefits, and thanks to modern kernel architecture, we lose a lot of the drawbacks we used to have. This is the reason containers are so hot right now. I can spin up hundreds of Docker containers in the time it takes my fastest Vagrant image to boot.

Docker. I keep saying Docker. What’s Docker?

Well, it’s one tool for managing containers. Remember all of the stuff we went through above to get the chroot environment set up? We had to make a directory, force-install an RPM that could then tell yum what OS to install, then we had to actually have yum install the OS, and then we had to set up root’s skel so that we had aliases and all of that stuff. And after all of that work, we didn’t even have a machine that did anything. Wouldn’t it be nice if there was a way to just say the equivalent of “Hey! Make me a new machine!”? Enter Docker.

Docker is a tool to manage containers. It manages the images, the instances of them, and overall, does a lot for you. It’s pretty easy to get started, too. In the CentOS machine you spun up above, just run

# yum install docker -y
# service docker start

Docker will install, and then it’ll start up the docker daemon that runs in the background, keeping tabs on instances.

At this point, you should be able to run Docker. Test it by running the hello-world instance:

[root@dockertesting ~]# docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
a8219747be10: Pull complete
91c95931e552: Already exists
The image you are pulling has been verified. Important: image verification is a tech preview feature and should not be relied on to provide security.
Digest: sha256:aa03e5d0d5553b4c3473e89c8619cf79df368babd18681cf5daeb82aab55838d
Status: Downloaded newer image for hello-world:latest
Usage of loopback devices is strongly discouraged for production use. Either use `--storage-opt dm.thinpooldev` or use `--storage-opt dm.no_warn_on_loop_devices=true` to suppress this warning.
Hello from Docker.
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
1. The Docker client contacted the Docker daemon.
2. The Docker daemon pulled the “hello-world” image from the Docker Hub.
(Assuming it was not already locally available.)
3. The Docker daemon created a new container from that image which runs the
executable that produces the output you are currently reading.
4. The Docker daemon streamed that output to the Docker client, which sent it
to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
$ docker run -it ubuntu bash

For more examples and ideas, visit:

The message is pretty explanatory, but you can see how it understood that you asked for something that it didn’t immediately know about, searched the repository at the Docker Hub to find it, pulled it down, and ran it.

There’s a wide world of Docker out there (start with the Docker docs, for instance, then maybe read Nathan LeClaire’s blog, among others).

And Docker is just the beginning. Docker’s backend uses something called “libcontainer” (actually now, or soon to be, runc, of the Open Container format), but it migrated to that from LXC, another set of tools and APIs for manipulating the kernel to create containers. And then there’s Kubernetes, which you can get started with on about a dozen platforms pretty easily.

Just remember to shut down and destroy your Digital Ocean droplet when you’re done, and in all likelihood, you’ve spent less than a quarter to have a fast machine with lots of bandwidth to serve as a really convenient learning lab. This is why I love Digital Ocean for stuff like this!

In wrapping this up, I feel like I should add this: don’t get overwhelmed. There’s a TON of new stuff out there, and there’s new stuff coming out constantly. Don’t feel like you have to keep up with all of it, because that’s not possible to do while maintaining the rest of your life’s balance. Pick something, and learn how it works. Don’t worry about the rest of it – once you’ve learned something, you can use that as a springboard to understanding how the next thing works. They all sort of follow the same pattern, and they’re all working toward the same goal of rapidly available, short-lived instances that provide services. They use different formats, backends, and so on, but it’s not important to master all of them, and feeling overwhelmed is a normal response to a situation like this.

Just take it one step at a time and try to have fun with it. Learning should be enjoyable, so don’t let it get you down. Comment below and let me know what you think of Docker (or Kubernetes, or whatever your favorite new tech is).

Are you monitoring your switchports the right way?

Graphite might be the best thing I’ve rolled out here in my position at CCIS.

One of our graduate students has been working on a really interesting paper for a while. I can’t go into details, because he’s going to publish before too long, but he has been making good use of my network diagrams. Since he has a lot riding on the accuracy of the data, he’s been asking me very specific questions about how the data was obtained, and how the graphs are produced, and so on.

One of the questions he asked me had to do with a bandwidth graph, much like this one:

His question revolved around the actual amount of traffic each datapoint represented. I explained briefly that we were looking at Megabytes per second, and he asked for clarification – specifically, whether each point was the sum total of data sent per second between updates, or whether it was the average bandwidth used over the interval.

We did some calculations, and decided that if it were, in fact, the total number of bytes received since the previous data point, it would mean my network had basically no traffic, and I know that not to be the case. But still, these things need to be verified, so I dug in and retraced the entire path that the metrics take.

These metrics are coming from a pair of Cisco Nexus switches via SNMP. The data being pulled is the per-interface ifInOctets and ifOutOctets. As you can see from the linked pages, each of those is a 32 bit counter holding “The total number of octets transmitted [in|out] of the interface, including framing characters”.

Practically speaking, what this gives you is an ever-increasing number. The idea behind this counter is that you query it, and receive a number of bytes (say, 100). This indicates that at the time you queried it, the interface has sent (in the case of ifOutOctets) 100 bytes. If you query it again ten seconds later, and you get 150, then you know that in the intervening ten seconds, the interface has sent 50 bytes, and since you queried it ten seconds apart, you determine that the interface has transmitted 5 bytes per second.

Having the counter work like this means that, in theory, you don’t have to worry about how frequently you query it. You could query it tomorrow, and if it went from 100 to 100000000, you’d be able to figure out how many seconds it had been since you last asked, divide the byte difference by that, and get the average bytes per second that way. Granted, the resolution on those stats isn’t stellar at that frequency, but it would still be a number.
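To make that concrete, here’s a minimal Python sketch of the arithmetic. The counter_rate helper is my own illustration, not part of any SNMP toolkit; the modulo handles the case where the counter wrapped once between polls.

```python
# Average bytes/sec between two reads of an SNMP octet counter.
# counter_rate() is a hypothetical helper, not from any real library;
# the modulo correctly handles a single wrap of the counter between polls.
def counter_rate(old, new, interval_seconds, bits=32):
    delta = (new - old) % (2 ** bits)
    return delta / interval_seconds

# The example from the text: 100 octets, then 150 octets ten seconds later.
print(counter_rate(100, 150, 10))           # 5.0 bytes per second

# A poll interval that straddles a counter wrap still comes out right.
print(counter_rate(2 ** 32 - 50, 150, 10))  # 20.0 bytes per second
```

Note that the modulo trick only saves you if the counter wrapped at most once between polls, which is exactly why polling frequency matters.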

Incidentally, you might wonder, “wait, didn’t you say it was 32 bits? That’s not huge. How big can it get?”. The answer is found in RFC 1155: Counter

This application-wide type represents a non-negative integer which monotonically increases until it reaches a maximum value, when it wraps around and starts increasing again from zero. This memo specifies a maximum value of 2^32-1 (4294967295 decimal) for counters.

In other words, 4.29 gigabytes (or just over 34 gigabits). It turns out that this is actually kind of an important facet of the whole “monitoring bandwidth” thing, because in our modern networks, switch interfaces are routinely 1Gb/s, often 10Gb/s, and sometimes even more. If a standard network interface can transfer one gigabit per second, then a fully utilized interface can roll over the entire counter in about 35 seconds. If we’re only querying that interface once a minute, then we’re potentially losing a lot of data. Consider, then, a 10Gb/s interface. Are you pulling metrics more often than once every 4 seconds? If not, you may be losing data.

Fortunately, there’s an easy fix. Instead of ifInOctets and ifOutOctets, query ifHCInOctets and ifHCOutOctets. They are 64 bit counters, and only roll over after more than 18 exabytes. Even with a 100% utilized 100Gb/s interface, you’ll still only roll over the counter every 46 years or so.
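Those rollover figures are easy to sanity-check with a few lines of Python (rough numbers, assuming a fully saturated link and octet counters; seconds_to_wrap is just an illustration):

```python
# How long a fully utilized link takes to wrap a counter:
# counter capacity in bytes divided by link speed in bytes/sec.
def seconds_to_wrap(counter_bits, link_bits_per_sec):
    return (2 ** counter_bits) / (link_bits_per_sec / 8)

print(round(seconds_to_wrap(32, 1e9), 1))    # 34.4 -- about 35s at 1Gb/s
print(round(seconds_to_wrap(32, 10e9), 1))   # 3.4  -- under 4s at 10Gb/s

# 64-bit counters push the wrap time out to decades, even at 100Gb/s.
years = seconds_to_wrap(64, 100e9) / (365 * 24 * 3600)
print(round(years, 1))                       # 46.8 years
```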

I made this change to my collectd configuration as soon as I figured out what I was doing wrong, and fortunately, none of my metrics jumped, so I’m going to say I got lucky. Don’t be me – start out doing it the right way, and save yourself confusion and embarrassment later.  Use 64-bit counters from the start!

(Also, there are equivalent HC versions of all of the other interface counters you’re interested in, like the unicast, multicast, and broadcast packet stats – make sure to use the 64-bit versions of all of them.)

Thanks, I hope I managed to help someone!

New Blog Theme is Up

The other day, I got an email from a loyal reader who let me know that, when you find Standalone SysAdmin on Google, you get a little warning that says, “This site may be hacked”. Well, that’s not good. It was clearly related to some adware that got installed a while back by accident (where “accident” means that WordPress allowed someone to upload a .htaccess file into my blog’s uploads directory and then use it as a springboard for advertising casino sites). Somehow, the vulnerability was severe enough that the wp-footer.php file was modified as well. Ugh.

After cleaning up the uploads directory and removing the extraneous plugins that I wasn’t using, or didn’t like, or whatever, I went to clean up the footer, but I thought, “I wonder why my security scans didn’t pick this up?”. The blog runs Wordfence thanks to recommendations from several other bloggers, and it’s really good software. It was able to help me with cleaning up most of the modified files because it can compare the originals with what’s installed, and revert to the reference copy. But it didn’t do that with my theme.

As it turns out, the theme I was running was an antique one named Cleaker. I really liked how it looked at the time I installed it, which was back in 2009 when I migrated from Blogger. Well, you can’t even download Cleaker anymore, at least from the author. So why not use this as a chance to put something new in place?

Funny, that “Welcome to the Future” post mentions RSS, but practically no one reads blogs through RSS anymore. Oh, there are still some, but when Google killed Reader, RSS readers dropped almost off the radar, so you’re almost certainly reading this on the actual site, and that means you’re almost certainly seeing the new site theme. It’s a fairly standard one called Twenty Fourteen, which I’m sure will feel hilariously old when I get around to changing it in 2020. The banner image at the top rotates occasionally between several images that I’ve taken myself or pulled from one Free-Without-Attribution source or another. There are seriously tons of free image sources out there, if you’re looking.

Anyway, I just wanted to let you know that the look of the site has changed, so when I inevitably get around to writing more content, this is what you’ll be seeing. Unless, of course, everyone comments below to let me know that it sucks, which, if you think so, please do tell me. There are comments below for that kind of thing. If you like it, don’t like it, or are so ambivalent about it that you want to comment and let me know that too, then go for it. Thanks!

A blog for IT Admins who do everything by an IT Admin who does everything