Ad Astra Per Aspera – Leaving Boston

I really like working at Northeastern University, which is why I’m sad that I’m going to be leaving. On the other hand, life occasionally presents you with an opportunity that you can’t ignore. This is one of those occasions.

A few months ago, I was sitting in a small room full of sysadmins planning LISA’15 when I mentioned, almost out of nowhere, that there was one company in the world that I would kill to work at. As luck would have it, my friend sitting next to me said, “Really? Because I know a guy. Want me to email him for you?” and I said, “Um, yes, please.” Thus began a story that included numerous phone screenings, flying out to Los Angeles, and an all-day array of in-person interviews, the net result being that I am leaving Boston, moving to LA, and going to work…for Space Exploration Technologies Corporation, otherwise known as SpaceX. Yes, THAT SpaceX.



At SpaceX, I’m going to be a Linux System Administrator, and from the sounds of it, I’ll be splitting my time between “normal” infrastructure stuff and helping to define a DevOps role with the Flight Software team, who write the software that sends the rocket and Dragon capsule up to the Space Station. It’s…pretty difficult to overstate how excited I am.


I imagine that it will take a while to figure out what I’m allowed to write about, but the whole team was very enthusiastic about my visibility in the SysAdmin space, and they seemed to enjoy my blog and the fact that I took part in the community, so I don’t think anything there will change. I’m just really happy to get the chance to do this, for a company with a mission like SpaceX. It’s an incredible opportunity, and I feel very fortunate.

So here we go, on a brand new adventure. I’m sad to be leaving my friends in Boston, but I’ll be back soon – I mean heck, LISA’16 is in Boston, so it’ll be like a homecoming, right? Until then, the sky is the limit! Keep reading!



Stop Hating Your Work

I love meeting people at SysAdmin events. Having a blog that people read does mean that people have mostly heard all of my best stories, but it’s still fun getting to know new people and hearing what they’ve been working on. The single thing I hear most often is a question, and the question is, “Don’t you sleep?”

Time and time again, people will read my blog, see me making things, or doing things, or organizing, or whatever, and internally, they compare that to what they do, and they feel like they aren’t doing enough, or as much as I am.

Can I let you in on a secret? I feel like I do crap work most of the time. And I compare myself to others, and to their work, and I feel like what I do is often bad, sub-par, and not worthy.

Do you ever see something that just speaks to your soul? I saw a Tweet, of all things, that did that to me last year. Here it is:

The image from that post features the very first Iron Man suit from Tales of Suspense #39 in 1963, which Tony Stark built in a cave, with a box of scraps. It worked…to a point, but it wasn’t long before it got upgraded and replaced. If you’ve seen the first Iron Man movie starring Robert Downey Jr., then this will all sound pretty familiar, because that origin story was recreated on film.

It feels sort of childish to admit in an open forum like this, but the story of Tony Stark creating Iron Man is actually really inspirational to me. I like making things. I like building, and doing, and I really, really hate just about everything I create – especially the early stuff. Tony embodies the concepts of continuous development and iterative improvement that are so vital to making things in 2015. So I try to learn from it, and in my spare time, I try to figure out how repulsor beams work on pure electrical charge.

Earlier this year, I decided that I was going to go to Boston Comic Con for the second year in a row. When I checked out the website, I couldn’t believe my eyes – along with the normal array of comics celebs, Boston was going to be playing host to none other than STAN LEE!

If you don’t know the name Stan Lee, you probably know the characters he’s created – Spider-Man, The X-Men, The Incredible Hulk, Daredevil, Thor, and yes, Iron Man. When I saw that Stan Lee was going to be signing autographs, I knew I had to get one; the only question was…what would I get signed?

I could always go get a relatively rare Iron Man comic and have him sign that. But none of the individual comics meant as much to me as the character itself. What would be perfect is if I could get the image from Alexis’s tweet above signed, but it’s a PNG, and the quality didn’t really lend itself to blowing up. After thinking for a few minutes, I realized I didn’t have to use that picture – I could just recreate it. So I did!

It took me a few hours to get it to the point where I thought it would be acceptable, and fittingly, it isn’t perfect, but here’s the final version that I made:

Click the image above to get the full-sized image. If you want to print your own (don’t sell this – Iron Man is the property of Marvel), you can download the EPS in glorious 41MB fashion from this link.

So yesterday, I visited Comic Con, stood in line for hours, and got to (very briefly) meet Stan Lee, who laughed as he signed his name to my new poster:


I actually printed out two versions – one to keep at work, and this signed one, which I’ll keep at home. Both of them will remind me that, even though I’m probably not happy with the state of whatever I’m working on at the moment, I shouldn’t listen to the negative voices in my head telling me to quit because it isn’t good enough. Thanks Stan!

So…containers. Why? How? What? Start here if you haven’t.

I tweeted a link today about running Ceph inside of Docker, something that I’d like to give a shot (mostly because I want to learn Docker better than I currently do, and I’ve never played with Ceph, which has a lot of interesting stuff going on):

I got to thinking about it, and realized that I haven’t written much about Docker, or even containers in general.

Containers are definitely the new hotness. Kubernetes just released 1.0 today, Docker has taken the world by storm, and here I am, still impressed by my Vagrant-fu and thinking that digital watches are a pretty neat idea. What happened? Containers? But, what about virtualization? Aren’t containers virtualization? Sort of? I mean, what is a container, anyway?

Let’s start there before we get in too deep, alright?

UNIX (and by extension, Linux) has, for a long time, had a pretty cool command called ‘chroot’. The chroot command allows you to point at an arbitrary directory and say “I want that directory to be the root (/) now”. This is useful if you have a particular process or user that you want to cordon off from the rest of the system, for example.

A chroot environment like this has some really big advantages over virtual machines. First, it’s not very resource intensive at all. You aren’t emulating any hardware, and you aren’t spinning up an entire new kernel; you’re just moving execution over to another environment (executing another shell), plus any services that the new environment needs to have running. It’s also very fast, for the same reason. A VM may take minutes to spin up completely, but a lightweight chroot can be built in seconds.

It’s actually pretty easy to build a workable chroot environment. You just need to install all of the things that need to exist for a system to work properly. There are good instructions out there, but they’re a bit outdated (from 2010), so here’s a quick set of updated instructions.

Just go to Digital Ocean and spin up a quick CentOS 7 box (512MB is fine) and follow along:

# rpm --rebuilddb --root=/var/tmp/chroot/
# cd /var/tmp/chroot/
# wget
# rpm -i --root=/var/tmp/chroot --nodeps ./centos-release-7-1.1503.el7.centos.2.8.x86_64.rpm
# yum --installroot=/var/tmp/chroot install -y rpm-build yum
# cp /var/tmp/chroot/etc/skel/.??* /var/tmp/chroot/root/

At this point, there’s a pretty functional CentOS install in /var/tmp/chroot/ which you can see if you do an ‘ls’ on it.

Let’s check just to make sure we are where we think we are:

# pwd
/var/tmp/chroot

Now do the magic:

# chroot /var/tmp/chroot /bin/bash -l

and voila:

# pwd
/

How much storage is this costing us? Not too much, all things considered:

# du -hs
436M	.

Not everything is going to be functional, which you’ll quickly discover. Linux expects things like /proc/ and /sys/ to be around, and when they aren’t, it gets grumpy. The obvious problem is that if you mount /proc/ and /sys/ into the chrooted environment, you expose the outside OS to the container…probably not what you were going for.

It turns out to be very hard to secure a straight chroot. BSD fixed this in the kernel by using jails, which go a lot farther in isolating the instance, but for a long time, there wasn’t an equivalent in Linux, which made escaping from a chroot jail something of a hobby for some people. Fortunately for our purposes, new tools and techniques were developed for Linux that went a long way to fixing this. Unfortunately, a lot of the old tools that you and I grew up with aren’t future compatible and are going to have to go away.

Enter the concept of cgroups. Cgroups are in-kernel walls that can be erected to limit resources for a group of processes. They also allow for prioritization, accounting, and control of processes, and they paved the way for a lot of other features that containers take advantage of, like namespaces. Where cgroups limit what a group of processes can use, namespaces limit what it can see – so each container gets its own network interfaces, say, but doesn’t have the ability to spy on its neighbors running on the same machine, and each container can have its own PID 1, which isn’t possible with old chroot environments.
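You can actually see namespaces on any modern Linux box, no container tooling required. Every process’s namespace memberships show up as symlinks under /proc/<pid>/ns/, and processes “in the same container” share the same inode numbers there. A quick, unprivileged sketch (the inode numbers will differ on your machine):

```shell
# Each symlink here identifies one namespace this shell belongs to
ls /proc/self/ns/

# The link target encodes the namespace type and its inode,
# e.g. pid:[4026531836]; processes sharing a namespace share the inode
readlink /proc/self/ns/pid
readlink /proc/self/ns/net
```

This is exactly how tools figure out whether two processes share a container: compare the inodes in their /proc/<pid>/ns/ links.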

You can see that with containers, there are a lot of benefits, and thanks to modern kernel architecture, we lose a lot of the drawbacks we used to have. This is the reason that containers are so hot right now. I can spin up hundreds of docker containers in the time that it takes my fastest Vagrant image to boot.

Docker. I keep saying Docker. What’s Docker?

Well, it’s one tool for managing containers. Remember all of the stuff we went through above to get the chroot environment set up? We had to make a directory, force-install an RPM that could then tell yum what OS to install, then we had to actually have yum install the OS, and then we had to set up root’s skel so that we had aliases and all of that stuff. And after all of that work, we didn’t even have a machine that did anything. Wouldn’t it be nice if there was a way to just say the equivalent of “Hey! Make me a new machine!”? Enter docker.

Docker is a tool to manage containers. It manages the images, the instances of them, and overall, does a lot for you. It’s pretty easy to get started, too. In the CentOS machine you spun up above, just run

# yum install docker -y
# service docker start

Docker will install, and then it’ll start up the docker daemon that runs in the background, keeping tabs on instances.

At this point, you should be able to run Docker. Test it by running the hello-world instance:

# docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from
a8219747be10: Pull complete
91c95931e552: Already exists
The image you are pulling has been verified. Important: image verification is a tech preview feature and should not be relied on to provide security.
Digest: sha256:aa03e5d0d5553b4c3473e89c8619cf79df368babd18681cf5daeb82aab55838d
Status: Downloaded newer image for
Usage of loopback devices is strongly discouraged for production use. Either use `--storage-opt dm.thinpooldev` or use `--storage-opt dm.no_warn_on_loop_devices=true` to suppress this warning.
Hello from Docker.
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
1. The Docker client contacted the Docker daemon.
2. The Docker daemon pulled the “hello-world” image from the Docker Hub.
(Assuming it was not already locally available.)
3. The Docker daemon created a new container from that image which runs the
executable that produces the output you are currently reading.
4. The Docker daemon streamed that output to the Docker client, which sent it
to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
$ docker run -it ubuntu bash

For more examples and ideas, visit:

The message is pretty explanatory, but you can see how it understood that you asked for something that it didn’t immediately know about, searched the repository at the Docker Hub to find it, pulled it down, and ran it.
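The same machinery works in the other direction, too: you can describe your own image in a Dockerfile and build it locally. This is just an illustrative sketch (the base image tag and package choice are my example, not from any particular project), but it shows how little it takes compared to the chroot dance we did above:

```dockerfile
# Start from the official CentOS 7 base image on the Docker Hub
FROM centos:7

# Bake a web server into the image
RUN yum install -y httpd && yum clean all

# Run it in the foreground so the container stays alive
CMD ["/usr/sbin/httpd", "-DFOREGROUND"]
```

Save that as ‘Dockerfile’, run ‘docker build -t my-httpd .’ followed by ‘docker run -d -p 8080:80 my-httpd’, and you’ve got a disposable web server in seconds.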

There’s a wide world of Docker out there (start with the Docker docs, for instance, then maybe read Nathan LeClaire’s blog, among others).

And Docker is just the beginning. Docker’s backend uses something called “libcontainer” (which is now becoming runc, part of the Open Container format), but it migrated to that from LXC, another set of tools and an API for manipulating the kernel to create containers. And then there’s Kubernetes, which you can get started with on about a dozen platforms pretty easily.

Just remember to shut down and destroy your Digital Ocean droplet when you’re done, and in all likelihood, you’ve spent less than a quarter to have a fast machine with lots of bandwidth to serve as a really convenient learning lab. This is why I love Digital Ocean for stuff like this!

In wrapping this up, I feel like I should add this: Don’t get overwhelmed. There’s a TON of new stuff out there, and there’s new stuff coming up constantly. Don’t feel like you have to keep up with all of it, because that’s not possible to do while maintaining the rest of your life’s balance. Pick something, and learn how it works. Don’t worry about the rest of it – once you’ve learned something, you can use that as a springboard to understanding how the next thing works. They all sort of follow the same pattern, and they’re all working toward the same goal: rapidly available, short-lived instances that provide services. They use different formats, backends, and so on, but it’s not important to master all of them, and feeling overwhelmed is a normal response to a situation like this.

Just take it one step at a time and try to have fun with it. Learning should be enjoyable, so don’t let it get you down. Comment below and let me know what you think of Docker (or Kubernetes, or whatever your favorite new tech is).

A blog for IT Admins who do everything by an IT Admin who does everything