Kerbal Space System Administration

Date: July 28, 2014

I came to an interesting cross-pollination of ideas yesterday while talking to my wife about what I'd been doing lately, and I thought you might find it interesting too.

I've been spending some time lately playing video games. In particular, I'm especially fond of Kerbal Space Program, a space simulation game where you play the role of spaceflight director of Kerbin, a planet populated by small, green, mostly dumb (but courageous) people known as Kerbals.

Initially the game was a pure sandbox, as in, "You're in a planetary system. Here are some parts. Go knock yourself out", but recent additions to the game include a career mode in which you explore the star system and collect "science" points for doing sensor scans, taking surface samples, and so on. It adds a nice "reason" to go do things, and I've been working on building out more efficient ways to collect science and get it back to Kerbin.

Part of the problem is that when you use your sensors, whether they measure gravity, temperature, or materials science, you often lose a large percentage of the data if you transmit it back rather than delivering it in ships - and delivering things in ships is expensive.

There is an advanced science lab called the MPL-LG-2 which allows greater fidelity in transmitted data, so my recent work in the game has been to build science ships which consist of a "mothership" with a lab, and a smaller lightweight lander craft which can go around whatever body I'm orbiting and collect data to bring to the mothership. It's working pretty well.

At the same time, I'm working on building out a collectd infrastructure that can talk to my graphite installation. It's not as easy as I'd like because we're standardized on Ubuntu Precise, which only has collectd 4.x, and the write_graphite plugin began with collectd 5.1.

To give you background, collectd is a daemon that runs and collects information, usually from the local machine, but there's an array of plugins to collect data from any number of local or remote sources. You configure collectd to collect data, and you use a write_* plugin to get that data to somewhere that can do something with it.
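To make that concrete, here's a minimal collectd.conf sketch wiring a couple of common collection plugins to a write_* plugin (the hostname is a placeholder, and you'd pick plugins for whatever you actually want to measure):

```
# Gather CPU and memory stats from the local machine...
LoadPlugin cpu
LoadPlugin memory
# ...and ship them to a Graphite/carbon listener.
LoadPlugin write_graphite

<Plugin write_graphite>
  <Node "example">
    Host "graphite.example.com"
    Port "2003"
  </Node>
</Plugin>
```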

It was in the middle of explaining these two things - KSP's science missions and collectd - that I saw the amusing parity between them. In essence, I'm deploying science ships around my infrastructure to make it easier to get science back to my central repository so that I advance my own technology. I really like how analogous they are.

I talked about doing the collectd work on Twitter, and Martijn Heemels expressed interest in what I was doing, since he would also like write_graphite on Precise, so I figured that other people might want to get in on the action, so to speak. I could just give you the package I made, or I could show you how I made it. The latter sounds more fun.

Like all good things, this project involves software from Jordan Sissel - namely fpm, effing package management. Ever had to make packages and deal with spec files, control files, or esoteric rulesets that made you go into therapy? Not anymore!

So first we need to install it, which is easy, because it's a gem:


$ sudo gem install fpm

Now, let's make a place to stage files before they get packaged:

$ mkdir ~/collectd-package

And grab the source tarball and untar it:

$ wget https://collectd.org/files/collectd-5.4.1.tar.gz
$ tar zxvf collectd-5.4.1.tar.gz
$ cd collectd-5.4.1/

(If you're reading this later, make sure to go to collectd.org and get the current release, not the version I have listed here.)

Configure the Makefile, just like you did when you were a kid:

$ ./configure --enable-debug --enable-static --with-perl-bindings="PREFIX=/usr"

Hat tip to Mike Julian, who let me know that you can't enable debugging in the collectd tool at all unless you use the --enable-debug flag here at configure time, so save yourself some heartbreak by turning it on now. Also, since I'm going to be shipping this around, I want to make sure that it's compiled statically, and for whatever reason, I found that the perl bindings were sad unless I added that flag.

Now we compile:

$ make

Now we "install":

$ make DESTDIR="/home/YOURUSERNAME/collectd-package" install

I've found that the install script is very grumpy about relative directory names, so I appeased it by giving it the full path to where the files would be dropped (the directory we created earlier).

We're going to be using a slightly customized init script. I took this from the version that comes with the precise 4.x collectd installation and added a prefix variable that can be changed. We didn't change the installation directories above, so by default, everything is going to eventually wind up in /opt/collectd/ and the init script needs to know about that:

$ cd ~
$ mkdir -p collectd-package/etc/init.d/
$ wget --no-check-certificate -O collectd-package/etc/init.d/collectd http://bit.ly/1mUaB7G
$ chmod +x collectd-package/etc/init.d/collectd

This is pulling in the file from this gist.
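The only real change from the stock Precise init script is a prefix variable near the top, so all the paths resolve under /opt/collectd. The idea is roughly this (a sketch of the variable, not the gist verbatim):

```shell
# Everything below derives from PREFIX, so relocating the
# install means changing exactly one line of the init script.
PREFIX="/opt/collectd"
DAEMON="$PREFIX/sbin/collectd"
CONFIGFILE="$PREFIX/etc/collectd.conf"
```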

Now, we're finally ready to create the package:

$ fakeroot fpm -t deb -C collectd-package/ --name collectd \
  --version 5.4.1 --iteration 1 --depends libltdl7 -s dir opt/ usr/ etc/

Since you may not be familiar with fpm, some of the options are obvious, but for the ones that aren't: -C changes directory to the given argument before looking for files, --version is the version of the software, and --iteration is the version of the package itself. If you package this, deploy it, and then find a bug in the packaging, you fix the problem, increment the iteration, and repackage, and your package management can treat the new build as an upgrade. --depends declares a library that collectd needs on the end systems, -s sets the source type to "directory", and then we give it a list of directories to include (remembering that we've changed directories with the -C flag).
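To make the iteration idea concrete, here's a sketch of what the rebuild looks like after fixing a packaging bug - the same fpm invocation as above, with only the iteration bumped:

```shell
# Same software version, second cut of the package:
fakeroot fpm -t deb -C collectd-package/ --name collectd \
  --version 5.4.1 --iteration 2 --depends libltdl7 -s dir opt/ usr/ etc/

# Debian version ordering treats 5.4.1-2 as newer than 5.4.1-1,
# so dpkg/apt will treat the rebuilt package as an upgrade:
dpkg --compare-versions 5.4.1-1 lt 5.4.1-2 && echo "upgrade"
```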

Also, this was my first foray into the world of fakeroot, which you should probably read about if you run Debian-based systems.

At this point, in the current directory, there should be "collectd_5.4.1-1.deb", a package file that works for installing using 'dpkg -i' or in a PPA or in a repo, if you have one of those.
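Before shipping it anywhere, it doesn't hurt to eyeball the result (depending on your fpm version, the filename may carry an architecture suffix like _amd64):

```shell
# Control metadata: name, version, and the libltdl7 dependency we declared.
dpkg-deb --info collectd_5.4.1-1.deb
# File listing: should show everything under opt/, usr/, and etc/.
dpkg-deb --contents collectd_5.4.1-1.deb
```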

Once collectd is installed, you'll probably want to configure it to talk to your graphite host. Just edit the config in /opt/collectd/etc/collectd.conf. Make sure to uncomment the write_graphite plugin line, and change the write_graphite section. Here's mine:


<Plugin write_graphite>
  <Node "example">
    Host "YOURGRAPHITESERVER"
    Port "2003"
    Protocol "tcp"
    LogSendErrors true
    # remember the trailing period in prefix
    #    otherwise you get CCIS.systems.linuxTHISHOSTNAME
    #    You'll probably want to change it anyway, because
    #    this one is mine. ;-)
    Prefix "CCIS.systems.linux."
    StoreRates true
    AlwaysAppendDS false
    EscapeCharacter "_"
  </Node>
</Plugin>
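Once collectd restarts with that config, metrics should start flowing within a collection interval or two. If nothing shows up, a quick way to test the carbon side independently of collectd is to hand-feed it a single metric over the plaintext protocol (the hostname is your placeholder from above, and nc flags vary between netcat variants):

```shell
# Carbon's plaintext protocol is one "metric value timestamp" line per metric.
ts=$(date +%s)
line="CCIS.systems.linux.test.heartbeat 1 $ts"
echo "$line" | nc -q1 YOURGRAPHITESERVER 2003
```

If that metric appears in graphite but collectd's don't, the problem is on the collectd side rather than the network or carbon.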

Anyway, hopefully this helped you in some way. Building a puppet module is left as an exercise to the reader. I think I could do a simplistic one in about 5 minutes, but as soon as you want to intelligently decide which modules to enable and configure, then it gets significantly harder. Hey, knock yourself out! (and let me know if you come up with anything cool!)

  • Scott Frazer

    So you're saying that you're making a KSP mod to emit collectd information about your flights, right? :-)

  • Matt Simmons

    Oh man, that sounds awesome! Hrm.....

  • http://camcope.me Cam Cope

    I've recently learned how to make my own backport packages using PPAs. I much prefer that because the packages are built exactly the same way they are in the main ubuntu repositories, and I can easily re-backport the package for updates. I backported the 14.10 package of collectd to 12.04 with this one command:

    # backportpackage -u ppa:ccope/collectd collectd -d precise -k AA1ED734

    To do the same, follow this guide up through "Uploading packages to your PPA", and then check the backportpackage manpage to figure out how to do things like pick a version to backport.

  • Matt Simmons

    That's really cool. Thanks! We briefly looked into setting up a PPA, but the all-public aspect of it kind of threw us. We ended up building a repo instead.

  • http://camcope.me Cam Cope

    Ah, yeah I'd just do this for publicly available packages, and still keep a private mirror for proprietary packages. If you want to actually build it locally and then upload it to your own repo, you can use the following commands (and this will work regardless what version of ubuntu you're running on the build machine):

    ~ $ pbuilder-dist precise create ## This creates a tarball of the ubuntu release for creating build chroots; you only need to run it once per ubuntu version.
    ~ $ mkdir -p ~/packaging/collectd && cd $_
    ~/packaging/collectd $ backportpackage -s utopic -d precise -w . -b -B pbuilder-dist --dont-sign collectd ## -s utopic is optional if you want to backport the latest release. --dont-sign is optional if you want to sign with your pgp key.

    All of the build sources are saved in ~/packaging/collectd, and all of the deb files will be dropped inside that in the buildresult folder.

    With this method, you can backport between arbitrary releases, and it preserves all the ubuntu-specific patches (such as init scripts).

  • ramindk

    Why does everyone do this the hard way?

    Pull the src packages for collectd 5.4.0 from 14.04 Ubuntu. Rebuild on your 12.04 build server after fakeroot yells at you enough about missing deps. Copy new deb to your repo.

    It's a little more complicated if you can't simply rebuild, but the process is nearly the same. Pull src you want to build a package from. Dig around in debian/control, etc to remove specific version patches or whatever. Build on 12.04 build server. Copy new deb to your repo.

  • Johan Wastring

    Fantastic cross-pollination. So great that you tell about it. I get these quite often too but on another level. Thanks for sharing, much appreciated!
