#SysChat discussion tonight on Twitter, Webcast on Friday

Date November 18, 2014

Well, this is new and interesting.

SolarWinds approached me a little while ago and asked me if I'd be interested in taking part in a combination Twitter discussion and webcast, talking about system administration, monitoring, and so on. I said yes, but what I really meant was, "You've seen my twitter stream...I was probably going to be talking about that stuff anyway, so sure!"

The details are that the Twitter discussion will be using the #syschat hashtag. The idea is for this to be an interactive thing, with other twittererrers...twittites...twitterati...whatever you call them...taking part. So if you see me tweeting with that hashtag, don't be surprised, and join in!

Watching a hashtag on Twitter is tough with only twitter.com, and I find even HootSuite or TweetDeck slow for these kinds of things, even though I use HootSuite all the time. Whenever I'm watching a realtime thing like this (or launch events on the #nasa tag), I use TwitterFall, which has a realtime display that kind of rains tweets down the screen, and since you're signed in, you can reply and retweet and interact the same way that you would anywhere else. Once you're signed in, just add "#syschat" to the searches on the left side, and you'll immediately see all of the past tweets falling.

Tonight's event is at 8:30pm Eastern time, and it'll probably last an hour or an hour and a half. And honestly, it's not like I ever get OFF of the internet, so I'll probably be watching for messages for the next day or so.

The event will be co-hosted by SolarWinds Head Geek Lawrence Garvin.

The webcast will be happening on Friday, 12pm-1pm. It's going to be a presentation on the topics of "Evolution of SysAdmin" and "Holistic Monitoring for Servers and Applications". You need to sign up to watch live, but it's free.

I want to be very up front about this: SolarWinds IS paying me to do these things, but at no point have they ever suggested that I mention SolarWinds products, or asked me to say nice things about them or about the company in general. The talk isn't even going to be about SolarWinds - it's on the topic of system administration, and questions about the kinds of things we deal with. This isn't me advocating for SolarWinds, because I can't. I've never had an environment where it made sense to use their products.

But I do thank them for asking me to take part in this thing. I have to imagine that being paid to tweet about things I would have talked about anyway is a little like how comedians feel when they get paid to host a stand-up special. I just want to say something like, "and you thought all that time tweeting was a waste. Ha!"

Anyway, see you today at 8:30pm ET, and Friday at noon. Don't forget to sign up!

New Technologies to Study from #LISA14

Date November 14, 2014

Well, I'm just now ending my first LISA as part of the program committee. This has been a long, long process (over a year, actually!) and I'm exhausted, but really happy that we had an awesome team and made a great conference come together. We had the highest attendance at a LISA conference since 2007 (over 1,100 people attended), even though AWS re:Invent was scheduled for the same week.

I wanted to take a few moments to reflect on some of the most talked about technologies, to give me some things to work on in the next year.

I need to improve my metrics.
Last year, I dedicated myself to spending the next 12 months increasing my visibility into the infrastructure, getting Graphite up and running, and watching what happened throughout my organization. Well, I succeeded, to some extent, but this conference was a reminder that I'm nowhere near done.

Starting with the Statistics for Ops and R for SysAdmins classes, and then later in the week when Theo Schlossnagle presented The Math Behind Bad Behaviors, I began to understand that monitoring isn't just pretty graphs and packet counts. In order to actually take full advantage of metrics, you have to have the right metrics. Right now I'm collecting what amounts to derivatives rather than the underlying measurements of the movements of my disks, network ports, and so on. I'm going to stop settling for the 10,000-foot-view stats and start collecting the ground-level, base information.
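
As a concrete example of what I mean by base information: on Linux, the raw per-device counters are sitting right there in /proc/diskstats, and it's not much work to sample them yourself instead of only looking at someone else's pre-digested utilization numbers. Here's a minimal sketch (assuming a Linux box, using the field layout documented in the kernel's Documentation/iostats.txt):

```python
#!/usr/bin/env python
# Minimal sketch: sample the raw disk counters from /proc/diskstats twice,
# ten seconds apart, and print the per-interval deltas. The idea is to keep
# the base measurements around, rather than only a pre-computed rate.
import time

def read_diskstats():
    stats = {}
    with open('/proc/diskstats') as f:
        for line in f:
            fields = line.split()
            dev = fields[2]
            # Per Documentation/iostats.txt, the counters start at field 4.
            reads = int(fields[3])            # reads completed
            sectors_read = int(fields[5])     # sectors read
            writes = int(fields[7])           # writes completed
            sectors_written = int(fields[9])  # sectors written
            stats[dev] = (reads, sectors_read, writes, sectors_written)
    return stats

before = read_diskstats()
time.sleep(10)
after = read_diskstats()

for dev in sorted(after):
    if dev in before:
        delta = [a - b for a, b in zip(after[dev], before[dev])]
        print("%s: reads=%d sectors_read=%d writes=%d sectors_written=%d"
              % tuple([dev] + delta))
```

Shipping those raw counters into the metrics system (and letting it compute the rates) is the direction I want to head.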

Hadoop. Build one.
I have been hearing the word Hadoop for the past few years, but my sentiment on the subject was, "I don't need MapReduce. I don't even know what I would use it for. Why would I install Hadoop?"

Well, it turns out that I'm kind of dumb. Yes, MapReduce is a part of Hadoop, but it's hardly the only part. In fact, there are a lot of solutions both included in Hadoop and tightly coupled to it, such as a distributed filesystem (HDFS), a distributed database (HBase), a data warehouse (Hive), and really, a ton of other things.

One of the things built on top of Hadoop (by way of HBase) is OpenTSDB, which is a time series database, similar to RRDtool or Whisper (from Graphite). It can write with millisecond precision, so it's useful for monitoring low-latency IO (or anything else low-latency, for that matter). Anyway, I want to start using it instead of Graphite, partially because of that finer resolution, and partially because it doesn't do lossy roll-ups like RRD or Whisper do, where long-term metrics get stored at a lower resolution than they were captured. Those roll-ups make it practically impossible to do a lot of the correlations that would be interesting, as I learn more about how to do that.
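
To give you a flavor of what feeding OpenTSDB looks like, its plain-text interface just takes `put` lines over a TCP socket ("put <metric> <timestamp> <value> <tag=value> ..."), and the timestamp can be in milliseconds. Here's a minimal sketch in Python - the hostname and the metric/tag names are made up for illustration:

```python
#!/usr/bin/env python
# Minimal sketch: push one data point to OpenTSDB over its plain-text
# "telnet-style" interface. The hostname, metric, and tags are hypothetical.
import socket
import time

TSD_HOST = 'opentsdb.example.com'  # placeholder for your OpenTSDB host
TSD_PORT = 4242                    # OpenTSDB's default port

def put_metric(metric, value, tags):
    ts_ms = int(time.time() * 1000)  # millisecond-precision timestamp
    tag_str = ' '.join('%s=%s' % (k, v) for k, v in sorted(tags.items()))
    line = 'put %s %d %s %s\n' % (metric, ts_ms, value, tag_str)
    sock = socket.create_connection((TSD_HOST, TSD_PORT))
    try:
        sock.sendall(line.encode('ascii'))
    finally:
        sock.close()

put_metric('sys.disk.read.latency_ms', 0.42, {'host': 'web01', 'disk': 'sda'})
```

(In practice you'd use something like tcollector to do this for you, but it's nice to know how simple the wire format is.)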

Once I have OpenTSDB running, I can work on getting a cool new tool running, called Bosun. Created by my friend Kyle Brandt at Stack Exchange, Bosun is a bit like an IDE, or maybe a DSL, for alerting. It uses OpenTSDB on the back end, so there's definitely an infrastructure requirement to make this work (although they have a Docker instance to make it easier to learn, I've been told that you really don't want to run that for your production instance because, in Kyle's words, "You'll have a bad time").

Cool Tools
I ALWAYS come back from LISA with a list of cool tools that I want to check out. I usually forget about them. Now, I'm just going to write them down so you can play, too.

  • Ftrace
  • Ftrace is the Linux kernel's internal tracer, included in the kernel since 2.6.27. Although Ftrace is named after the function tracer, it includes many more capabilities. But the function tracer is the part of Ftrace that makes it unique: you can trace almost any function in the kernel, and with dynamic Ftrace, there's no overhead when it isn't enabled.

    It's leveraged heavily by Brendan Gregg's perf-tools, which is something else I want to get proficient with. Check out more info here. (There's also a quick sketch of turning on the function tracer after this list.)

  • Vdbench
  • A disk IO benchmarking tool, now by Oracle.

  • OIP
  • Super-sweet real-time network traffic visualizer that I saw in action at the LISA Labs. They actually tracked down a compromised machine while I stood there watching the cool animation.

  • Docker and Toys
  • Docker is a great interface for dealing with Linux containers, and it makes lightweight application deployment (and more) easy to spin up. There's an entire ecosystem of stuff to go with it, too.

    • Flannel
    • An etcd-backed overlay network for containers.

    • Pipework
    • SDN for Linux containers

    • Project Atomic
    • Somewhere between a collection of tools and a framework for managing Docker containers.

    • OSTree
    • OSTree is a tool for managing bootable, immutable, versioned filesystem trees. It is not a package system; nor is it a tool for managing full disk images. Instead, it sits between those levels, offering a blend of the advantages (and disadvantages) of both.

    • Kubernetes
    • Clustered container management. You can apparently live-migrate containers between clustered machines. Cool, right?

  • iRODS
  • Open-source object data store. Abstracts data services from data storage to facilitate executing services across heterogeneous, distributed storage systems

  • oVirt
  • Web-based Linux KVM administrative interface. Think vSphere, but for Linux. Kind of.
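
Since I mentioned Ftrace above, here's the sketch I promised of turning on the function tracer by hand. It just writes to the ftrace control files, so it assumes a Linux box with debugfs mounted at /sys/kernel/debug and root privileges (and it's illustrative, not something to leave running):

```python
#!/usr/bin/env python
# Minimal sketch: enable Ftrace's function tracer for a couple of seconds,
# dump the first few lines of the trace buffer, then turn it back off.
# Assumes debugfs is mounted at /sys/kernel/debug and we're running as root.
import time

TRACING = '/sys/kernel/debug/tracing'

def write(path, value):
    with open(path, 'w') as f:
        f.write(value)

write(TRACING + '/current_tracer', 'function')  # pick the function tracer
write(TRACING + '/tracing_on', '1')             # start tracing
time.sleep(2)
write(TRACING + '/tracing_on', '0')             # stop tracing

with open(TRACING + '/trace') as f:             # peek at what we captured
    for i, line in enumerate(f):
        if i >= 20:
            break
        print(line.rstrip())

write(TRACING + '/current_tracer', 'nop')       # put the tracer back to idle
```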

So there you go. There are undoubtedly more things that I heard of but have already forgotten - sorry about that. This should keep you and me both busy for the immediate future, though.

Go, play, have a blast. I know I'm going to. But first, I'm going to get some sleep.

#NASASocial Intro to the First Orion Flight - EFT-1

Date November 5, 2014

This morning, I’m going to start a short series that discusses the technology behind the Orion spacecraft as it will be launched on its test flight, and the capabilities and flight profile.

For some background, Orion is the craft that is designed to hold and support a crew of 6 on a long-term mission in space, far away from Earth. Ever since our last trip to the moon in 1972 (with Apollo 17), all of our manned missions in space, from the Space Shuttle to the Russian Soyuz launches to the International Space Station, have been to low Earth orbit (LEO). The farthest we’ve gone was on Space Shuttle mission STS-82, which repaired the Hubble Space Telescope in 1997.

The distances in space are huge. As Douglas Adams said, "you may think it's a long way down the road to the chemist's, but that's just peanuts to space". Here’s a picture that might help illustrate how far away things are:

The red circle indicates the area considered "Low Earth Orbit". Way over on the right is the moon. This image is to scale.

It’s relatively easy to send unmanned space probes to the far reaches of the solar system (and remember, I said relatively easy, not easy). Space probes can be pretty light. They can run on solar energy or nuclear energy. And unlike manned missions, they don’t need to carry water, food, oxygen, carbon dioxide scrubbers, or waste storage, and they don’t have to have backup systems for each of those things. A manned mission is exponentially harder than one that’s unmanned, and the longer you have to provide support for humans, the harder it gets.

If you’re lucky enough to have been at a museum which houses space capsules, such as the Smithsonian Air and Space Museum in Washington, you’ve probably seen capsules from the Mercury or Gemini programs - small, cramped pods that barely fit one or two people - or maybe you’ve seen the larger Apollo capsules, which held a return crew of three.

The Orion craft performs largely the same function, but its size is greatly increased because of the larger crew that it needs to support, and the length of time that the crew will be onboard. Here’s a diagram of the comparative sizes:

Given that the distance between Earth and Mars varies from 54 million km to 401 million km, and a typical round trip is projected to take years, you are probably looking at that picture thinking, "There’s no way that I would be in that sardine can for that long". And you’d be right. For longer Orion missions, the idea is that the capsule will dock to a "Deep Space Habitat", which includes living facilities that allow the astronauts to be more comfortable for the duration. Here’s an artist’s rendering:

I believe (but can't currently find evidence) that the external habitat will be more lightly constructed than Orion, so when the astronauts need additional protection from an incoming coronal mass ejection, they can take refuge in the more heavily shielded pod until the storm passes. It's also possible that the current plans are to build a more robust habitat with adequate shielding of its own (likely using fresh/waste water).

The Earth is shielded from solar radiation in part by the Van Allen radiation belts. This donut-shaped field around the Earth is caused by the huge spinning ball of metal at the core of our planet. That spinning creates a magnetic field, which extends into the space around the planet and redirects incoming charged particles toward the polar regions, where we see them as the aurora borealis in the north and the aurora australis in the south.

Anyway, we want to see how the capsule does as it passes through these layers of high radiation, so we’re sending the Orion capsule 5,800 kilometers out, which is roughly where the innermost, strongest layer of the radiation ends. The truth is, we’re still gaining information on how deep space radiation from the sun will affect the Orion vehicle (and the people inside it). That’s one of the reasons that the orbital profile for the first test launch is so elliptical.

To get all of this stuff into space, we need a launch vehicle capable of lifting it into orbit and beyond. For that task, NASA is developing the Space Launch System (SLS), a powerful rocket system which has a base not unlike that of the space shuttle’s main fuel tank and dual solid rocket boosters. A while back, NASA released this video of how a launch will look. I think it’s pretty sweet:

For this test flight, and until the SLS lifter is built, NASA will be testing by using the largest rocket that we’ve got, the Delta IV Heavy. This lifter features three massive fuel tanks pushed by three enormous engines, with 6,280 kilonewtons of thrust. Although the Saturn V was capable of lifting more, this rocket will do what we need it to, until the SLS is done. (And actually, they just completed a test of the SLS main engines at Stennis Space Center. It sounds awesome!).

The Exploration Flight Test 1 (EFT-1) will include two orbits of Earth. After the launch at Cape Canaveral, Florida, the rocket will ascend to around 250 miles and complete a full orbit while testing its systems, then the final stage of the rocket will fire again, pushing the other side of the orbit up to 3,600 miles (10x higher than the Hubble repair mission I mentioned above). The capsule will then re-enter the atmosphere, slowing itself using the largest heat shield ever constructed, and once the atmosphere is thicker, drogue parachutes will deploy, then the main chutes will finish slowing the capsule down until it splashes into the Pacific Ocean.

All of this will happen over the course of four hours. When it’s over, we’ll have reams of data, the capsule will have traveled tens of thousands of miles, and humanity will be one step closer to being a multi-planet species. I can’t wait.

Checking things over before LISA

Date November 4, 2014

Tomorrow is my last day in the office for a week and a half, so I'm going through the various things that I manage, making sure that stuff is going to be alright while I'm away. You see, next week, I'm at LISA'14 in Seattle, and Wednesday afternoon, I'm flying to Chicago.

"But Matt", you rightly say, "Chicago isn't Seattle. I've seen a map. And that's not close enough to walk, even if you wanted to". Well, you make a good point. That's why I'm not going to be walking. I'm going to be doing something much cooler. Well, "cool" depending on who you are, I suppose. I suspect you'll think it's cool, though, which is why I'm telling you.

Here's my cross-country chariot:

The Amtrak Empire Builder

Yep, I am going to be the absolute envy of my five-year-old self. I've wanted to do this ever since I was a kid reading my fold-out, train-shaped Amtrak propaganda.

The Empire Builder is a 2,000+ mile journey that crosses the Mississippi River and winds through the northern Great Plains, the Montana Rocky Mountains, Glacier National Park, and countless other interesting sights. The trip takes 44 hours, so my wife and I have reserved a "Roomette", which is French for "closet with two beds". But still, we can sleep lying down, so that's good.

Apparently, we will have electricity, but there's no internet on the cross-country trains, which, from my perspective, is great. I want some down-time to spend reading and writing anyway. My wife is taking part in NaNoWriMo, so I suspect she'll feel the same way. So if you hit me up on Twitter, don't expect a reply until Saturday.

So now, back to work and the shoring up of the moving bits. Battening down my particular hatches is much better than it used to be, since I'm part of a team now, but still, I'd feel bad if I stuck one of my coworkers with a faulty <anything> and left on a train for a few days.

By the way, if you're going to LISA, I'll be getting there on Saturday evening. Make sure to say hello if you see me!

(Oh, as an aside: last night, I was a guest on the excellent VU-PaaS podcast with Gurusimran Khalsa, Chris Wahl, and Josh Atwell. Watch their website for when that goes live. I'll post something here, of course, but in the meantime, they've got 20-some episodes that you can listen to if you're interested in virtualization. It's a good use of your time, because they're funny guys, and smart.)

Recalculating Odds of RAID5 URE Failure

Date November 1, 2014

Alright, my normal RAID-5 caveats stand here. Pretty much every RAID level other than 0 is better than single-parity RAID, at least until RAID goes away entirely. If you care about your data and speed, go with RAID-10. If you're cheap, go with RAID-6. If you're cheap and you're on antique hardware, or if you just like arguing about bits, keep reading about RAID-5.

I got into an argument (er, discussion) with the Linux admin at work (who is actually my good friend, and if you can't argue with your friends, who can you argue with?). What sucks is that he was right, and I was wrong. I think I hate being wrong more than, well, pretty much anything I can think of. So after proving to myself that I was wrong, I figured I'd write a blog entry to help other people who are in the same boat as me. Then you can get in arguments with your friends, but you'll be right. And being right is way better.

So, first some groundwork. Hard drives usually write data and are then able to read it back. But not always. There is such a thing as an "Unrecoverable Read Error", or URE. The likelihood of this happening is called the URE rate, and it's published in terms of bits read. For instance, the super cheap hard drives that you see in the NewEgg or Microcenter ads probably have a URE rate of 1 in 10^14 bits. As Google will tell us, that's 12.5 terabytes.

That sounds like a lot of bytes until you realize that it's 2014, and you can totally buy a 6TB hard drive. So if you've got a home NAS (like this sweet-looking 4-bay NAS backing a media center), you could have 24TB worth of hard drives sitting there. What a wonderful world we live in.

The argument I've been making until now is that as your RAID array approaches the number of bits stated in the URE rate, you approach a virtual certainty that you will lose data during a rebuild. And that's kind of true, but the near-certainty only shows up well past the array size, and mathematically, it doesn't look like what I thought it would at all. That's what I'm here to explain.

Common sense says, "If I have a 12 TB array, and the URE rate for the drives in the array is 1 in 10^14, then I'm almost guaranteed to experience a URE when reading all the data." The math tells the story differently, though.

The big assumption here is that a URE is an independent event, and I believe that to be the case. It's like dice, rather than bingo. That is, if you have 65 dice, each with 65 sides, and you roll them all at the same time, how likely is it that at least one die rolled a 1? Around 63%, which I'll explain shortly. Now, take 65 bingo balls, and pull 65 balls out of the basket. How likely is it that you pulled the #1 ball? 100%. See the difference? No one is saying, "ok, we've almost read the entire hard drive, time to throw in an error". So the assumption is that it happens randomly, and that it happens at the rate published by the manufacturer. Those are the only numbers I have, so that's what we're going to use. (Also, if you have evidence that UREs do not happen randomly, and that they are, in fact, dependent on the amount of data read, then please comment with a link to the evidence. I'd love to see it!).
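
If you'd rather see that dice claim empirically than take my word for it, here's a quick simulation sketch - roll 65 fair 65-sided dice a bunch of times, and count how often at least one of them shows a 1:

```python
#!/usr/bin/env python
# Quick Monte Carlo check of the dice analogy: roll 65 fair 65-sided dice,
# and count how often at least one of them comes up 1. Should land near 63%.
import random

TRIALS = 100000
hits = 0
for _ in range(TRIALS):
    if any(random.randint(1, 65) == 1 for _ in range(65)):
        hits += 1

print("At least one '1' showed up in %.1f%% of trials" % (100.0 * hits / TRIALS))
```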

The equation that we care about is this:



Probability = 1-((X-1)/X)^R

where X is the number of outcomes, and R is the number of trials. So, if we roll a normal die (a D6) once, what are the odds that it comes up 6? The equation looks like this:

1-(5/6)^1

(That is, there are 5 sides that AREN'T the one we're looking for, and we're rolling it once). That equation comes out to 0.16666..., so there is roughly a 16.7% chance that the number we roll is a 6. Since we're only rolling once, an easier way is to say that we have a 1 in 6 chance, or 1/6, which also comes out to 0.16666..., so the math checks out.
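
If it helps, here's that same equation as a tiny Python helper, with the single die roll as a sanity check (nothing hard-drive-specific yet):

```python
#!/usr/bin/env python
# The "at least one hit" probability from above: 1 - ((X-1)/X)^R,
# where X is the number of possible outcomes and R is the number of trials.
def p_at_least_one(outcomes, trials):
    return 1.0 - ((outcomes - 1.0) / outcomes) ** trials

print(p_at_least_one(6, 1))  # one roll of a d6 -> 0.1666...
```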

Let's complicate things by rolling the die 6 times. This is a simple enough experiment (and you probably have enough experience with Risk to intuitively know that your odds are not 1 in 1 - you KNOW there's a chance that you won't roll a single six in all 6 rolls). So let's use the equation:

1-(5/6)^6

The only difference is that, instead of rolling it once, we're rolling it six times. Google will tell us that the odds of rolling at least one six are about 66.5%.

By this point, you should have a creeping suspicion that reading 10^14 bits from a hard drive doesn't guarantee a failure, even if the URE rate is 1 in 10^14 - but the difference is probably still bigger than you imagine. Let's do the equation:

1-((10^14-1)/10^14)^(10^14)

What we get is 0.6321.... This is actually very, very close to an interesting number: 1-1/e (e is Euler's number, for the people who, like me, sadly, are not mathematically inclined). As you keep increasing the die size, from 6 sides to 6 billion, if you roll the die a number of times equal to the number of sides, the odds of a particular side showing up at least once get closer and closer to 1-1/e. Weird, huh?

So, does that mean the odds of failure during a RAID rebuild will never get worse than 63%? No way. Remember, that's only if the number of sides remains equal to the number of rolls. As the RAID array size increases, the odds increase, too.

Let's look at the 4-bay NAS that I linked to, filled with those 6TB hard drives, because who wouldn't want that? The scenario is that we have four 6TB drives in RAID-5, and one of them has died. That means that when we replace the drive, we have to read every bit on the other three 6TB drives in order to rebuild the missing data from parity. That's 18TB, or 1.44*10^14 bits. Let's plug that into the equation:

1-((10^14-1)/10^14)^(1.44*10^14)

which comes to 0.763..., so about a 76% chance of hitting at least one URE during the rebuild. Much worse odds than a coin flip, but definitely not a certainty.

How about if we had a 6-bay array? Well, during a drive failure, we'd be reading 5 drives' worth, or 30TB (if the drives are 6TB), and 30TB is 2.4*10^14 bits. Let's try THAT equation:

1-((10^14-1)/10^14)^(2.4*10^14)

90.9%. I'm not sure what your degree of certainty is, but I'd be getting nervous by this point. OK, well before this point. Another drive?

1-((10^14-1)/10^14)^(2.88*10^14)

94%. It doesn't actually hit 99% until you get to 10 6TB drives. But really, how close to certain do you need to get?
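
Pulling it all together, here's the same little helper applied to the rebuild scenarios above. The drive counts are the surviving drives being read during the rebuild (so 3 is the 4-bay array, 5 is the 6-bay, and so on), assuming 6TB drives, a 1-in-10^14 URE rate, and independent errors:

```python
#!/usr/bin/env python
# Probability of hitting at least one URE while reading N surviving 6TB
# drives during a RAID-5 rebuild, assuming a 1-in-10^14 URE rate and that
# UREs are independent events (the big assumption discussed above).
URE_OUTCOMES = 10 ** 14        # one URE per 10^14 bits read
BITS_PER_TB = 8 * 10 ** 12     # decimal terabytes, like the drive vendors use

def p_at_least_one(outcomes, trials):
    return 1.0 - ((outcomes - 1.0) / outcomes) ** trials

for drives_read in (3, 5, 6, 9):
    bits = drives_read * 6 * BITS_PER_TB
    p = p_at_least_one(URE_OUTCOMES, bits)
    print("%d drives read (%d TB): %.1f%% chance of at least one URE"
          % (drives_read, drives_read * 6, 100 * p))
```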

In conclusion, I just wanted to write up what I found, because it was counter-intuitive, and I thought that the probabilities might help other people too. If you have questions, or if I did the math wrong, just comment below and let me know.

Also, I don't know if they're going to cover this, but at LISA this year, there is a Statistics for Ops course and an R for SysAdmins course that you might check into, if you're interested in this sort of thing. I'm definitely going to be in those classes. Hopefully I'll see you there!

Repping for SysAdmins at the Space Coast! #NASASocial

Date October 30, 2014

Like most children, I was a huge fan of spaceships when I was little. I remember watching Space Shuttle launches on TV, and watching interviews with astronauts in school. I always wanted to go see a launch in person, but that was hard to do as a kid in West Virginia. As I got older, I might have found other interests, but I never lost my love of space, technology, sci-fi, and the merging of all of those things. When I took a year away from system administration, the first hobby I picked up was model rocketry. I didn't really see any other option; it was just natural.

Well, a while back, I saw a post from one of the NASA Social accounts about how they were inviting social media people to come to the initial test launch of the Orion spacecraft in December. I thought... "Hey, I'm a social media people...I should try to get into this!" I don't spend a TON of time talking about space-related activities here, since this is a system administration blog, but I do merge my interests as often as possible, like with my post on instrumentation, or Kerbal Space System Administration, and I understand that I'm not alone in having these two interests. I suspected that, if I were accepted to this program, it would be of interest to my readers (meaning: you).

Well, this morning, I got the email. I'm accepted. How awesome is that?
(Hint: Very Awesome.)

So, at the beginning of December, I will be heading to Kennedy Space Center to attend a two-day event, where I'll get tours and talk to engineers and administrators, and get very up close and cozy with the space program, and see the Orion launch in person. Literally a lifelong dream. I'm so excited, you've got no idea. Really, you haven't. I'm not even sure it's hit me yet.

The code for this mission is EFT-1, for Exploration Flight Test 1. This is the crew module that will take humanity to deep space. This test flight's profile sends the capsule 5,800km (3,600 miles) into space (the International Space Station orbits at around 330km), then has it re-enter the atmosphere at 32,000km/h (20,000mph), to be slowed down through friction on its heat shield and then 11 parachutes. The entire mission takes 4 hours.

If you follow me on any of my social media accounts, you can prepare to see a lot of space stuff soonish. If you're not interested, I'm sorry about that, and I won't take it personally if you unfollow and come back later in December or next year. But you should stick around, because it's going to be a really fun trip. And I'm going to be blogging here as well, of course.

If you don't already, follow me on Twitter, Facebook, Instagram, Flickr, and Google+. You can follow the conversation about this event by using the #NASASocial and #Orion hashtags.

So thank you all for reading my blog, for following me on social media, and for making it possible for me to do awesome things and share them with you. It's because of you that I can do stuff like this, and I'm eternally grateful. If you have any special requests on aspects of this mission, or of my experiences, that you'd like covered, please comment below and let me know. I can't promise anything, but I can try to make it happen. Thanks again.