#NASASocial Intro to the First Orion Flight - EFT-1

Date November 5, 2014

This morning, I’m going to start a short series on the technology behind the Orion spacecraft, its capabilities, and the flight profile of its upcoming test launch.

For some background, Orion is the craft that is designed to hold and support a crew of 6 on a long-term mission in space far away from Earth. Ever since our last trip to the moon in 1972 (with Apollo 17), all of our manned missions in space, from the Space Shuttle to the Russian Soyuz launches to the International Space Station, have been to low Earth orbit (LEO). The farthest we’ve gone since then was Space Shuttle mission STS-82, the 1997 flight to repair the Hubble Space Telescope.

The distances in space are huge. As Douglas Adams said, "you may think it's a long way down the road to the chemist's, but that's just peanuts to space". Here’s a picture that might help illustrate how far away things are:

The red circle indicates the area considered "Low Earth Orbit". Way over on the right is the moon. This image is to scale.

It’s relatively easy to send unmanned space probes to the far reaches of the solar system (and remember, I said relatively easy, not easy). Space probes can be pretty light. They can run on solar energy or nuclear energy. And unlike manned missions, they don’t need to carry water, food, waste storage, oxygen, or carbon dioxide scrubbers, and they don’t have to have backup systems for each of those things. A manned mission is exponentially harder than one that’s unmanned, and the longer you have to provide support for humans, the harder it gets.

If you’re lucky enough to have been at a museum which houses space capsules, such as the Smithsonian Air and Space in Washington, you’ve probably seen capsules from the Mercury or Gemini programs: small, cramped pods which barely fit one or two people. Or maybe you’ve seen the larger Apollo capsules, which held a return crew of three.

The Orion craft performs largely the same function, but its size is greatly increased because of the larger crew that it needs to support, and the length of time that the crew will be onboard. Here’s a diagram of the comparative sizes:

Given that the distance between Earth and Mars varies from 54 million km to 401 million km, and a typical round trip is projected to take years, you are probably looking at that picture thinking, "There’s no way that I would be in that sardine can for that long". And you’d be right. For longer Orion missions, the idea is that the capsule will dock to a "Deep Space Habitat", which includes living facilities that allow the astronauts to be more comfortable for the duration. Here’s an artist’s rendering:

I believe (but can't currently find evidence) that the external habitat will be more lightly constructed than Orion, so when the astronauts need additional protection from incoming coronal mass ejections, they can take refuge in the more heavily shielded pod until the storm passes. It's also possible that the current plan is to build a more robust habitat with adequate shielding of its own (likely using fresh/waste water).

The Earth is shielded from solar radiation in part by the Van Allen radiation belts. This donut-shaped region around the Earth exists because of the huge spinning ball of metal at the core of our planet. That spinning creates a magnetic field which extends into the space around the planet, traps and deflects incoming charged particles, and funnels some of them toward the polar regions, where we see them as the aurora borealis in the north and the aurora australis in the south.

Anyway, we want to see how the capsule does as it passes through these layers of high radiation, so we’re sending the Orion capsule 5,800 kilometers away, which is roughly where the innermost, strongest layer of the radiation ends. The truth is, we’re still gaining information on how deep space radiation from the sun will affect the Orion vehicle (and the people inside it). That’s one of the reasons that the orbital profile for the first test launch is so elliptical.

To get all of this stuff into space, we need a launch vehicle capable of lifting it into orbit and beyond. For that task, NASA is developing the Space Launch System (SLS), a powerful rocket system which has a base not unlike that of the space shuttle’s main fuel tank and dual solid rocket boosters. A while back, NASA released this video of how a launch will look. I think it’s pretty sweet:

For this test flight, and until the SLS lifter is built, NASA will be testing by using the largest rocket that we’ve got: the Delta IV Heavy. This lifter features three massive booster cores, each with its own enormous engine, producing 6,280 kilonewtons of thrust. Although the Saturn V was capable of lifting more, this rocket will do what we need it to until the SLS is done. (And actually, they just completed a test of the SLS main engines at Stennis Space Center. It sounds awesome!)

The Exploration Flight Test 1 (EFT-1) mission will include two orbits of Earth. After launch from Cape Canaveral, Florida, the rocket will ascend to around 250 miles and complete a full orbit while testing its systems, then the final stage will fire again, pushing the far side of the orbit up to 3,600 miles (10x higher than the Hubble repair mission I mentioned above). The capsule will then re-enter the atmosphere, slowing itself using the largest heat shield ever constructed; once the atmosphere is thicker, drogue parachutes will deploy, and then the main chutes will finish slowing the capsule down until it splashes into the Pacific Ocean.

All of this will happen over the course of four hours. When it’s over, we’ll have reams of data, the capsule will have traveled tens of thousands of miles, and humanity will be one step closer to being a multi-planet species. I can’t wait.

Checking things over before LISA

Date November 4, 2014

Tomorrow is my last day in the office for a week and a half, so I'm going through the various things that I manage, making sure that stuff is going to be alright while I'm away. You see, next week, I'm at LISA'14 in Seattle, and Wednesday afternoon, I'm flying to Chicago.

"But Matt", you rightly say, "Chicago isn't Seattle. I've seen a map. And that's not close enough to walk, even if you wanted to". Well, you make a good point. That's why I'm not going to be walking. I'm going to be doing something much cooler. Well, "cool" depending on who you are, I suppose. I suspect you'll think it's cool, though, which is why I'm telling you.

Here's my cross-country chariot:

The Amtrak Empire Builder

Yep, I am going to be the absolute envy of my five-year-old self. I've wanted to do this ever since I was a kid reading my fold-out train-shaped Amtrak propaganda.

The Empire Builder is a 2,000+ mile journey that crosses the Mississippi River, winds through the northern Great Plains, the Montana Rocky Mountains, Glacier National Park, and countless other interesting sights. The trip takes 44 hours, so my wife and I have reserved a "Roomette", which is French for "closet with two beds". But still, we can sleep lying down, so that's good.

Apparently, we will have electricity, but there's no internet on the cross-country trains, which, from my perspective, is great. I want some down-time to spend reading and writing anyway. My wife is taking part in NaNoWriMo, so I suspect she'll feel the same way. So if you hit me up on twitter, don't expect a reply until Saturday.

So now, back to work and the shoring up of the moving bits. Battening down my particular hatches is much better than it used to be, since I'm part of a team now, but still, I'd feel bad if I stuck one of my coworkers with a faulty <anything> and left on a train for a few days.

By the way, if you're going to LISA, I'll be getting there on Saturday evening. Make sure to say hello if you see me!

(Oh, as an aside: last night, I was a guest on the excellent VU-PaaS podcast with Gurusimran Khalsa, Chris Wahl, and Josh Atwell. Watch their website for when that goes live. I'll post something here, of course, but in the meantime, they've got 20-some episodes that you can listen to if you're interested in virtualization. It's a good use of your time, because they're funny guys, and smart.)

Recalculating Odds of RAID5 URE Failure

Date November 1, 2014

Alright, my normal RAID-5 caveats stand here. Pretty much every RAID level other than 0 is better than single-parity RAID, and that will be true right up until RAID goes away entirely. If you care about your data and speed, go with RAID-10. If you're cheap, go with RAID-6. If you're cheap and you're on antique hardware, or if you just like arguing about bits, keep reading about RAID-5.

I got into an argument (okay, "discussion") with the Linux admin at work (who is actually my good friend, and if you can't argue with your friends, who can you argue with?). What sucks is that he was right, and I was wrong. I think I hate being wrong more than, well, pretty much anything I can think of. So after proving to myself that I was wrong, I figured I'd write a blog entry to help other people who are in the same boat as me. Then you can get in arguments with your friends, but you'll be right. And being right is way better.

So, first some groundwork. Hard drives usually write data and are then able to read it back. But not always. There is such a thing as an "Unrecoverable Read Error", or URE. The likelihood of this happening is called the URE rate, and it's published as a number of bits read per error. For instance, the super cheap hard drives that you get in your NewEgg or Microcenter ads probably have a URE rate of 1 error in 10^14 bits. As Google will tell us, 10^14 bits is 12.5 terabytes.
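If you want to double-check that conversion, the arithmetic is quick. Here's a throwaway Python sketch (the 10^14 figure is just the published rate from above):

# Sanity-checking "1 in 10^14 bits is 12.5 terabytes"
ure_bits = 10**14           # published URE rate: one error per 10^14 bits read
ure_bytes = ure_bits / 8    # 1.25e13 bytes
print(ure_bytes / 10**12)   # 12.5 (decimal terabytes, the way drive vendors count)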

That sounds like a lot of bytes until you realize that it's 2014, and you can totally buy a 6TB hard drive. So if you've got a home NAS (like this sweet-looking 4-bay NAS backing a media center), you could have 24TB worth of hard drives sitting there. What a wonderful world we live in.

The argument I've been making up until now is that as the size of your RAID array approaches the number of bits in the URE rate, you approach a virtual certainty that you will lose data during a rebuild. And that's kind of true, but the near-certainty only shows up once you've gone well past that number of bits, and mathematically, it's not what I thought it would look like at all. That's what I'm here to explain.

Common sense says, "If I have a 12 TB array, and the URE rate for the drives in the array is 1 in 10^14, then I'm almost guaranteed to experience a URE when reading all the data." The math tells the story differently, though.

The big assumption here is that a URE is an independent event, and I believe that to be the case. It's like dice, rather than bingo. That is, if you have 65 dice, each with 65 sides, and you roll them all at the same time, how likely is it that at least one die rolled a 1? Around 63%, which I'll explain shortly. Now, take 65 bingo balls, and pull 65 balls out of the basket. How likely is it that you pulled the #1 ball? 100%. See the difference? No one is saying, "ok, we've almost written the entire hard drive, time to throw in an error". So the assumption is that it happens randomly, and that it happens at the rate published by the manufacturer. Those are the only numbers I have, so that's what we're going to use. (Also, if you have evidence that UREs do not happen randomly, and that they are, in fact, dependent on the amount of data written, then please comment with a link to the evidence. I'd love to see it!)
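If you'd rather convince yourself of the dice-versus-bingo difference by brute force, here's a quick simulation sketch in Python (the 65s are just the numbers from the example above, and the function names are mine):

import random

SIDES = 65   # a 65-sided die
TRIALS = 65  # rolled 65 times (or: 65 such dice rolled at once)

def at_least_one_1_dice():
    # Independent events: every roll can come up anything, every time.
    return any(random.randint(1, SIDES) == 1 for _ in range(TRIALS))

def pulled_the_1_ball_bingo():
    # Dependent events: once a ball is out of the basket, it stays out.
    balls = random.sample(range(1, SIDES + 1), TRIALS)  # draw without replacement
    return 1 in balls

runs = 100_000
print(sum(at_least_one_1_dice() for _ in range(runs)) / runs)      # ~0.63
print(sum(pulled_the_1_ball_bingo() for _ in range(runs)) / runs)  # 1.0, every single time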

The equation that we care about is this:



Probability = 1-((X-1)/X)^R

where X is the number of outcomes, and R is the number of trials. So, if we roll a normal die (a D6) once, what are the odds that it comes up 6? The equation looks like this:

1-(5/6)^1

(That is, there are 5 sides that AREN'T the one we're looking for, and we're rolling it once). That equation comes out to 0.16666..., so there is about a 16.7% chance that the number we roll is a 6. Since we're only rolling once, an easier way is to say that we have a 1 in 6 chance, or 1/6, which also comes out to 0.16666..., so the math checks out.
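Here's that equation as a tiny Python helper, just to double-check the single-roll case (a sketch; the function name is my own invention):

def p_at_least_one(X, R):
    # Probability of at least one "hit" in R independent trials,
    # where each trial hits with probability 1/X.
    return 1 - ((X - 1) / X) ** R

print(p_at_least_one(6, 1))   # 0.1666... - a single roll of a D6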

Let's complicate things by rolling the die 6 times. This is a simple enough experiment (and you probably have enough experience with Risk to intuitively know that your odds are not 1 in 1 - you KNOW there's a chance that you won't roll a single six in all 6 rolls). So let's use the equation:

1-(5/6)^6

The only difference is that, instead of rolling it once, we're rolling it six times. Google will tell us that the odds of rolling at least one six are about 66.5%.

You should be having a creeping suspicion by this point that reading 10^14 bits from a hard drive doesn't mean a guaranteed failure, even if the URE rate is 1 in 10^14 - but the gap between "guaranteed" and reality is probably still bigger than you imagine. Let's do the equation:

1-((10^14-1)/10^14)^(10^14)

What we get is about 0.632. This is actually very, very close to an interesting number: 1-1/e (e is Euler's number, for the people who, like myself, sadly, are not mathematically inclined). As you keep increasing the die size, from 6 to 6 billion and beyond, if you roll the die a number of times equal to the number of sides, the odds of a specific side popping up at least once get closer and closer to 1-1/e. Weird, huh?
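Here's a quick sketch of that convergence, using the same helper as above (the specific die sizes are arbitrary; I just kept making them bigger):

import math

def p_at_least_one(X, R):
    return 1 - ((X - 1) / X) ** R

# Roll an X-sided die X times: how often does one specific side show up at least once?
for sides in (6, 100, 10**6, 10**9):
    print(sides, p_at_least_one(sides, sides))

print("1 - 1/e =", 1 - 1/math.e)   # ~0.63212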

So, does that mean the odds of failure during a RAID rebuild will never get worse than 63%? No way. Remember, that's only when the number of rolls stays equal to the number of sides. As the RAID array size increases, the number of "rolls" goes up while the URE rate stays fixed, so the odds increase, too.

Let's look at the 4-bay NAS that I linked to, and the 6TB hard drives, because who wouldn't want that? The scenario is that we have four 6TB drives, and one of them has died. That means that when we replace the drive, we will have to read every bit on the other three 6TB drives in order to rebuild from parity. That's 18TB, or 1.44*10^14 bits. Let's plug that into the equation:

1-((10^14-1)/10^14)^(1.44*10^14)

which comes to about 0.76, so roughly a 76% chance of hitting a URE during the rebuild. Noticeably worse odds than a coin flip, but definitely not a certainty.

How about if we had a 6-bay array? Well, during a drive failure, we'd be reading 5 drives' worth, or 30TB (if the drives are 6TB), and 30TB is 2.4*10^14 bits. Let's try THAT equation:

1-((10^14-1)/10^14)^(2.4*10^14)

90.9%. I'm not sure what your degree of certainty is, but I'd be getting nervous by this point. OK, well before this point. Another drive?

1-((10^14-1)/10^14)^(2.88*10^14)

94%. It doesn't actually hit 99% until you're up around ten or eleven 6TB drives. But really, how close do you need to get?
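If you want to play with your own drive counts, here's the whole calculation in a few lines of Python, using the same assumptions as above (6TB drives, a 1-in-10^14 URE rate, and a rebuild that has to read every bit on every surviving drive):

URE_RATE = 10**14             # one unrecoverable read error per 10^14 bits read
DRIVE_TB = 6                  # drive size in decimal terabytes
BITS_PER_TB = 8 * 10**12

def p_ure_during_rebuild(drives):
    # RAID-5 rebuild: read every bit on the (drives - 1) surviving disks.
    bits_read = (drives - 1) * DRIVE_TB * BITS_PER_TB
    return 1 - ((URE_RATE - 1) / URE_RATE) ** bits_read

for n in range(4, 12):
    print(f"{n} drives: {p_ure_during_rebuild(n):.1%}")

Running that prints roughly 76% for the 4-bay case and climbs past 99% somewhere around eleven drives, which lines up with the numbers above.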

In conclusion, I just wanted to write up what I found, because it was counter-intuitive to me, and I thought that the probabilities might help other people too. If you have questions, or if I did the math wrong, just comment below to let me know.

Also, I don't know if they're going to cover this, but at LISA this year, there is a Statistics for Ops course and an R for SysAdmins course that you might check into, if you're interested in this sort of thing. I'm definitely going to be in those classes. Hopefully I'll see you there!

Repping for SysAdmins at the Space Coast! #NASASocial

Date October 30, 2014

Like most children, I was a huge fan of spaceships when I was little. I remember watching Space Shuttle launches on TV, and watching interviews with astronauts in school. I always wanted to go see a launch in person, but that was hard to do for a kid in West Virginia. As I got older, I might have found other interests, but I never lost my love of space, technology, sci-fi, and the merging of all of those things. When I took a year away from system administration, the first hobby I picked up was model rocketry. I didn't really see any other option; it was just natural.

Well, a while back, I saw a post from one of the NASA Social accounts about how they were inviting social media people to come to the initial test launch of the Orion spacecraft in December. I thought... "Hey, I'm a social media people...I should try to get into this!". I don't spend a TON of time talking about space-related activities here, since this is a system administration blog, but I do merge my interests as often as possible, like with my post on instrumentation, or Kerbal Space System Administration, and I understand that I'm not alone in having these two interests. I suspected that, if I were accepted to this program, it would be of interest to my readers (meaning: you).

Well, this morning, I got the email. I'm accepted. How awesome is that?
(Hint: Very Awesome.)

So, at the beginning of December, I will be heading to Kennedy Space Center to attend a two-day event, where I'll get tours and talk to engineers and administrators, and get very up close and cozy with the space program, and see the Orion launch in person. Literally a lifelong dream. I'm so excited, you've got no idea. Really, you haven't. I'm not even sure it's hit me yet.

The code for this mission is EFT-1, for Exploration Flight Test 1. This is the crew module that will take humanity to deep space. This test flight's profile sends the capsule 5,800km (3,600 miles) into space (the International Space Station orbits at around 400km), then has it re-enter the atmosphere at 32,000km/h (20,000mph), slowed down by its heat shield and 11 parachutes. The entire mission takes 4 hours.

If you follow me on any of my social media accounts, you can prepare to see a lot of space stuff soonish. If you're not interested, I'm sorry about that, and I won't take it personally if you re-follow later in December or next year. But you should stick around, because it's going to be a really fun trip. And I'm going to be blogging here as well, of course.

If you don't already, follow me on twitter, Facebook, Instagram, Flickr, and Google+. You can follow the conversation about this event by using the #NASASocial and #Orion hashtags.

So thank you all for reading my blog, for following me on social media, and for making it possible for me to do awesome things and share them with you. It's because of you that I can do stuff like this, and I'm eternally grateful. If you have any special requests for aspects of this mission, or of my experiences, that you'd like me to cover, please comment below and let me know. I can't promise anything, but I can try to make it happen. Thanks again.

Appearance on @GeekWhisperers Podcast!

Date October 29, 2014

I was very happy to visit Austin, TX not long ago to speak at the Spiceworld Austin conference, held by Spiceworks. Conferences like that are a great place to meet awesome people you only talk to on the internet and to see old friends.

Part of the reason I was so excited to go was that John Mark Troyer had asked me if I wanted to take part in the Geek Whisperers Podcast. Who could say no?

With over 60 episodes to their name, Geek Whisperers fills an amazing niche of enterprise solutions, technical expertise, and vendor luminaries that spans every market that technology touches. Hosted by John, Amy "CommsNinja" Lewis, and fellow Bostonite Matt Brender (well, he lives in Cambridge, but that’s close, right?), they have been telling great tales and having a good time doing it for years. I’ve always respected their work, and I was absolutely touched that they wanted to have me on the show.

We met on the Tuesday of the conference and sat around for an hour talking about technology, tribes, and the progression of people and infrastructure. I had a really good time, and I hope they did, too.

You can listen to the full podcast on Geek-Whisperers.com or through iTunes or Stitcher.

Please comment below if you have any questions about things we discussed. Thanks for listening!

Accidental DoS during an intentional DoS

Date October 24, 2014

Funny, I remember always liking DOS as a kid...

Anyway, on Tuesday, I took a day off, but ended up getting a call at home from my boss at 4:30pm or so. We were apparently causing a DoS attack, he said, and the upstream university had disabled our net connection. He was trying to conference in the central network (ITS) admins so we could figure out what was going on.

I sat down at my computer and was able to connect to my desktop at work, so the entire network wasn't shut down. It looked like what they had actually done was turn off outbound DNS, which made me suspect that one of the machines on our network was performing a DNS reflection attack, but that was just a sign of my not thinking straight. If that had been the case, they would have shut down inbound DNS rather than outbound.

After talking with them, they saw that something on our network had been initiating a denial of service attack on DNS servers using hundreds of spoofed source IPs. Looking at graphite for that time, I suspect you'll agree when I say, "yep":

Initially, the malware was spoofing IPs from all kinds of IP ranges, not just things in our block. As it turns out, I didn't have a sanity check in the egress ACLs on my gateway that said, "nothing leaves that isn't in our IP block", which is my bad. As soon as I added that, a lot of the traffic died. Unfortunately, because the university uses private IP space in the 10.x.x.x range, I couldn't block that outbound. And, of course, the malware quickly caught on and started exclusively using 10.x addresses to spoof from. So we got shut down again.

Over the course of a day, here's what the graph looked like:

Now, on the other side of the coin, I'm sure you're screaming "SHUT DOWN THE STUPID MACHINE DOING THIS", because I was too. The problem was that I couldn't find it. Mostly because of my own ineptitude, as we'll see.

Alright, it's clear from the graph above that there were some significant bits being thrown around. That should be easy to track. So, let's fire up graphite and figure out what's up.

Most of my really useful graphs are thanks to the ironically named Unhelpful Graphite Tip #6, where Jason Dixon describes the "mostDeviant" function, which is pure awesome. The idea is that, if you have a BUNCH of metrics, you probably can't see much useful information because there are so many lines. So instead, you probably want the few weirdest metrics out of that collection, and that's what you get. Here's how it works.

In the graphite box, set the time frame that you're looking for:

Then add the graph data that you're looking for. Wildcards are super-useful here. Since the uplink graph above is a lot of traffic going out of the switch (tx), I'm going to be looking for a lot of data coming into the switch (rx). The metric that I'll use is:


CCIS.systems.linux.Core*.snmp.if_octets-Ethernet*.rx

That metric, by itself, looks like this:

There's CLEARLY a lot going on there. So we'll apply the mostDeviant filter:

and we'll select the top 4 metrics. At this point, the metric line looks like this:


mostDeviant(4,CCIS.systems.linux.Core*.snmp.if_octets-Ethernet*.rx)

and the graph is much more manageable:
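As an aside: if you'd rather pull the raw numbers than squint at a rendered graph, the same target expression works against graphite's render API. Here's a rough Python sketch - the graphite hostname is a placeholder, not our actual box:

import requests

GRAPHITE = "http://graphite.example.com"   # placeholder hostname
TARGET = "mostDeviant(4,CCIS.systems.linux.Core*.snmp.if_octets-Ethernet*.rx)"

resp = requests.get(GRAPHITE + "/render",
                    params={"target": TARGET, "from": "-4h", "format": "json"})

for series in resp.json():
    # Each series looks like {"target": "...", "datapoints": [[value, timestamp], ...]}
    values = [v for v, _ in series["datapoints"] if v is not None]
    print(series["target"], "peak:", max(values) if values else "no data")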

Plus, most usefully, now I have port numbers to investigate. Back to the hunt!

As it turns out, those two ports are running to...another switch. An old switch that isn't being used by more than a couple dozen hosts. It's destined for the scrap heap, and because of that, when I was setting up collectd to monitor the switches using the snmp plugin, I neglected to add this switch. You know, because I'm an idiot.

So, I quickly modified the collectd config and pushed the change up to the puppet server, then refreshed the puppet agent on the host that does snmp monitoring and started collecting metrics. Except that, by that point, the attack had stopped...so it became a waiting game for something that might never happen again. As luck would have it, the attack started again, and I was able to trace it to a port:

Gotcha!

(Notice how we actually WERE under attack when I started collecting metrics? It was just so tiny compared to the full-on attack that we thought it might have been normal baseline behavior. Oops.)

So, checking that port led to...a VM host. And again, I encountered a road block.

I've been having an issue with some of my VMware ESXi boxes where they will encounter occasional extreme disk latency and fall out of the cluster. There are a couple of knowledgebase articles ([1] [2]) that sort-of kind-of match the issue, but not entirely. In any event, I haven't ironed it out. The VMs are fine during the disconnected phase, and the fix is to restart the management agents through the console, which I was able to do and then I could manage the host again.

Once I could get a look, I could see that there wasn't a lot on that machine - around half a dozen VMs. Unfortunately, because the host had been disconnected from the vCenter Server, stats weren't being collected on the VMs, so we had to wait a little bit to figure out which one it was. But we finally did.

In the end, the culprit was a NetApp Balance appliance. There's even a knowledge base article on it being vulnerable to ShellShock. Oops. And why was that machine even available to the internet at large? Double oops.

I've snapshotted that machine and paused it. We'll probably have some of the infosec researchers do forensics on it, if they're interested, but that particular host wasn't even being used. VM cruft is real, folks.

Now, back to the actual problem...

The network uplink to the central network happens over a pair of 10Gb/s fiber links. According to the graph, you can see that the VM was pushing around 100MB/s (800Mb/s). This is clearly Bad(tm), but it's not world-ending bad for the network, right? Right. Except...

Upstream of us, we go through an in-line firewall (which, like OUR equipment, was not set to filter egress traffic based on spoofed source IPs - oops, but not mine this time, finally!). We are assigned to one of five virtual firewalls on that one physical piece of hardware, but regardless of the virtual split, the physical hardware has a limit of around a couple hundred thousand concurrent sessions.

For a network this size, that is probably(?) reasonable, but a session counts as a stream of packets between a source IP and a destination IP. Every time you change the source IP, you get a new session, and when you spoof thousands of source IPs...guess what? And since it's a per-physical-device limit, our one rogue VM managed to take out the resources of the big giant firewall.

In essence, this one intentional DoS attack on a couple of hosts in China successfully DoS'd our university as sheer collateral damage. Oops.

So, we're working on ways to fix things. A relatively simple step is to prevent egress traffic from source IPs that aren't our own; that's done now. We've also been told that we need to block egress DNS traffic, except from known hosts or to public DNS servers. That's in place too, but I really question its efficacy - there are a lot of other protocols that use UDP, and NTP reflection attacks are a thing. Anyway, we're now blocking egress DNS, and I've had to special-case a half-dozen research projects, but that's fine by me.
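For what it's worth, the logic behind that first fix is tiny: the gateway just shouldn't forward anything whose source address isn't inside our allocation. Here's the idea as a little Python sketch (the prefixes are placeholder documentation ranges rather than our real block, and the real fix is an egress ACL on the gateway, not a script):

from ipaddress import ip_address, ip_network

# Placeholder prefixes standing in for "our IP block" - not the real ones.
OUR_PREFIXES = [ip_network("192.0.2.0/24"), ip_network("198.51.100.0/24")]

def should_forward(src_ip):
    # Egress sanity check: only forward traffic whose source address is ours.
    return any(ip_address(src_ip) in prefix for prefix in OUR_PREFIXES)

print(should_forward("192.0.2.45"))   # True  - legitimately ours
print(should_forward("10.99.1.2"))    # False - spoofed, or at least not ours to send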

In terms of things that will make an actual difference, we're going to be re-evaluating the policies in place for putting VMs on publicly-accessible networks, and I think it's likely that there will need to be justification for providing external access to new resources, whereas in the past, it's just been the default to leave things open because we're a college, and that's what we do, I guess. I've never been a fan of that, from a security perspective, so I'm glad it's likely to change now.

So anyway, that's how my week has been. Fortunately, it's Friday, and my sign is up to "It has been [3] Days without a Network Apocalypse".