IT’s Value as a Marketing Device

I worked for and with a CEO at one time whom I really respected as a person and as a business leader. There was one particular habit of his that never quite sat right with me: he always wanted to turn the technologies we bought into some kind of marketing release.

Now, I like marketing as much as the next sysadmin. OK, that’s a lie – I actually don’t mind doing marketing. The problem is that he always wanted to publicize the really mundane purchases we made. His take was, “We just spent thousands of dollars on new servers; let’s try to get some return on them.”

I’m not sure that he really understood that the return was what the servers did, but I suppose you can’t fault him for trying to get some marketing. The issue was that we weren’t exactly buying world-class anythings. The biggest infrastructure migration I did involved replacing the entire server stack with recently EOL’d Dell PowerEdge 1955 blades…not exactly something that you want to advertise. The company was used to a shoestring budget, though, so paying $40,000 for two blade chassis full of machines, plus our first SAN storage array, a new database server, and a single Fibre Channel switch may have seemed like a big deal on the CapEx line, but my boss and I did as much as we could to assure him that it wasn’t.

I was thinking about that situation not too long ago, and it occurred to me that the whole time I was downplaying the technology we used, I should have been talking up what we did with it. I’m not going to say we were cutting edge, but there were a few years where the service uptime was significantly above four nines, and I know of at least one year where it was five nines. Granted, I’m not saying it couldn’t have gone down, but it didn’t. Ex post facto, you can declare whatever uptime you actually achieved.
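In case the “nines” shorthand is unfamiliar, here’s a quick back-of-the-envelope sketch (in Python, purely illustrative – these aren’t numbers we measured at the time) of how much annual downtime each level allows:

```python
# Back-of-the-envelope: allowed downtime per year for "N nines" of uptime.
def downtime_minutes_per_year(nines: int) -> float:
    minutes_per_year = 365 * 24 * 60  # 525,600 minutes in a non-leap year
    unavailability = 1 - (1 - 10 ** -nines)
    return minutes_per_year * unavailability

for n in (3, 4, 5):
    print(f"{n} nines: about {downtime_minutes_per_year(n):.1f} minutes of downtime/year")
```

Four nines works out to under an hour of downtime a year, and five nines to just over five minutes – which is why I’m happy to claim it after the fact.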

The marketing line shouldn’t have been “we use world-class hardware” (because we didn’t, and couldn’t claim that); it should have been “we do amazing things with technology,” which was true, because we did.

Am I alone here? Has your company ever asked you for pictures of the racks and information to do marketing with? How did you handle it?

Post-LISA Pre-Holiday Filler

LISA was exceptionally late in the year this time, falling only a couple of weeks before Christmas. The inevitable let-down of coming back to the “real world,” where I’ve got to “work” and “do productive things,” is always kind of a drag, but this year is a little different thanks to the proximity of the upcoming holiday, plus the fact that I work for a .edu now.

Here at NEU, we actually have the entirety of next week off, plus we don’t come back until Wednesday the 2nd. Like, the whole week plus! I first heard about this back in August when I started, and I was like, “oh man! That’s great! We’re going to have so much time to do infrastructure work. We can take down the whole network and no one will care”. Because, you know, I’m crazy like that. My coworkers quickly mutinied against my ideas because, well, it’s a week off. I’m beginning to see the wisdom of their mentality, and I’ve made plans to head back to Ohio to visit with family for the Christmas break.

I still do kind of wish I could spend some time in the server room fixing things up, but I’ll take care of it next year. I have some pretty large plans, and since Amy and I will be taking the train to Pittsburgh (it was actually the cheapest way to get there – I can’t believe it either), I’ll have plenty of time to make plans and write. I’m really looking forward to the trip.

Here are some of the things I’m working on:

  1. VLAN Renumbering Project

    Our network design is actually pretty archaic. We’ve got several networks where desktops and servers are in the same subnet (what I call a hybrid network – and warn people against). I’m going to be dividing them up. Plus, we’ve got all kinds of subnets that have the same sort of security needs but are in separate networks for no reason I can discern.

    I’ve worked up a security zone “map” of all of the types of access that servers need (and need to provide), and I’ll spend some time on the train figuring out what logical grouping of servers, desktops, and appliances makes the most sense. I’m sick enough that I actually kind of like stuff like this.

  2. Server Room Rebuild

    We have the cheapest racks known to mankind. Well, OK, second cheapest – they do have four posts. But they’re really bad. They’re round-holed, have no cable management features, and the one attribute they have that I don’t hate is…wait, no, I don’t think there’s anything I don’t hate about them.

    As part of the three-year budget estimate that I submitted, I included a request for a new set of racks in the server room. I also want to change the way the server room is laid out. At the moment, we’ve got three rows of racks, and, well, the airflow is kind of interesting:

    Old Server Room Design

    It’s not just me – that’s crazy, right?

    Anyway, I’ve done a survey, and of the 882 rack units in that room, we’re using in the neighborhood of 350. Most of the rest is either free space or shelves holding up desktop machines turned into servers, most of which we don’t want to keep around anyway. So yeah, here’s what I want to do:

    New Server Room Design


    I think that makes a lot more sense. We still have over 500 rack units of space, we get much better airflow with less mixing, and we can use panels to further separate the hot and cold aisles. It should be a lot more energy efficient. Plus, I can get rid of these damned round-hole telco racks. Yech.

  3. Network Core Upgrade

    Right now, our “network core” is a Cisco 6509. Not a 6509E, mind you, but an old-school 6500-series chassis that has been EOL’d (stage 6 in that document). I see this as an opportunity.

    Not only is our 6509 our core, it’s also, in a large way, our distribution switch. Well, one of them, anyway. It’s stacked full of 48-port gigabit blades (including a couple of really crappy cards that don’t even support the crossbar). I want to fix this.

    My thought is that it makes a lot of sense to replace our one 6509 with two 6506Es, and use a “top of rack” (ToR) switch network design where we actually have a ToR switch every other rack or so, then wire every ToR switch to both 6506Es for failover. The number of ports we’ll have at our disposal is higher, we’ll have a more robust design, and unicorns will pop into existence ready to cater to our every whim.

  4. pfSense

    Related to the weird networking layout, we’ve got an array of firewall boxes, some pfSense, some Bluesocket. Apparently, the Bluesocket machines fail pretty frequently. I haven’t seen it myself yet, but I believe them. The plan is to replace the existing array with one or two pfSense boxes (clustered, of course).
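To give a flavor of the VLAN renumbering project above, here’s a rough sketch using Python’s standard ipaddress module. The 10.10.0.0/22 hybrid subnet and the VLAN numbers are made up for illustration; the real plan will come out of the security zone map:

```python
import ipaddress

# Hypothetical "hybrid" subnet that currently mixes servers and desktops.
hybrid = ipaddress.ip_network("10.10.0.0/22")

# Carve it into four /24s and earmark each one for a single security zone.
subnets = list(hybrid.subnets(new_prefix=24))
plan = {
    "VLAN 110 (servers)":    subnets[0],
    "VLAN 120 (desktops)":   subnets[1],
    "VLAN 130 (appliances)": subnets[2],
    "VLAN 199 (spare)":      subnets[3],
}

for vlan, net in plan.items():
    # Usable hosts = total addresses minus the network and broadcast addresses.
    print(f"{vlan}: {net} ({net.num_addresses - 2} usable hosts)")
```

Doing the carving on paper (or in a REPL) first makes it a lot easier to sanity-check the zone map before touching any switch configs.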

So that’s what I’m working on. I’d also like to get further into coding some scripts for AWS. I found out about a great Python library called boto. It’s completely full-featured. The only drawback is that it’s written in Python ;-)

As a side effect of doing some preliminary coding with boto, I’ve been working on IDEifying my vim. Those of you who use vim (and you should) may want to check into these plugins if you aren’t already: NERDTree, snipmate, and vim-surround. I’ll be doing a vim-specific post at some point, so if you have any awesome plugins, let me know in the comments.

That’s it for now. It’s a short week, so back to work!

My LISA12 Experience Thus Far

This is my fourth LISA conference, and it’s unique in my experience in that it is physically separated from the rest of the city by a decent distance. It’s not that we’re far, necessarily, but the Sheraton Marina in San Diego is situated on an artificial peninsula in the San Diego harbor, as you can see in the picture below:

It doesn’t seem like much, but I have noticed a few differences in activities around the conference, and I suspect the location is why. Nothing specific, but more people seem to be staying around the hotel when it comes to meal time.

Last year, the conference was in Boston, and there were dozens of restaurants within five minutes walking distance. Here in San Diego, the only thing I’ve heard of within reasonable walking distance is a deli at the marina. Granted, it’s a delicious deli, but I don’t think it’s open for dinner.

I actually did round up a group of folks, though – 25 of us headed to the Gas Lamp District, a nice area with some old architecture.

The hotel has figured out that it’s the easiest place to eat, too. The prices are surprisingly high, with pub fare going for around $15–$20 a plate. Sadly, I can’t really complain, because the food is actually amazing. I don’t think I’ve ever had better onion rings, and someone I had lunch with earlier claimed that his shrimp cocktail was the best he’d ever had. So it’s expensive, but there’s a tradeoff.

Overall, the experience has been very, very rewarding, just as it always is. I know that I talk about LISA a lot, so you might be jaded if you read this blog often, but I just now found out that a reader whom I interact with pretty regularly didn’t realize that LISA offers really valuable training, like Geoff Halprin’s First Hundred Days, which is a how-to guide for what to do during the first 100 days after joining a new company. It’s not just valuable – the training is unique. You just can’t get it anywhere else. You can see the full gamut of classes on this page. Just scroll through and check out the topics – it’s remarkable.

Overall, the conference is exhausting, but very fun. I’m including a slideshow below so you can see some photos I’ve taken. It’s an iframe, so if that doesn’t load, check out the page back at

My fellow bloggers and I have been writing extensively on the USENIX blog. You should check it out (and subscribe) here: