
Welcome Back! Here are your toys!

I’ve returned to Boston from what was a pretty eventful vacation in South Carolina. While I was gone, a ton of toys arrived and are just waiting for me to play with them.

My core switch is kind of sad. It’s an antiquated 6509, which is End of Life’d. Also, I’ve got 21 racks and one switch, meaning every bit of network connectivity everywhere is a really long patch cable back to that one switch. Yuck.

I’m taking the first step toward really fixing that now. Among the toys waiting for me were a new Nexus 5548 switch and two Nexus 2248 fabric extenders (FEXes). I’ll be getting another 5548 soon, along with four more 2248s. This will give me better-than-end-of-row switching, which is a dramatic improvement.

There are a lot of options right now in the datacenter switch market (at least, there are several choices which don’t involve Cisco), but everything at a university is dependent on Layer 8 designs. Central IS uses these for core switching (actually, I think they use the 7000-series), but since they’re kicking money in for the goods…guess what I got? Right.

I’ve never played with an NX-OS-based switch, so I picked up a book on NX-OS and Cisco Nexus Switching. At first glance, it seems exactly like configuring a Cisco switch always has. I suspect that the XML interface will prove much more useful, though. If any of you have good workflows for using this, I’m all ears!
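While I wait for better ideas, here’s the direction I’m leaning: a minimal Python sketch that uses the ncclient library to open a NETCONF session against the switch’s XML agent and list the capabilities it advertises. The hostname and credentials are placeholders, and it assumes the XML/NETCONF agent is enabled on the switch. Consider it a starting point, not a tested workflow.

    # Minimal sketch: connect to the Nexus's NETCONF/XML agent and list
    # what it advertises. Hostname and credentials are placeholders.
    from ncclient import manager

    with manager.connect(
        host="nexus5548.example.edu",     # placeholder hostname
        port=22,                          # NX-OS serves its XML agent over SSH
        username="admin",                 # placeholder credentials
        password="secret",
        hostkey_verify=False,
        device_params={"name": "nexus"},  # ncclient's NX-OS device handler
    ) as conn:
        # Listing capabilities is a sane first step before trying
        # structured <get> requests against the switch.
        for capability in conn.server_capabilities:
            print(capability)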

Also, the week before last, we got in some other toys that I ordered. Our current racks are 48U telco racks…really shoddy in design and execution: they’re a mishmash of square holes and round holes, the rack units aren’t numbered or even delineated, and the holes aren’t accurately drilled. Again, yuck.

So as part of my overhaul budget, I’m getting 5-6 new racks a year, and the first set got here. After looking at the APC NetShelter line (which I really think is the gold standard for racks these days), I ended up going with Dell’s PowerEdge 4220 line. They’re comparable in feature set and were several hundred dollars cheaper with the academic discount we got. Plus, they had the added benefit of being able to arrive in the office before the end of the fiscal year, which is absolutely and concretely necessary. Because of reasons.

I’ve been gone for a week, and I feel bad about leaving the new racks to my coworkers, but I suspect they probably had some fun. I haven’t had a chance to check them out yet, but I’m really looking forward to improving the situation in the server room. As things change, I’ll make sure to talk about it. Thanks for reading!

The god of storage hates me, I know it

It seems like storage and I never get along. There’s always some difficulty somewhere. It’s always that I don’t have enough, or I don’t have enough where I need it, and there’s always the occasional sorry-we-sold-you-a-single-controller followed by I’ll-overnight-you-another-one which appears to be concluded by sorry-it-won’t-be-there-until-next-week. /sigh

So yes, looking back at my blog’s RSS feed, it was Wednesday of last week that I discovered the problem was the lack of a 2nd storage controller, and it was that same day that we ordered another controller. We asked for it to be overnighted. Apparently overnight is 6 days later, because it should come today. I mean, theoretically, it might not, but hey, I’m an optimist. Really.

Assuming that it does come today, I’m driving to Philadelphia to install it into the chassis. If it doesn’t come, I’m driving to Philadelphia anyway to install another server into the rack, because we promised operations that they’d have a working environment by Wednesday; then I’m going again whenever the part comes.

In almost-offtopic news, I am quickly becoming a proponent of the “skip a rack unit between equipment” school of rack management. You see, there are people like me who shove all of the equipment together so that they can maintain a chunk of extra free space in the rack in case something big comes along. Then there are people who say that airflow and heat dissipation are no good when the servers are like that, so they leave one rack unit between their equipment.

I’ve got blades, so skipping a RU wouldn’t do much for my heat dissipation, but my second processor kit is coming with a 1U pair of battery backups for the storage array, and I REALLY wish that I hadn’t put the array on the bottom of the rack and left the nearest free space about 15 units above it. I’m going to have to do some rearranging, and I’m not sure what I can move yet.

Howto: Racks and rackmounting

I’m going to start a special feature on Fridays: sharing the sorts of tips that sysadmins need to know but can’t learn in a book. There are so many things that you learn on the job, figure out on your own, or run across on the net which make you realize that you’ve been doing something wrong for years. Sometimes you learn about things you might have had no clue about. For instance, I just found out that you can do snapshots with LVM.

Anyway, this Friday, I’m going to be showing you what I know about server racks.

I started out on a network that had a bunch of tower machines on industrial shelves; the sort you pick up at Harbor Freight or Big Lots. When we moved to racks and rackmount servers, it was like a whole new world.

The first difference is form factor. Tower servers are usually described by “tower” size: full tower, mid-tower, half tower. Rack servers are sized in ‘U’s, short for “rack unit”. A rack unit is 1 3/4 inches, so a 2U server is 3.5” tall. The standard width for rackmount servers is 19” across, and server racks vary in depth between 23” and 36”, with deeper being more common.
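The arithmetic is simple enough that it barely needs code, but here’s a quick Python sketch of the conversion, just for illustration (the sizes in the loop are arbitrary examples):

    # One rack unit (1U) is 1.75 inches.
    RACK_UNIT_INCHES = 1.75

    def height_in_inches(rack_units):
        """Convert a height in rack units (U) to inches."""
        return rack_units * RACK_UNIT_INCHES

    # A 2U server is 3.5" tall; a 48U telco rack has 84" of mounting space.
    for u in (1, 2, 4, 48):
        print(f'{u}U = {height_in_inches(u)}"')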

Instead of shelves for each server, rack hardware holds the server in place, usually suspended by the sides of the machine. They allow the server to slide in and out, sometimes permitting the removal of the server’s cover to access internal components. Different manufacturers have different locking mechanisms to keep the servers in place, but all rack kits I’ve seen come with instructions.

To anchor the rack hardware (also known as rails) to the rack itself, a variety of methods have been implemented. There are two main types of rack. Round hole racks require a special type of rack hardware. Much more common are square hole racks, which require the use of rack nuts. The rack nuts act as screw anchors to keep the hardware in place. Some server manufacturers have created rack hardware that fits most square hole racks without the use of rack nuts; Dell’s “RapidRails” system is one with which I’m very familiar. Typically you get the option of which rail system you want when you purchase the server.

Installing the rack nuts is made easier with a specialized tool. I call it the “rack tool”, but I’m sure there’s another name. The rack nut is placed through the hole with the inside clip hooked in place. The tool is inserted through the hole, grabs the outside clip, and then you pull the tool towards you. This draws the outside clip to the front of the hole, securing the nut in place.

A typical server will require eight rack nuts: one at the top and one at the bottom of its rack space, on both the left and right sides, front and back. Each rack unit consists of three square holes, and a rack nut goes in the top and bottom holes on each side. Several pieces of networking equipment have space for four screws, but I’ve found that they stay in place fine with two. I can’t really recommend that for other people, but if you’re low on rack nuts, it’s better than letting the switches just sit there (and it almost always seems like you have fewer rack nuts than you need once your rack starts growing). If you only use two screws to hold in your networking equipment, make sure they’re the bottom two. The center of gravity of a rackmount switch is always behind the screws, so if the top screws hold it up, the bottom has a tendency to swing out, and that’s not good for your rack or your hardware.
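If you’re planning a build-out, the nut budget is easy to script. Here’s a trivial Python sketch based on the counts above; the rack contents in the example are hypothetical:

    # Eight nuts per server (top and bottom of its space, left and right,
    # front and back); two per switch if you only use the bottom screws.
    NUTS_PER_SERVER = 8
    NUTS_PER_SWITCH = 2

    def rack_nuts_needed(servers, switches):
        """Rough count of rack nuts for a rack build-out."""
        return servers * NUTS_PER_SERVER + switches * NUTS_PER_SWITCH

    # Hypothetical rack: ten servers and two switches.
    print(rack_nuts_needed(servers=10, switches=2))  # 84 nuts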

While I’m on the subject of switches, let me give you this piece of advice: mount your switches in the rear of the rack. It seems obvious, but you have no idea how many people mount them on the front in the beginning because “it looks cooler” and then regret it when they continually have to run cables through the rack to the front.

Once your rack starts to fill out, heat will become an issue. Aligning your rack with your air conditioning is another bit of common sense that’s frequently ignored. Air goes into the servers through the front, and hot air leaves through the back. This means that when you cool your rack, you should point the AC towards the front of your rack, not the back.

(Photo: air comes in the front of the rack and leaves out the back.)

It probably comes as no surprise to anyone who’s used a computer, but cables seem to have a mind of their own, and nowhere is that more apparent than in a reasonably full server rack. Many higher-end solutions provide built-in cable management features, such as in-cabinet runs for power or network cables, swing arms for cabling runs, and various places to put tie-downs.

There is no be-all-end-all advice for rack management, but there are some tips I can give you from my own experience.

Use Velcro for cabling that is likely to change in the next year. Permanent or semi-permanent cabling can deal with plastic zipties, as long as they aren’t pulled too tight, but anytime you see yourself having to clip zipties to get access to a cable, use Velcro. It’s far too easy to accidentally snip an Ethernet cable in addition to the ziptie.

Your rackmount servers will, in many cases, come with cable management arms. Ignore them. Melt them down or throw them away, but all they’ve ever done for me is block heat from escaping out the back.

Label everything. That includes both ends of the wires. Do this for all wires, even power cables (or especially power cables). Write down which servers are powered by which power sources.

If you have a lot of similar servers, label the back of the servers too. Pulling the wrong wire from the wrong server is not my idea of a good time.

Keep your rack tool in a convenient, conspicuous spot. I ran a zip tie through the side of the rack, and I hang mine there.

(Some photos were courtesy of Ronnie Garcia via Flickr)