The god of storage hates me, I know it

Date: June 2, 2009

It seems like storage and I never get along. There's always some difficulty somewhere. It's always that I don't have enough, or I don't have enough where I need it, and there's always the occasional sorry-we-sold-you-a-single-controller followed by I'll-overnight-you-another-one, which appears to be concluded by sorry-it-won't-be-there-until-next-week. /sigh

So yes, looking back at my blog's RSS feed, it was Wednesday of last week that I discovered the problem was the lack of a second storage controller, and it was that same day that we ordered another one. We asked for it to be overnighted. Apparently "overnight" means 6 days, because it should come today. I mean, theoretically, it might not, but hey, I'm an optimist. Really.

Assuming that it does come today, I'm driving to Philadelphia to install it in the chassis. If it doesn't come, I'm driving to Philadelphia anyway to install another server in the rack, because we promised operations that they'd have a working environment by Wednesday; then I'll go again whenever the part arrives.

In almost-off-topic news, I am quickly becoming a proponent of the "skip a rack unit between equipment" school of rack management. You see, there are people like me who shove all of the equipment together so that they can maintain a chunk of extra free space in the rack in case something big comes along. Then there are people who argue that packing servers together like that hurts airflow and heat dissipation, so they leave one rack unit between each piece of equipment.

I've got blades, so skipping a RU wouldn't do much for my heat dissipation, but my second processor kit is coming with a 1U pair of battery backups for the storage array, and I REALLY wish that I hadn't put the array at the bottom of the rack and left the nearest free space about 15 units above it. I'm going to have to do some rearranging, and I'm not sure what I can move yet.

  • Dan C

    Leaving 1U might have helped you by chance this time, but probably not the next ;)

    I'm not that keen on the approach. If your site has properly maintained hot and cold aisles, then the gaps just introduce the risk of your exhaust air spoiling the fresh intakes.

    Unless you also have a whole lot of 1U blankers.

  • Matt

    @Dan

    No, next time I'll probably need 2U ;-) Murphy and all...

    Thanks for the comment. Glad to know I'm not the only person out there who tries for density.

  • noneck

    Customer service is seriously lacking lately, especially with Dell. I had a storage controller go bad on a brand-new 1950 (both a diagnostic error and a BIOS error). A week later we had every part shipped BUT a new controller, at which point I requested a new server. They had me jump through some diagnostics and finally sent a rep out today to install and test a new mobo. He just left, claiming that the "PERC storage controller on the box is in fact dead!" Now I have to wait another 3 days for a part that I had originally requested be shipped out in the first place!

  • James

    I've always worked in a "closed-gap" environment; however, I'm fond of the aesthetics of a 1U gap. I work in two data centers, both with blades and traditional rackmounts, and the environmental difference is minimal at best.

    I say, do what makes you feel good lol.

  • Aaron

    I leave a 1U gap for a couple of reasons:
    1.) We can't get dense enough to fully populate the rack due to watt/sqft restrictions.
    2.) The cable management arms and general cable management seem to work better with the gap. Fewer tangles.

    You shouldn't get a cooling benefit from the gap if your servers are properly designed, anyway.