I commented on the article, but after thinking it over, I suspected some of my readers here would be interested as well. As I mention there, overallocation of resources is nothing new, and it's not a matter of subterfuge so much as statistics. It's the same math that determines whether a product gets recalled:
amount of payout × number of payees vs. cost of the recall
If the number on the left is bigger, then the company does the recall.
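To make that math concrete, here's a back-of-the-napkin sketch in Python. All of the dollar figures are invented for illustration; swap in real actuarial numbers and the comparison works the same way.

```python
# A minimal sketch of the recall math above -- every figure here is made up.
payout_per_claim = 50_000     # expected payout for each claim actually filed
expected_claims = 1_200       # number of payees expected to file a claim
recall_cost = 75_000_000      # cost of recalling the product outright

expected_payouts = payout_per_claim * expected_claims

# If the expected payouts outweigh the recall, the recall "wins".
if expected_payouts > recall_cost:
    print("Do the recall")
else:
    print("Pay the claims as they come")
```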
It may be "industry practice" in nearly every industry, but that doesn't make it "right". IT is different in a lot of ways, and at certain times there are very important things relying on the systems we admin. My friend Tom, who used to admin at a hospital, knows exactly what I mean.
Storagebod's article does highlight some of the dangers, not necessarily in enterprise storage itself, but in the way we use it. Here's the text of the comment I left there:
"Just a Storage Guy" (another commenter) was right when he said that oversubscription happens all over, not just in the datacenter. How many times have we seen service providers oversubscribe network links, airlines overbook planes, and Ticketmaster oversell venues?
I think it has become less a problem of planning and more a problem of statistics. In the analog world, are the benefits of oversubscription such that it's financially in our best interest to continue these practices, or will the backlash from consumers outweigh them?
In the digital world, what is the statistical likelihood of the "perfect storm" happening, and most importantly, how does that probability stack up against our guaranteed uptime requirements? Of course, IT is the only field I know of where one-in-a-million occurrences happen every day.
In the end, it all boils down to this: “Risk Management” is not “Risk Elimination”.
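To put some rough numbers on that "perfect storm" question, here's a quick sketch. Every figure in it is invented, and it assumes subscribers peak independently (real workloads are correlated, so treat the result as a floor), but it shows how the overcommit probability has to be weighed against the uptime we've promised.

```python
# A rough sketch of the "perfect storm" math for an oversubscribed resource.
# Assumes each of n subscribers independently hits peak with probability p at
# any given moment -- real-world peaks tend to be correlated, so the true risk
# is usually higher than this.
from math import comb

n = 100          # subscribers sold against the resource
p = 0.05         # chance any one subscriber is at peak at a given instant
capacity = 10    # simultaneous peaks the resource can actually absorb

# P(more than `capacity` subscribers peak at once) -- binomial tail
p_storm = sum(
    comb(n, k) * p**k * (1 - p) ** (n - k)
    for k in range(capacity + 1, n + 1)
)

# "Five nines" allows roughly 5.26 minutes of downtime per year,
# i.e. an unavailable fraction of 0.00001.
allowed_fraction = 1 - 0.99999

print(f"P(perfect storm) ~= {p_storm:.6f}")
print(f"SLA allows an unavailable fraction of {allowed_fraction:.6f}")
print("Overcommit fits the SLA" if p_storm < allowed_fraction
      else "Overcommit blows the SLA")
```

With those made-up numbers the overload probability dwarfs the five-nines budget, which is exactly the kind of mismatch worth checking before signing off on an overcommit ratio.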
Please share your thoughts (and practices, if possible). I’m interested to see where everyone stands.