This is the first of the Tech Field Day Sponsor Overviews
Unless you live under a very large rock, you’re probably familiar with Hewlett-Packard. They have long been one of the largest manufacturers of end-user and enterprise machines. It’s sort of fitting that I’m covering them first, because the GestaltIT Tech Field Day grew out of the HP Tech Day they hosted in 2009.
As it turns out, Monday and Tuesday of this week were the HP StorageWorks Tech Day, and several of my co-delegates were in attendance. Incidentally, if you’re on Twitter, you can follow along with the #HPStorageDay hashtag.
Let’s begin by looking at some of the technology on display at the storage day. Looking at the itinerary, we can definitely see some emphasis being placed on converged infrastructure, followed in the afternoon by the Enterprise Virtual Arrays (EVAs), the X9000, and the P2000 and P4000. I don’t know about you, but I don’t have any HP gear, and the arbitrary model numbers will take a little getting used to!
Converged Infrastructure. That’s a heck of a term, and a pretty ambiguous one. Fortunately, HP set up a Converged Infrastructure website, where we can figure out specifically what they’re talking about:
A Converged Infrastructure enables you to rebalance this ratio by realigning today’s traditional technology silos into adaptive pools that can be shared by any application, optimized and managed as a service.
By transitioning away from this product-centric approach that has created unyielding IT sprawl to a shared-service management model, you can accelerate standardization, reduce operational costs and accelerate business results.
I do have to wonder how hard they worked not to include the word “cloud” in that description.
From everything I can tell, they do essentially make a…sigh…cloud…out of your equipment. The underlying idea is that the Converged Infrastructure is really a layer of abstraction, which takes all of your assets and allows you to create “resource pools”. Although these resource pools are abstracted away from the specific hardware, it doesn’t sound as though just any old machines will do the trick:
HP Virtual Resource Pools are created from purpose-built systems able to create adaptive, shared capacity that can be combined, divided and repurposed to match any application demand faster and more efficiently.
Purpose-built sounds as though there’s a specific build configuration in mind, although given the preference for density in the datacenter, I’m sure the HP blades are included (especially given their place as the most CPU-dense servers listed in the HP server buying guide). Incidentally, if you’ve never worked with blades, Wikipedia has a good introduction; I switched most of my production machines to that form factor a couple of years ago, and I’ve been very happy. Devang Panchigar had some interesting thoughts when he attended the HP Blades Day at the beginning of March. (Boy, this X Day thing is going around, huh?)
The P2000 and P4000 are, comparatively, boring old SAN arrays. Boring to enterprise people, I mean. They’re relatively exciting to me, because I could imagine myself having one someday, as opposed to a Converged Infrastructure, which to me means “hey, all my stuff is Ethernet!”.
The P2000 compares pretty closely to my Clariion AX-4, and without a price comparison, I don’t see any features that get me excited. The P4000 is a bit bigger; it looks like it can be expanded to 60TB, which beats the low-end CX3. Again, though, it’s a SAN array.
The X9000 is a whole other story. Essentially, the X9000 is network-attached storage (NAS). I know what you’re thinking: “NAS is file-level, not block-level. I can’t do anything cool with it.” Well, look. There are still plenty of things you can’t do over NAS (such as run PostgreSQL, as I learned this week), but lots of things DO work over it, such as VMware ESXi mounting remote NFS shares as datastores.
The question of performance is a bit deeper. With any storage, there is a set of factors that you constantly balance.
First, there is the speed at which data can be read from and written to the drives themselves. This is mainly a function of the speed of the drives, the number of drives, and the RAID level at which the data is stored.
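As a rough back-of-the-envelope sketch of how those factors interact (the drive count, per-drive IOPS figure, and write penalties below are illustrative assumptions on my part, not HP specs), consider:

```python
def raid_write_penalty(level):
    """Commonly cited write penalties: each host write turns into
    this many disk operations on the back end."""
    return {0: 1, 1: 2, 5: 4, 6: 6, 10: 2}[level]

def usable_iops(drives, iops_per_drive, level, read_fraction):
    """Rough effective IOPS for a mixed read/write workload.
    Reads cost one disk op; writes cost the RAID write penalty."""
    raw = drives * iops_per_drive
    cost = read_fraction * 1 + (1 - read_fraction) * raid_write_penalty(level)
    return raw / cost

# Example: twelve 15k drives (~180 IOPS each) in RAID 5,
# with a 70/30 read/write mix.
print(round(usable_iops(12, 180, 5, 0.7)))  # 1137
```

Double the drives and you roughly double the back-end performance; switch RAID 5 for RAID 10 and the write penalty drops, which is exactly the kind of balancing act I mean.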
Then there is the speed at which you can move data to and from the array as a whole: the communication channel between you and the storage. In the case of Directly Attached Storage (DAS), you can use Ultra320 SCSI to get a dedicated 2560Mb/s (320MB/s) channel between you and the array. You might think, “Oh, sure, but even pokey old Fibre Channel is up to 8Gb/s,” but the key word in the previous sentence is “dedicated”.
If you have an array attached via a SCSI bus, the data is going straight from the array to your computer (probably; work with me here). There’s no other computer sharing that bandwidth. In the case of FC, your array might have 8Gb/s cards. You might even bond two of them together for 16Gb/s. But how many computers are going to be vying for a piece of that pie? By segregating your data intelligently, you can reduce the load to the point where most things can access your data at a reasonable speed. With multiple arrays and controllers, you can even get access back up to speed, but that’s a lot of money invested compared to a relatively cheap external RAID array. Edit: To learn more about accessing disks for maximum performance, check out this great post by Stephen Foskett, titled Multipath: Active/Passive, Dual Active, and Active/Active.
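To make the dedicated-versus-shared point concrete, here’s a tiny best-case calculation (the link speeds and host counts are hypothetical, and real throughput will be lower due to protocol overhead and contention):

```python
def per_host_bandwidth_mbps(link_mbps, links, hosts):
    """Best-case megabits per second available to each host when
    `hosts` machines share `links` bonded links of `link_mbps` each.
    Ignores protocol overhead, so real-world numbers are lower."""
    return link_mbps * links / hosts

# Dedicated Ultra320 SCSI: one host owns the whole 2560 Mb/s bus.
print(per_host_bandwidth_mbps(2560, 1, 1))   # 2560.0

# Two bonded 8 Gb/s FC ports shared among ten hosts: 1600 Mb/s each,
# which is less than the "slow" dedicated SCSI bus.
print(per_host_bandwidth_mbps(8000, 2, 10))  # 1600.0
```

The faster fabric only wins if you keep the number of hosts per link low, which is why segregating data across arrays and controllers matters.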
Fortunately, 10Gb/s Ethernet is here to speed up access to remote storage. This higher-speed bus promises to make it faster, and maybe (eventually) cheaper, to access NAS (or iSCSI SAN) resources.
This 10GbE technology is what the X9000 uses. It needs to be teamed with a StorageWorks X9300 Network Storage Gateway (which, as far as I can tell, handles the CIFS/NFS portion of the show). Sure, you can only mount it via NFS and CIFS, but with a properly leveraged virtual infrastructure, that’s all you need.
One of the coolest parts is really only slightly mentioned on the X9000 page.
Q3. Do the X9000 products contain the software HP recently acquired from IBRIX, Inc.?
That made me curious. If it’s important enough to ask, why isn’t it explained? Or linked to? Or something? So I did some research. In July of 2009, HP bought IBRIX, a company that specialized in clustered filesystems. To quote Robin Harris:
With Polyserve (transactional cluster storage), LeftHand Networks (cluster block storage) and now IBRIX (scale out cluster file storage), HP owns most of the prime real estate in the cluster storage market.
Interesting. I’m really looking forward to seeing how this technology is put to use. If you want some further reading in the meantime, check out ibrix.com on the Internet Archive.