Tech Field Day Preview: HP

This is the first of the Tech Field Day Sponsor Overviews.

Unless you live under a very large rock, you’re probably familiar with Hewlett-Packard. They have long been one of the largest manufacturers of end-user and enterprise machines. It’s sort of fitting that I’m covering them first, because the Gestalt IT Tech Field Day grew from the HP Tech Day that they hosted in 2009.

As it turns out, Monday and Tuesday of this week were the HP StorageWorks Tech Day, and several of my co-delegates were in attendance. Incidentally, if you’re on Twitter, you can see the tweets tagged #HPStorageDay.

Let’s begin by looking at some of the technology on display at the storage day. Looking at the itinerary, we can definitely see some emphasis being placed on converged infrastructure, followed in the afternoon by a focus on the Enterprise Virtual Arrays (EVAs), and then on the X9000, along with the P2000 and the P4000. I don’t know about you, but I don’t have any HP gear, and the arbitrary numbers will take a little getting used to!

Converged Infrastructure. That’s a heck of a term, and pretty ambiguous. Fortunately, HP set up a Converged Infrastructure website, where we can figure out what they’re talking about, specifically.

A Converged Infrastructure enables you to rebalance this ratio by realigning today’s traditional technology silos into adaptive pools that can be shared by any application, optimized and managed as a service.

By transitioning away from this product-centric approach that has created unyielding IT sprawl to a shared-service management model, you can accelerate standardization, reduce operational costs and accelerate business results.

I do have to wonder how hard they worked not to include the word “cloud” in that description.

From everything I can tell, they do essentially make a…sigh…cloud…out of your equipment. The underlying idea is that the Converged Infrastructure is really a layer of abstraction, which takes all of your assets and allows you to create “resource pools”. Although these resource pools are abstracted away from the specific hardware, it doesn’t sound as though just any old machines will do the trick:

HP Virtual Resource Pools are created from purpose-built systems able to create adaptive, shared capacity that can be combined, divided and repurposed to match any application demand faster and more efficiently.

Purpose-built sounds as though there’s a specific build configuration in mind, although given the preference for density in the datacenter, I’m sure that the HP Blades are included (especially given their place as the most CPU-dense servers listed in the HP server buying guide). Incidentally, if you’ve never worked with blades, Wikipedia has a good introduction. I switched most of my production machines to that form factor a couple of years ago, and I’ve been very happy. Devang Panchigar had some interesting thoughts when he attended the HP Blades Day at the beginning of March. (Boy, this X Day thing is going around, huh?)

The P2000 and P4000 are, comparatively, boring old SAN arrays. I mean boring to enterprise people, of course. They’re relatively exciting to me, because I could imagine myself having one someday, as opposed to a Converged Infrastructure, which to me means “hey, all my stuff is Ethernet!”.

The P2000 compares pretty closely to my CLARiiON AX-4, and without a price comparison, I don’t see any features that get me excited. The P4000 is a bit bigger and looks like it can be expanded to 60TB, which beats the low-end CX3. Again, it’s a SAN array.

The X9000 is a whole other story. Essentially, the X9000 is network-attached storage (NAS). I know what you’re thinking: “NAS is file level, not block level. I can’t do anything cool with it.” Well, look. There are still lots of things you can’t do over NAS (such as run PostgreSQL, which I learned this week), but lots of things DO work over it, such as VMware ESXi mounting remote NFS shares as datastores.

The question of performance is a bit deeper. With any storage, there are a few factors you’re constantly balancing.

First, there is the speed at which the hard drives themselves can deliver the data. This is mainly a function of the speed of the drives, the number of drives, and the RAID level at which the data is stored.
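To show how those three variables interact, here’s a rough rule-of-thumb sketch. The per-drive IOPS figure and the RAID write penalties are common ballpark assumptions of mine, not HP (or any vendor’s) published numbers:

```python
# Rough rule-of-thumb estimate of an array's effective IOPS.
# Write penalty = back-end operations generated per host write.
RAID_WRITE_PENALTY = {0: 1, 10: 2, 5: 4, 6: 6}

def estimate_iops(drives, iops_per_drive, raid_level, read_fraction):
    """Estimate effective IOPS for a simple read/write mix."""
    raw = drives * iops_per_drive
    penalty = RAID_WRITE_PENALTY[raid_level]
    write_fraction = 1.0 - read_fraction
    # Reads cost one back-end op; each write costs `penalty` ops.
    return raw / (read_fraction + write_fraction * penalty)

# 12 drives at an assumed ~175 IOPS each, RAID 5, 70% reads:
print(round(estimate_iops(12, 175, 5, 0.7)))  # 1105
```

Swap RAID 5 for RAID 10 in the same call and the estimate jumps noticeably, which is exactly the kind of trade-off the paragraph above is describing.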

Then there is the speed at which you can retrieve information from the collective array: the communication channel between you and the storage. In the case of Direct-Attached Storage (DAS), you can use Ultra320 SCSI to obtain a dedicated 2560Mb/s channel between you and the array. You might think, “oh, sure, but even pokey old Fibre Channel is up to 8Gb/s,” but the key word in the previous sentence is “dedicated”.
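As a sanity check on that 2560Mb/s figure (assuming the usual 8 bits per byte and ignoring protocol overhead), the conversion from Ultra320’s 320 MB/s works out like this:

```python
# Ultra320 SCSI is named for its 320 MB/s transfer rate.
# Converting megabytes/s to megabits/s (8 bits per byte, overhead ignored):
ultra320_MBps = 320
ultra320_Mbps = ultra320_MBps * 8
print(ultra320_Mbps)  # 2560
```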

If you have an array attached via SCSI bus, the data is going from the array to your computer (probably; work with me here). There’s no other computer sharing that bandwidth. In the case of FC, your array might have 8Gb/s cards. You might even bond two of them together for 16Gb/s. But how many computers are going to be vying for a piece of that pie? By segregating your data intelligently, you can reduce the load to the point where most things can access your data at a reasonable speed. With multiple arrays and controllers, you can even get access back up to speed, but that’s a lot of money invested, compared to a relatively cheap external RAID array. Edit: To learn more about accessing disks for maximum performance, check out this great post by Stephen Foskett, titled “Multipath: Active/Passive, Dual Active, and Active/Active”.
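To make the “dedicated” point concrete, here’s a back-of-the-envelope sketch. The host counts and the even-split assumption are mine, purely for illustration; real contention is far messier:

```python
# A dedicated Ultra320 bus vs. a shared 8Gb/s Fibre Channel port.
# Assumes a naive even split among hosts competing for the port.
dedicated_scsi_Mbps = 320 * 8        # 2560 Mb/s, all yours
fc_port_Mbps = 8_000                 # one 8Gb/s FC port on the array

for hosts in (1, 4, 16):
    per_host_Mbps = fc_port_Mbps / hosts
    print(f"{hosts:2d} hosts -> {per_host_Mbps:7.1f} Mb/s per host")
```

With only four hosts hammering that one port, each host’s share already drops below what the humble dedicated SCSI bus delivers.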

Fortunately, 10Gb/s Ethernet is here to speed up access to remote storage. This higher-speed bus promises to make it faster and maybe (eventually) cheaper to access NAS (or iSCSI SAN) resources.

This 10GbE technology is what the X9000 uses. It needs to be teamed up with a StorageWorks X9300 Network Storage Gateway (which, as far as I can tell, is the CIFS/NFS portion of the show). Sure, you can only mount it via NFS and CIFS, but with a properly leveraged virtual infrastructure, that’s all you need.

One of the coolest parts is really only slightly mentioned on the X9000 page.

Q3. Do the X9000 products contain the software HP recently acquired from IBRIX, Inc.?
A3. Yes.

That made me curious. If it’s important enough to ask, why isn’t it explained? Or linked to? Or something? So I did some research. In July of 2009, HP bought IBRIX, a company that specialized in cluster filesystems. To quote Robin Harris:

With Polyserve (transactional cluster storage), LeftHand Networks (cluster block storage) and now IBRIX (scale out cluster file storage), HP owns most of the prime real estate in the cluster storage market.

Interesting. I’m really looking forward to seeing how this technology is put to use. If you want some more interesting reading in the meantime, the Internet Archive is worth checking out.

Tech Field Day Preview / Overview

Yesterday, I mentioned the Tech Field Day: Boston trip, and how I wasn’t an expert in the fields we were going to cover. Once I decided that I was going to go through with it and take part, I decided that I would need to familiarize myself with the technology that I would be seeing, so as not to make a fool of myself (or at least, to make a lesser fool of myself, however it turns out). It struck me that, at the same time, I can do my readers a service by introducing them to hardware that they have probably not come across yet (although there are a *lot* of you out there, and from some big places. If you’ve got experience with this stuff, please chime in!).

I’m going to do a “technology preview” kind of thing for each of the companies that are sponsoring the event. I don’t know for certain which exact technologies and products they’ll be showing off, but I can make some educated guesses, and if I miss the mark entirely, that’ll be part of the fun.

As a standard disclaimer, I need to say: I am not being paid by these companies. I am taking part in a group of independent bloggers who are being transported to Boston, MA for a few days, given lodging, and shuttled around to various businesses to see products and technologies, and then to discuss those things with the people who were responsible for them.

I am receiving no financial remuneration, although in the past, Tech Field Day delegates have received things like bags, pens, shirts, and a coupon for a discount on future purchases.

We are all attending with the agreement that we are free to write whatever we like, with respect to content, and that we will be free to speak our minds. I will not be asked to sign any nondisclosure agreements (NDAs), although I will respect any requests by the manufacturers to hold information back until an agreed-upon date (commonly known as an embargo date).

The first of these preview posts should be coming online in a few hours. I hope you enjoy them as much as I enjoyed learning. Please comment with any questions you’ve got. I’m happy to answer anything.

Small Infrastructure, meet the Enterprise

I mentioned back in February that I was taking part in this year’s Tech Field Day: Boston. It’s coming up next week, and I’m preparing my research on some of the stuff we’ll be seeing.

I want to be up front with everyone. I probably have no business being there. If you click the above link and go to the delegate list, you will see a Who’s Who of experts in enterprise storage, networking, and virtualization. And then there’s me.

I have no deep knowledge of any of those. I’m not a storage, network, or virtualization admin. I nearly declined the offer, because I wasn’t sure that I could offer anything.

I thought about it and discussed things with a friend of mine, who helped me realize that I’m not a storage, network, or virtualization admin; I’m ALL of them. No, I’m not an expert, but I do all of those things.

On this blog, one of my goals is to help make small infrastructures as reliable as large infrastructures, using techniques as similar as possible, and spreading knowledge of enterprise administrative practices among those of us without enterprise-sized networks. What better place to further those goals than in a situation entirely geared toward large networks?

Also, as my friend Andy mentioned to me, I’m duty-bound to represent small infrastructures (i.e. you who are reading this) to large vendors when I have the chance. And this is the chance.