FCoE Discussion

Date: September 24, 2009

There's a lot of discussion going on in the blogosphere about storage technologies. Today, the subject is Fibre Channel vs. Fibre Channel over Ethernet (FCoE).

Technologically, they use the same Fibre Channel protocol to transfer data, but FCoE encapsulates it in (comparatively) cheaper Ethernet frames for transport. Of course, "cheaper" is very much relative: the current discussion is about 10Gb Ethernet, and that's a LOT of money.
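To make the encapsulation concrete, here's a rough sketch (in Python, purely illustrative) of what an FCoE frame looks like on the wire: a normal Ethernet header whose EtherType is 0x8906, with the Fibre Channel frame riding as the payload. The FCoE header details are simplified away, and the payload here is a placeholder, not a valid FC frame.

    import struct

    FCOE_ETHERTYPE = 0x8906  # IEEE-assigned EtherType for FCoE

    def fcoe_frame(dst_mac: bytes, src_mac: bytes, fc_frame: bytes) -> bytes:
        """Wrap a Fibre Channel frame in a (simplified) FCoE Ethernet frame.

        Real FCoE adds a small FCoE header (version, SOF/EOF delimiters)
        and padding around the FC frame; that detail is omitted here. The
        point is simply that the FC frame rides directly inside an Ethernet
        payload, with no TCP/IP layer in between.
        """
        eth_header = struct.pack("!6s6sH", dst_mac, src_mac, FCOE_ETHERTYPE)
        return eth_header + fc_frame

    # Placeholder values, for illustration only
    frame = fcoe_frame(b"\x01\x10\x18\x01\x00\x00",   # example destination MAC
                       b"\x00\x0c\x29\xaa\xbb\xcc",   # example source MAC
                       b"\x00" * 64)                  # stand-in for an FC frame
    print(len(frame), "bytes on the wire")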

The primary driver behind the FCoE movement is that Ethernet is actually faster than Fibre Channel, whose speed record currently stands at 8Gb/s (at least, that's the fastest switch I've found).
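The raw line rates don't tell the whole story, because the two links use different encodings. Here's a back-of-the-envelope comparison (illustrative numbers only, ignoring framing and protocol overhead):

    # Rough comparison of usable bit rates (encoding overhead only;
    # framing and protocol overhead are ignored).

    # 8G Fibre Channel: ~8.5 GBaud line rate with 8b/10b encoding
    fc_8g   = 8.5e9 * (8 / 10)       # ~6.8 Gb/s of payload bits

    # 10 Gigabit Ethernet: ~10.3125 GBaud line rate with 64b/66b encoding
    eth_10g = 10.3125e9 * (64 / 66)  # ~10.0 Gb/s of payload bits

    print(f"8GFC : {fc_8g / 1e9:.1f} Gb/s (~{fc_8g / 8 / 1e6:.0f} MB/s per direction)")
    print(f"10GbE: {eth_10g / 1e9:.1f} Gb/s (~{eth_10g / 8 / 1e6:.0f} MB/s per direction)")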

I was going to link to the big discussions going on, but Chuck's Blog does a good job of it.

I maintain my stance that regardless of which transport protocol we use to get the data where it's going, in a couple of years we'll be running it over optical fiber. It's the only long-term answer, because it offers nearly unlimited throughput.

If you're wondering what all of this is about, see the Introduction to Enterprise Storage.

  • David Magda

    When the storage vendors talk about FCoE, they're generally not talking about on-board NICs: they're talking about Converged Network Adapters (CNAs).

If you have 10 GigE, why not just go with iSCSI (or NFS)? If you have to buy a CNA, a special switch, and a special gateway to your storage, where are the savings supposed to occur? You don't have to string fibre, great; but other than that, it doesn't make sense to me.

When servers start shipping with 10 GigE standard, I doubt it will be with these special CNAs.

And if you really want bandwidth (and low latency), you can get InfiniBand adapters that can go up to 96 Gb/s right now. They do IP just fine, so you can run any standard protocol over them (they're often used in HPC, and also in Oracle/Sun's new Exadata v2 product).

  • http://www.standalone-sysadmin.com Matt Simmons

    I'm not sure what makes people select FCoE rather than iSCSI, but it might be that iSCSI seems to have a stigma among large-scale storage people. Maybe it doesn't, but I've sort of got that impression.

Personally, I think AoE is perfectly acceptable for small-scale (and probably medium-scale) networks. Its weakness is that the hardware arrays are only produced by Coraid, even though the software target code is freely available and in the kernel.

Eh. I don't have big enough storage arrays to worry about it. But a lot of people do.

  • http://blogstu.wordpress.com/ Stuart Miniman

For small configurations, there are lots of options (iSCSI is doing quite well). It's not about the speeds and feeds, but about the impact on an existing customer environment. InfiniBand is a nice technology - high bandwidth, good pricing, super-low latency - but it requires new adapters, new drivers (which require qualification of your stack), new cabling, and new tools to manage. What FCoE does is allow the customers that have a large investment in FC, and comfort with its reliability, scalability, and manageability, to move towards a total Ethernet environment. If you don't have FC, you're likely not going to go to FCoE. All indications are that Ethernet is going everywhere - both iSCSI and FCoE allow customers to move towards a completely Ethernet environment. FCoE does require specialized hardware (new adapters and switches), and these are more expensive today than "standard" Ethernet, but give it time. I would say that it is FC AND FCoE (not vs.).

  • http://www.standalone-sysadmin.com Matt Simmons

Thanks, Stuart. I suppose it's an entirely different viewpoint when you scale storage up to the level that a lot of people have. I appreciate the comment, and I agree. Eventually, we'll all wonder what we did on the server side without 10GbE.

  • http://topher.livejournal.com/ Christopher Cashell

I have yet to see a compelling argument for why I should care about FCoE. I've made heavy use of Fibre Channel, and in some situations I've made use of iSCSI. I was originally rather skeptical about iSCSI, but it pleasantly surprised me with its performance and stability.

Regardless, I keep seeing claims that FCoE will allow us to 'leverage our existing experience and technology' from Fibre Channel, but considering that FCoE is more similar to iSCSI than it is to real Fibre Channel, I'm just not buying it. The fact that it's going to require entirely new hardware (Converged Network Adapters, which match neither standard Ethernet nor Fibre Channel) means I'm not buying it. The fact that it requires extensions to the Ethernet protocol means I'm not buying it. The fact that the standards aren't even finalized yet means I'm not buying it.

So, as someone who's made good use of Fibre Channel and made limited use of iSCSI, I have to say I haven't been able to come up with any case where I would use FCoE, nor has anyone yet managed to offer me a plausible scenario where it would make sense (given the already available alternatives).

  • http://www.xsigo.com Jon Toor

Virtual I/O (FCoE is one transport that enables virtual I/O) helps in dynamic and/or somewhat complex environments.

That is, it is useful when you need more than one or two physical connections to each server, or you need the server connections to be dynamic. Virtual I/O makes it easy to respond when, for example, a VM is moved to a server but the new server is not connected to the needed physical networks. In that case, you can add the needed NIC or HBA to the live server. No need to take it down and disrupt the other VMs running on it.

    Shops running a lot of VMs per server routinely run 8 to 12 cables to each server, which is expensive and error prone. Virtual I/O cleans that up, which is why VMware used virtual I/O in all their servers at VMworld.

    FCoE is just one way of doing this. Xsigo (http://www.xsigo.com) powered all the servers in the VMware booth at VMworld. And other vendors showed gear based on PCI-Express.

  • Pingback: Planet Network Management Highlights – Week 39