At last month's Storage Networking World, SAN vendors demonstrated a technology that in the short and medium term will allow storage traffic to share the same Ethernet pipes as other data traffic, and in the long term could entirely replace Fibre Channel.
The protocol is called Fibre Channel over Ethernet. First publicised only earlier this year, it has already received full support from the key players in the storage networking industry.
The first standard for the protocol is slated for completion in the second half of next year. The dominant SAN switch makers Brocade and Cisco have already said that they will ship FCoE switching gear at about the same time, as have QLogic and Emulex, the duopoly that controls the market for SAN server adapter cards.
In the short term, the convergence of storage and messaging or server clustering data traffic onto shared links will lower both capital and operational costs for customers, and will eliminate cabling headaches in crowded data centres.
In the long term, FCoE has the potential to entirely replace Fibre Channel. But FCoE will only be worth using when running over a future lossless version of Ethernet, and the rate of FCoE take-up will depend on how fast this new Ethernet technology is developed, and how much it costs.
FCoE nuts and bolts
The Storage Networking Industry Association says that FCoE should be thought of as relocating Fibre Channel onto Ethernet, leaving it unchanged as Fibre Channel.
That means that FCoE links should connect seamlessly to existing Fibre Channel SANs, being handled by existing management software and supporting existing SAN functions such as logical partitioning and security features.
FCoE encapsulates Fibre Channel frames inside Ethernet frames.
It can run on standard Ethernet, but the only practical way to use it will be to run it on a forthcoming lossless version of Ethernet (see 'Not your father's Ethernet', below).
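In outline, the encapsulation amounts to prefixing each Fibre Channel frame with a standard Ethernet header carrying the EtherType assigned to FCoE (0x8906), plus a small FCoE header and trailer. A minimal Python sketch of the idea follows; the Ethernet header is real, but the FCoE header and trailer here are simplified placeholders, since the exact frame layout was still a draft at the time of writing:

```python
import struct

# EtherType assigned to FCoE traffic
FCOE_ETHERTYPE = 0x8906

def encapsulate(fc_frame: bytes, src_mac: bytes, dst_mac: bytes) -> bytes:
    """Wrap a complete Fibre Channel frame in an Ethernet frame (simplified)."""
    # Standard 14-byte Ethernet header: destination MAC, source MAC, EtherType
    eth_header = dst_mac + src_mac + struct.pack("!H", FCOE_ETHERTYPE)
    # Placeholder FCoE header: version, reserved bytes and start-of-frame code
    fcoe_header = bytes(14)
    # Placeholder FCoE trailer: end-of-frame code and reserved bytes
    fcoe_trailer = bytes(4)
    return eth_header + fcoe_header + fc_frame + fcoe_trailer

frame = encapsulate(b"\x00" * 36, b"\xaa" * 6, b"\xbb" * 6)
```

Because the Fibre Channel frame travels intact inside the Ethernet payload, the switch at the far edge only has to strip the wrapper to recover a native Fibre Channel frame - hence the claim that FCoE leaves Fibre Channel unchanged.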
To use FCoE as an extension to existing Fibre Channel storage networks, IT staff will need to deploy:
* FCoE adapter cards - in servers, to connect them to Ethernet links. Unlike iSCSI, FCoE will not require new software drivers to be added to server operating systems, because the FCoE adapters will appear identical to Fibre Channel adapter cards or HBAs.
* FCoE-friendly Ethernet switches - either combined Fibre Channel-and-Ethernet devices, or Ethernet-only devices. The former will be the first to ship, and will convert FCoE traffic to Fibre Channel traffic and vice versa; the Storage Networking Industry Association predicts that several vendors will offer such combined switches.
Brocade has already promised to ship FCoE-capable Ethernet switch blades for its directors.
The immediate advantage of FCoE for customers will be in the hardware and management cost savings delivered by the convergence of storage traffic and messaging and server clustering traffic onto one set of pipes:
* Dual-purpose adapter cards - vendors say that these high-speed 10 Gbps devices will cost around the same as current Fibre Channel adapter cards. But they will do double duty, carrying both Ethernet data traffic and FCoE storage traffic, halving the number of adapter cards and links that must be managed for each server.
* Dual-purpose cabling - cabling might appear mundane, but it can be an expensive and influential part of a data centre. One constraint on data centre growth can be lack of space inside cable conduits, or beneath raised floors.
* Cheaper Ethernet switching - because of the production volumes involved, Ethernet switch ports have always been much cheaper than their Fibre Channel equivalents.
A challenge for the Fibre Channel throne
Although it will first be used to augment existing Fibre Channel networks, in the long term FCoE has the potential to entirely replace Fibre Channel.
FCoE can travel across Ethernet networks with no Fibre Channel switches involved at all - on two conditions. The first, according to the Ethernet Alliance, is that there are FCoE-enabled Ethernet switches at the beginning and end of the journey, at the edges of the Ethernet network. The second is that disk array and tape library vendors fit FCoE ports to their products.
Obviously the latter will not happen until vendors consider it worth the effort. It took time for them to add iSCSI ports to their gear, and it will take time for them to add FCoE ports.
One of last month's demonstrations at SNW featured a disk array made by Network Appliance that was connected directly to a server via an FCoE link, with no switch of any sort involved.
NetApp boxes are unusual in that they use removable adapter cards as front-end ports. For the demonstration, NetApp used a prototype FCoE server adapter modified to work as a target rather than an initiator.
FCoE and the rest of the protocol pack
One obvious question about FCoE is what makes it different from the alphabet soup of other Ethernet-friendly storage protocols, such as FCIP and iSCSI.
Alongside the ability to blend in seamlessly with existing Fibre Channel networks, FCoE outplays those other protocols by offering a low level of latency that makes it suitable for high-end storage applications.
It does that by running directly on Ethernet, so side-stepping the TCP/IP transport and network protocols that introduce latency.
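The layering difference can be summed up simply: iSCSI carries SCSI over TCP/IP over Ethernet, while FCoE maps Fibre Channel frames straight onto Ethernet. A minimal illustration (layer names simplified for clarity):

```python
# Protocol layering, top to bottom (simplified)
ISCSI_STACK = ["SCSI", "iSCSI", "TCP", "IP", "Ethernet"]
FCOE_STACK = ["SCSI", "Fibre Channel", "FCoE", "Ethernet"]

# FCoE's latency advantage in the data centre comes from the layers
# it leaves out: no TCP segmentation or retransmission, no IP routing.
layers_skipped = [layer for layer in ISCSI_STACK if layer not in FCOE_STACK]
```

The layers FCoE omits are exactly the ones that make iSCSI routable across wide-area IP networks - which is why, as SNIA notes below, FCoE's advantage inside the data centre becomes a limitation outside it.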
But SNIA stresses that while FCoE's independence from TCP/IP makes it suitable for data centre networking, it makes it much less suitable for wide-area networking.
That means that FCoE is not going to replace FCIP, which was created specifically for use in wide-area links between storage networks.
Neither is FCoE going to replace iSCSI, which can also be used for wide-area storage networking. Elsewhere, in local storage networks, iSCSI will be able to hold its own against FCoE wherever customers need to run low-cost storage networks, and do not need data centre performance.
A protocol hierarchy - from an FC vendor's perspective
This is how Brocade says it expects iSCSI, FCoE, and Fibre Channel to be used:
* iSCSI - a low-cost option for small and mid-sized businesses, whose servers can use software-based iSCSI initiators talking via commodity Ethernet adapter cards and Ethernet switches to low-cost iSCSI storage.
* FCoE - a good fit for servers that can benefit from the convergence of networking traffic - server clustering, messaging and storage traffic - onto a 10 Gbps Ethernet common interface in order to reduce networking costs.
* Fibre Channel - according to Brocade, this is the right solution for applications that generate high volumes of storage traffic.
Bear in mind that Brocade's placement of Fibre Channel at the top of the stack might reflect the fact that the company's core business is the manufacture of Fibre Channel switches and directors.
Although Brocade claims that Fibre Channel will beat off FCoE wherever high-throughput links are needed, there is no huge difference between Fibre Channel and FCoE throughput. FCoE on a 10 Gbps Ethernet link will easily exceed the throughput of an 8 Gbps Fibre Channel pipe, according to the Ethernet Alliance.
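The arithmetic behind that claim comes down to line coding. 8 Gbps Fibre Channel uses 8b/10b coding on an 8.5 GBaud line, while 10 GbE uses the more efficient 64b/66b coding on a 10.3125 GBaud line, so the usable data rates compare as follows (a back-of-envelope sketch):

```python
# Usable data rate = line rate x coding efficiency
fc_8g = 8.5e9 * 8 / 10         # 8GFC: 8.5 GBaud, 8b/10b coding -> 6.8 Gbps
eth_10g = 10.3125e9 * 64 / 66  # 10GbE: 10.3125 GBaud, 64b/66b -> 10.0 Gbps

advantage = eth_10g / fc_8g    # 10 GbE carries roughly 47% more payload
```

On raw payload capacity, then, a 10 GbE link has headroom to spare over an 8 Gbps Fibre Channel link - which is the basis of the Ethernet Alliance's claim.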
But Brocade argues that when such large volumes of storage traffic are being carried, there will be no advantage in using FCoE pipes.
That is because there will be no excess capacity to carry any other type of traffic, and therefore there will be no shared-cabling or convergence benefits. Customers might just as well stick with their existing 8 Gbps or 10 Gbps Fibre Channel links, Brocade says.
Of course if they do that, they will be continuing to maintain two types of network technology.
Not your father's Ethernet
Technically, it will be possible to run FCoE on standard Ethernet.
But it would not be sensible to do so, because high-end storage applications are sensitive to latency, and standard Ethernet tends to introduce latencies, beyond those of TCP/IP, by dropping and then having to resend data packets.
So FCoE will be carried on a lossless version of Ethernet, for which the IEEE is currently developing a standard. This version of Ethernet has yet to be officially named, but is variously called Lossless Ethernet, Data Center Ethernet or DCE, and Converged Enhanced Ethernet.
According to the Ethernet Alliance marketing and educational organisation, DCE gear is already being prototyped, and will ship in a year to 18 months' time, so it is on a schedule running only a little later than FCoE.
It is not just FCoE that is driving the creation of DCE, but the potential for a wider convergence that would include applications such as voice-over-IP and IP-based television. As the Alliance said: "Suddenly the bucket is much larger."
How much will DCE cost compared to existing Ethernet? Neither Cisco nor the Ethernet Alliance would say, but the Alliance predicted that within five to 10 years' time DCE will be the new standard Ethernet.
In the interim, customers will mostly only need to install DCE gear at the edges of networks, the Alliance said. That is because existing high-end 10 Gbit Ethernet switches can already read traffic management tags attached to Ethernet frames.
But the traffic management that is already a feature of high-end Ethernet gear is not sufficient to deliver the lossless throughput needed for high-end storage traffic, according to both Cisco and the Ethernet Alliance. The latter says that the 'pause' function already common in Ethernet gear will not be a practical solution.
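The trouble with the existing pause function is that it halts every class of traffic on a link at once. The approach the lossless Ethernet work is taking instead is per-priority flow control: a congested receiver pauses just the storage traffic class while other classes keep flowing. A toy model of the idea (the class names here are illustrative, not drawn from any standard):

```python
class LosslessPort:
    """Toy model of per-priority flow control on a converged link."""

    PRIORITIES = ("storage", "clustering", "messaging")

    def __init__(self):
        self.paused = {p: False for p in self.PRIORITIES}

    def pause(self, priority):
        # A congested receiver pauses only one traffic class...
        self.paused[priority] = True

    def resume(self, priority):
        self.paused[priority] = False

    def can_send(self, priority):
        # ...so the other classes keep flowing on the same link.
        return not self.paused[priority]

port = LosslessPort()
port.pause("storage")  # storage frames held back; messaging unaffected
```

Pausing the storage class before the receiver's buffers overflow is what makes the link lossless for storage traffic without stalling the messaging and clustering traffic sharing it.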
Why not carry iSCSI on lossless Ethernet? Because iSCSI is wedded to TCP/IP, which introduces latency, and because, according to SNIA, iSCSI needs 'heavyweight gateways' to terminate and re-initiate SCSI sessions, unlike the 'lightweight frame mappers' needed to wrap and unwrap FCoE in Ethernet.
Waiting for 10 Gigabit Ethernet
FCoE will only make sense running over 10 Gbps Ethernet. 10 GbE has been shipping since around 2003, but it has been expensive and is still used by only a minority of customers, as the Ethernet Alliance admits. But the Alliance insists that there are signs that wider take-up will begin next year.
When or if that happens, increasing sales volumes will help drive down 10 GbE costs, accelerating take-up. Another spur will be the effects of last year's ratification of the 10GBase-T standard, which allows 10 GbE to run over familiar twisted-pair copper cabling at useful distances, without expensive optical adapters.
But do not hold your breath. The Ethernet Alliance itself describes 10 GbE on copper as a 'bleeding edge' technology, and acknowledges that 10 GbE has been hovering on the 'cusp' of volume take-up for a couple of years now.
SAN vendors' universal and enthusiastic support for FCoE underlines the strong prospects for the protocol. Nobody wants to be left out.
Fibre Channel's installed base will ensure that it will continue to be the protocol of choice for storage networking for some years yet, and the initial use of FCoE will only be as a means of augmenting existing Fibre Channel networks.
One factor affecting the speed of FCoE take-up will be the availability and cost of DCE and 10 GbE. DCE is set to ship later than FCoE and the rate at which its prices and those of 10 GbE fall will be very important. As Brocade has pointed out, just because FCoE has Ethernet in its name does not mean that it is going to run on cheap commodity equipment.
Another reason why Fibre Channel is not going to disappear overnight is that while FCoE offers to cut capital and operational costs, it does not actually offer any performance or functional advantages.
Nevertheless, there is a strong potential for FCoE to displace Fibre Channel even at the core of storage networks.
A very similar story was told five years ago, when iSCSI was widely tipped as a successor to Fibre Channel. But unlike FCoE, iSCSI never had the potential to slot in seamlessly with Fibre Channel, or to offer the low latencies needed in high-end storage networks.