40G and 100G Ethernet: First uses of the high-speed interfaces
Sunday, September 20, 2009 at 10:59PM
Roy Rubenstein in 100G, 40G, IEEE Task Force, gazettabits, optical transceivers

 

Operators, enterprises and equipment vendors are all embracing 100 Gigabit technologies even though the standards will not be completed until June 2010.

Comcast and Verizon have said they will use 100Gbit/s transmission technology once it is available. Juniper Networks demonstrated a 100 Gigabit Ethernet (100GbE) interface on its T1600 core router in June, while in May Ciena announced it would supply 100Gbit/s transmission technology to NYSE Euronext to connect its data centres.

Ciena’s technology is for long-haul transmission, outside the remit of the IEEE P802.3ba Task Force’s work to define 40GbE and 100GbE interfaces. But the two are clearly linked: the emergence of the Ethernet interfaces will drive 100Gbit/s long-haul transmission.

ADVA Optical Networking foresees two applications for metro and long-haul 100Gbit/s transmission: carrying 100Gbit/s IP router links, and multiplexing 10Gbit/s streams into a 100Gbit/s lightpath. “We see both: for router and switch interfaces, and to improve fibre bandwidth,” says Klaus Grobe, principal engineer at ADVA Optical Networking.

The trigger for 40Gbit/s market adoption was the advent of OC-768 SONET/SDH 2km reach interfaces on IP core routers. In contrast, 40GbE and 100GbE interfaces will be used more broadly. As well as connecting IP routers and multiplexing operators’ traffic, the interfaces will be used across the data centre, to interconnect high-end switches and for high-performance computing.

The IEEE Task Force is specifying several 40GbE and 100GbE interface types: copper is used for very short reaches, while optics addresses reaches of 100m, 10km and 40km.

For 100m short-reach links, multimode fibre is used: four fibres at 10Gbit/s in each direction for 40GbE and ten fibres at 10Gbit/s in each direction for 100GbE interfaces. For 40GbE and 100GbE 10km long-reach links, and for 100GbE 40km extended reach, single-mode fibre is used. Here 4x10Gbit/s (40GbE) and 4x25Gbit/s (100GbE) are carried over a single fibre using wavelength division multiplexing (WDM).
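To make the lane arithmetic concrete, here is a minimal Python sketch (not from the article) that tabulates the variants described above as lane count times per-lane rate. The port-type names follow the P802.3ba draft’s naming conventions and are included only for illustration; the reaches and lane counts are those quoted in the text.

from collections import namedtuple

# Each variant described above: lanes x per-lane rate gives the aggregate rate.
Variant = namedtuple("Variant", "name reach medium lanes gbps_per_lane")

variants = [
    Variant("40GBASE-SR4",   "100m", "parallel multimode fibre",      4, 10),
    Variant("100GBASE-SR10", "100m", "parallel multimode fibre",     10, 10),
    Variant("40GBASE-LR4",   "10km", "single-mode fibre, 4-lane WDM", 4, 10),
    Variant("100GBASE-LR4",  "10km", "single-mode fibre, 4-lane WDM", 4, 25),
    Variant("100GBASE-ER4",  "40km", "single-mode fibre, 4-lane WDM", 4, 25),
]

for v in variants:
    aggregate = v.lanes * v.gbps_per_lane  # Gbit/s
    print(f"{v.name:14} {v.reach:>5}  {v.lanes:>2} x {v.gbps_per_lane}G = {aggregate}G  ({v.medium})")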

“Short reach optics at 100 Gigabit uses a 10x10 electrical interface that drives 10x10 optics,” says John D’Ambrosia, chair of the IEEE P802.3ba Task Force. “The first generation of 100GBASE-LR/ER optics uses a 10x10 electrical interface that then goes to 4x25 WDM optics.”
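A quick arithmetic check of that lane mapping, as a sketch using the 64b/66b-encoded lane rates commonly cited for the draft (these exact figures are not in the article): ten electrical lanes and four WDM optical lanes carry the same aggregate, so the 10:4 gearbox changes the lane count, not the capacity.

# Hedged sketch: the 10:4 re-mapping in first-generation 100GBASE-LR/ER modules
# preserves aggregate capacity. Lane rates are the commonly cited
# 64b/66b-encoded values, not figures quoted in the article.
electrical = 10 * 10.3125    # ten 10.3125Gbit/s electrical lanes
optical = 4 * 25.78125       # four 25.78125Gbit/s WDM optical lanes
assert electrical == optical == 103.125
print(f"10 x 10.3125G = {electrical}G  ->  4 x 25.78125G = {optical}G")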

The short reach interfaces reuse 10Gbps VCSEL and receiver technology and are designed for high density, power-sensitive applications. “The IEEE chose to keep the reach to 100m to give a low cost solution that hits the biggest market,” says D’Ambrosia, although he admits that a 100m reach is limiting for certain customers.

Cisco Systems agrees. “Short reach will limit you,” says Ian Hood, Cisco’s senior product marketing manager for service provider routing and switching. “It will barely get across the central office but it can be used to extend capacity within the same rack.” For this reason Cisco favours longer reach interfaces but will use short reach ‘where convenient’.

D’Ambrosia would not be surprised if a 1 to 2km single-mode fibre variant were developed, though not as part of the current standards. Meanwhile, the Ethernet Alliance has called for an industry discussion on a 40Gbit/s serial initiative.

Within the data centre, both 40GbE and 100GbE reaches have a role.

A two-layer switching hierarchy is commonly used in data centres. Servers connect to top-of-rack switches that funnel traffic to aggregation switches that, in turn, pass traffic to the core switches. Top-of-rack switches will continue to receive 1GbE and 10GbE streams for a while yet but the interface to aggregation switches will likely be 40GbE. In turn, aggregation switches will receive 40GbE streams and use either 40GbE or 100GbE to interface to the core switches. Not surprisingly, first use of 100GbE interfaces will be to interconnect core Ethernet switches.
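As a rough illustration of why the uplink speeds matter in that hierarchy, the following sketch computes the oversubscription ratio at each tier; all port counts below are hypothetical and not from the article.

# Hypothetical port counts (not from the article) for the two-tier hierarchy
# described above, showing how 40GbE and 100GbE uplinks affect oversubscription.

def oversubscription(down_ports, down_gbps, up_ports, up_gbps):
    """Ratio of downstream (server-facing) to upstream (uplink) capacity."""
    return (down_ports * down_gbps) / (up_ports * up_gbps)

# Top-of-rack: 48 x 10GbE server ports, 4 x 40GbE uplinks to aggregation.
tor = oversubscription(48, 10, 4, 40)

# Aggregation: 16 x 40GbE from the racks, 4 x 100GbE uplinks to the core.
agg = oversubscription(16, 40, 4, 100)

print(f"top-of-rack oversubscription: {tor:.1f}:1")   # 3.0:1
print(f"aggregation oversubscription: {agg:.1f}:1")   # 1.6:1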

Extended-reach 100GbE interfaces will be used to connect equipment up to 40km apart, between two data centres for example, but only when a single 100GbE link over the fibre pair is sufficient; otherwise, dense WDM technology will be used.

Servers will take longer to migrate to 40 and 100GbE. “There are no 40GbE interfaces on servers,” says Daryl Inniss, Ovum’s vice president and components practice leader. “Ten gigabit interfaces only started to be used [on servers] last year.” Yet the IT manager in one leading German computing centre, albeit an early adopter, told ADVA that he could already justify using a 40GbE server interface and expected 100GbE interfaces would be needed by 2012.

Two pluggable form factors have already been announced for 100GbE. The CFP supports all three link distances and has been designed with long-haul transmission in mind, says Matt Traverso, senior manager of technical marketing at Opnext. The second, the CXP, is designed for compact short-reach interfaces. For 40GbE, more work on form factors is needed.

The core router card Juniper has announced uses the CFP to implement a 100m connection. The CFP is being used to connect the router to a DWDM platform, for IP traffic transmission between points of presence and for data centre trunking.

So will one 40GbE or 100GbE interface standard dominate early demand? Opnext’s Traverso thinks not. “All the early adopters have one or two favourite interfaces – high-performance computing favours 40 and 100GbE short reach, while for core routers it is long-reach 100GbE,” he says. “[Vendors will address] all the early adopters’ chosen interfaces before they round out their portfolios.”

This article appeared in the exhibition magazine at ECOC 2009.

 

Article originally appeared on Gazettabyte (https://www.gazettabyte.com/).
See website for complete article licensing information.