Netronome prepares cards for SDN acceleration
Wednesday, April 30, 2014 at 8:34AM
Roy Rubenstein in FlowNICS, NFP-6xxx, NVGRE, Netronome, Open vSwitch, OpenFlow, Robert Truesdell, Software-defined networking, virtual switch

 

Source: Netronome

Netronome has unveiled 40 and 100 Gigabit Ethernet network interface cards (NICs) to accelerate data-plane tasks of software-defined networks.

Dubbed FlowNICs, the cards use Netronome's flagship NFP-6xxx network processor (NPU) and support open-source data centre technologies such as the OpenFlow protocol, the Open vSwitch virtual switch, and the OpenStack cloud computing platform.

Four FlowNIC cards have been announced, with throughputs of 2x40 Gigabit Ethernet (GbE), 4x40GbE, 1x100GbE and 2x100GbE. The NICs use either two or four PCI Express 3.0 x8 interfaces to achieve the 100 Gigabit and 200 Gigabit throughputs.
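
A back-of-the-envelope check, not taken from Netronome's specifications, shows why more than one host interface is needed: with 128b/130b encoding, an eight-lane PCI Express 3.0 link carries roughly 63 Gbps, so two such links comfortably cover 100 Gigabits of Ethernet traffic and four cover 200 Gigabits.

```python
# Back-of-the-envelope PCIe bandwidth check (illustrative only; real host
# throughput is lower once TLP headers and flow control are accounted for).

PCIE3_RAW_GTPS = 8.0              # PCIe 3.0 raw signalling rate per lane, GT/s
ENCODING_EFFICIENCY = 128 / 130   # 128b/130b line encoding
LANES_PER_INTERFACE = 8           # each FlowNIC host interface is a x8 link

per_lane_gbps = PCIE3_RAW_GTPS * ENCODING_EFFICIENCY       # ~7.88 Gbps per lane
per_interface_gbps = per_lane_gbps * LANES_PER_INTERFACE   # ~63 Gbps per x8 link

for interfaces, ethernet_gbps in [(2, 100), (4, 200)]:
    host_gbps = interfaces * per_interface_gbps
    print(f"{interfaces} x PCIe 3.0 x8 = ~{host_gbps:.0f} Gbps of host bandwidth "
          f"for {ethernet_gbps} Gbps of Ethernet throughput")
```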

"We are the first to provide these interface densities on an intelligent network interface card, built for SDN use-cases," says Robert Truesdell, product manager, software at Netronome. "We have taken everything that has been done in Open vSwitch that would traditionally run on an [Intel] x86-based server, and run it on our NFP-6xxx-based FlowNICs at a much faster rate." 

The cards have accompanying software that supports OpenFlow and Open vSwitch. Two additional software packages are also available: a cloud-services package, which handles tasks such as the tunnelling protocols used for network virtualisation, and a cyber-security package.

Implementing the switching and packet-processing functions on a FlowNIC card, instead of running the virtual switch in software on the host, frees up valuable compute resources on the server's CPUs, enabling data centre operators to better use their servers for revenue-generating tasks. "They are trying to extract the maximum revenue out of the server and that is what this buys," says Truesdell.

Designing the FlowNICs around a programmable network processor has other advantages. "As standards evolve, that allows us to do a software revision rather than rely on a hardware or silicon revision," says Truesdell. The OpenFlow specification, for example, is revised every 6-12 months, he says, the most recent release being version 1.4.

In the data centre, software-defined networking (SDN) uses a central controller to oversee the network connecting the servers. A server's NIC provides networking input/output, while the server runs the software-based virtualised switch and the hypervisor that supports the virtual machines and operating systems on which the applications run. Applications include security tasks such as SSL inspection, intrusion detection and intrusion prevention systems; load balancing; business applications like email, the hypertext transfer protocol and instant messaging; and gateway functions such as firewalls and virtual private networks (VPNs).

 

With Open vSwitch running on a server, SDN protocols such as OpenFlow carry communications between the data-plane device - either the soft switch or the FlowNIC - and the controller. "By supporting Open vSwitch and OpenFlow, it allows us to inherit compatibility with the [SDN] controller and orchestration layers," says Truesdell. The FlowNIC is compatible with controllers such as OpenDaylight and Ryu, and with VMware's NSX network virtualisation system, as well as orchestration platforms like OpenStack.
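
Ryu, one of the controllers named above, is an open-source Python framework, which makes the controller side of this exchange straightforward to illustrate. The sketch below is standard Ryu OpenFlow 1.3 boilerplate rather than anything Netronome-specific: when a data-plane device connects - a software switch, or a FlowNIC presenting itself as one - the application installs a table-miss rule so that unmatched packets are sent up to the controller.

```python
# Minimal Ryu OpenFlow 1.3 controller application (generic example, not
# Netronome code). Run with:  ryu-manager table_miss_app.py
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3


class TableMissApp(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def switch_features_handler(self, ev):
        datapath = ev.msg.datapath
        ofproto = datapath.ofproto
        parser = datapath.ofproto_parser

        # Lowest-priority rule matching everything: send unmatched packets
        # to the controller, which can then decide which flows to install.
        match = parser.OFPMatch()
        actions = [parser.OFPActionOutput(ofproto.OFPP_CONTROLLER,
                                          ofproto.OFPCML_NO_BUFFER)]
        inst = [parser.OFPInstructionActions(ofproto.OFPIT_APPLY_ACTIONS,
                                             actions)]
        datapath.send_msg(parser.OFPFlowMod(datapath=datapath, priority=0,
                                            match=match, instructions=inst))
```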

The virtual switch performs two main tasks. It inspects packet headers and passes traffic to the appropriate virtual machines, and it implements network virtualisation, setting up network overlays between servers so that virtual machines can talk to each other.
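
As a rough illustration of the overlay half of that job, the sketch below drives the standard ovs-vsctl command from Python to add a GRE tunnel port to an Open vSwitch bridge. The bridge name, port name, remote address and tunnel key are placeholders; in an OpenStack deployment the Neutron Open vSwitch agent would normally create such ports itself.

```python
import subprocess

def add_gre_tunnel(bridge: str, port: str, remote_ip: str, key: int) -> None:
    """Add a GRE overlay port to an Open vSwitch bridge (illustrative sketch)."""
    subprocess.run(
        ["ovs-vsctl", "add-port", bridge, port, "--",
         "set", "interface", port,
         "type=gre",
         f"options:remote_ip={remote_ip}",
         f"options:key={key}"],
        check=True)

# Example (placeholder values): tunnel to a peer hypervisor at 192.0.2.2,
# using tunnel key 100 to keep one tenant's traffic separate from another's.
# add_gre_tunnel("br-int", "gre0", "192.0.2.2", 100)
```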

The packet encapsulation and decapsulation required for network virtualisation hits a performance ceiling when implemented in the virtual switch, and throughput suffers. Moreover, packet header inspections can be nested - if, for example, encryption and network virtualisation are both used - further hurting performance. "The throughput rates can really suffer on a server because it is not optimised for this type of workload," says Truesdell.

Performing the tasks using the FlowNIC's network processor enables packet processing to keep up with the line rate, even for the most demanding, shortest packet sizes. The issue with packet size is that the shorter the packet (the smaller the data payload), the more frequent the header inspections. "Data centre traffic is very bursty," says Truesdell. "These are not long-lived flows - they have high connection rates - and this drives the packet sizes down."
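
The arithmetic behind that pressure is easy to sketch. Treating the quoted packet sizes as on-wire frame sizes and adding the fixed 20 bytes of Ethernet preamble and inter-frame gap per frame, the packet rate - and hence the header-inspection rate - at 10 Gbps works out roughly as follows:

```python
# Rough packets-per-second at a 10 Gbps line rate (illustrative arithmetic).
# Every frame on the wire also costs 8 bytes of preamble/start-of-frame
# delimiter and a 12-byte inter-frame gap, so small packets multiply the
# rate at which headers must be inspected.

LINE_RATE_BPS = 10e9
WIRE_OVERHEAD_BYTES = 8 + 12

for frame_bytes in (64, 400, 1500):
    bits_on_wire = (frame_bytes + WIRE_OVERHEAD_BYTES) * 8
    pps = LINE_RATE_BPS / bits_on_wire
    print(f"{frame_bytes:>5}-byte frames: ~{pps / 1e6:.2f} million packets/s")
```

At 64 bytes that is roughly 14.9 million packets a second, against well under a million a second for 1,500-byte packets.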

For a 10 Gigabit stream running the Network Virtualization using Generic Routing Encapsulation (NVGRE) protocol, the company's existing FlowNIC acceleration cards - based on its previous-generation NFP-3240 network processor - forward traffic at line rate for all packet sizes.

In contrast, NVGRE performance using Open vSwitch on the server is 9Gbps for lengthy 1,500-byte packets and falls steadily to 0.5Gbps for 64-byte packets. The average packet length is around 400 bytes, says Truesdell.
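
To make the per-packet work concrete, the sketch below shows a simplified NVGRE encapsulation step: per RFC 7637, the inner Ethernet frame is wrapped in a GRE header whose key field carries a 24-bit virtual subnet identifier and an 8-bit flow ID. A real data path would also prepend outer Ethernet and IP headers (IP protocol 47) and update checksums; the helper name here is purely illustrative.

```python
import struct

GRE_FLAGS_KEY_PRESENT = 0x2000           # only the Key Present bit set (RFC 7637)
GRE_PROTO_TRANSPARENT_ETHERNET = 0x6558  # payload is an inner Ethernet frame

def nvgre_encapsulate(inner_frame: bytes, vsid: int, flow_id: int = 0) -> bytes:
    """Wrap an inner Ethernet frame in an NVGRE (GRE) header - simplified sketch."""
    key = (vsid << 8) | flow_id   # 24-bit virtual subnet ID + 8-bit flow ID
    gre_header = struct.pack("!HHI",
                             GRE_FLAGS_KEY_PRESENT,
                             GRE_PROTO_TRANSPARENT_ETHERNET,
                             key)
    return gre_header + inner_frame
```

Every packet of a flow pays this wrapping cost on the way out and the corresponding unwrapping cost on the way in, which is why offloading the work to the NIC's network processor pays off most at small packet sizes.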

Overall, Netronome claims that with its FlowNICs, virtualised networking functions and server applications are boosted more than twentyfold compared with running the virtual switch on the server's CPU.

Netronome's FlowNIC-32xx cards are already used by one Tier 1 operator to perform gateway functions. The gateway, overseen using a Ryu controller running OpenFlow, translates between multi-tenant IP virtual LANs in the data centre and MPLS-based VPNs that connect the operator's enterprise customers.  

The NFP-6xxx-based FlowNICs will be available for early access partners later this quarter. FlowNIC customers include data centre equipment makers, original design manufacturers and the largest content service providers - the 'internet juggernauts' - that operate hyper-scale data centres.   

 

 For an article written for Fibre Systems on network virtualisation and data centre trends, click here
