Boost your VNF’s performance by 30x without lifting a finger

9th February 2017
Anna Flockett

As service providers move beyond initial trials of Network Functions Virtualisation (NFV) and start to plan actual deployments of virtualised applications, the economics of this transition come under increasing scrutiny.

Guest blog by Charlie Ashton.

After all, why take the risk of deploying new technology unless the Return on Investment (RoI) is both significant and quantifiable?

One of the major goals of NFV is to reduce Operating Expenses (OPEX), and one specific element of the NFV architecture that has a major effect on OPEX is the virtual switch, or vSwitch. As part of the NFV infrastructure platform, the vSwitch is responsible for switching network traffic between the physical world (the core network) and the virtual world (the Virtual Network Functions, or VNFs).

Because the vSwitch runs on the same server platform as the VNFs, processor cores required for running the vSwitch are not available for running VNFs, and this can have a significant effect on the number of subscribers that can be supported on a single server blade. This in turn impacts the overall operational cost per subscriber and has a major influence on the OPEX improvements that can be achieved by a move to NFV.
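As a purely illustrative example (these numbers are hypothetical, not measured results): on a 20-core server blade, a vSwitch that needs six cores to sustain the required traffic leaves 14 cores for VNFs, while one that does the same job on three cores leaves 17. That is roughly 20 per cent more subscriber capacity, and a correspondingly lower cost per subscriber, from identical hardware.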

In this blog I will explain how an important new feature in the latest version of the Wind River Titanium Server virtualisation platform enables VNF suppliers to accelerate their packet throughput by up to 30x without changing a single line of code, thanks to the Accelerated vSwitch (AVS) that is an integral part of Titanium Server. VNFs based on the Intel DPDK library can boost their performance significantly further with just a simple recompilation.

Within the Titanium Cloud ecosystem, there are almost 30 partners who provide VNFs. For most of these companies, the performance of their VNF is a key element not only in how they differentiate themselves from their competitors but also in how they help their service provider customers quantify the business advantages of migrating from physical network appliances to virtualised applications.

When a partner has an existing VNF that they want to run on Titanium Server, their first objective is typically a functional test to confirm that the application behaves identically on Titanium Server as it does on another virtual switch such as Open vSwitch (OVS). As long as the VNF uses the standard VirtIO Linux driver (and they all do), this is a quick step. AVS is fully compatible with VirtIO, so the existing VNF runs unmodified on Titanium Server: no code changes, no recompilation.

That first step results in a VNF that runs fine on Titanium Server, but it doesn’t deliver a performance boost. To take advantage of the performance features in AVS, there are a couple of options available to our partners.

The first option:
The latest version of Titanium Server includes full support for vhost-user, the DPDK-based, user-space back end for VirtIO networking. vhost-user reduces virtualisation overhead by moving VirtIO packet processing tasks out of the QEMU process and handing them directly to the DPDK-accelerated vSwitch via the vhost-user driver.

This results in lower latency and better performance than the standard VirtIO kernel path. To take advantage of the vhost support in Titanium Server, the VNF supplier only needs to make sure that their VNF is running on the latest version of Titanium Server. No changes are required to the VNF itself.
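To make the division of labour concrete, here is a rough sketch, using the generic open-source DPDK vhost library rather than any Wind River code, of how a user-space switch registers a vhost-user socket and is called back when a guest's VirtIO device connects. QEMU only brokers the socket; the virtqueues themselves are serviced by the switch. The socket path is arbitrary and the symbol names assume a recent DPDK release (they were renamed around DPDK 17.05).

    /* Illustrative sketch only, using the generic DPDK vhost-user backend API
     * (librte_vhost); this is not Titanium Server or AVS source code. */
    #include <stdio.h>
    #include <unistd.h>
    #include <rte_eal.h>
    #include <rte_vhost.h>

    static int new_device(int vid)            /* a guest has connected */
    {
        printf("vhost-user device %d is ready\n", vid);
        /* a real vSwitch would now poll the guest's virtqueues with
         * rte_vhost_dequeue_burst() / rte_vhost_enqueue_burst() */
        return 0;
    }

    static void destroy_device(int vid)       /* the guest has disconnected */
    {
        printf("vhost-user device %d removed\n", vid);
    }

    static const struct vhost_device_ops ops = {
        .new_device     = new_device,
        .destroy_device = destroy_device,
    };

    int main(int argc, char **argv)
    {
        /* hypothetical socket path; QEMU is pointed at the same path via its
         * vhost-user chardev/netdev options */
        const char *sock = "/tmp/vhost-user0.sock";

        if (rte_eal_init(argc, argv) < 0)
            return -1;
        if (rte_vhost_driver_register(sock, 0) != 0)
            return -1;
        rte_vhost_driver_callback_register(sock, &ops);
        rte_vhost_driver_start(sock);         /* spawns the vhost session thread */

        pause();                              /* work arrives via the callbacks */
        return 0;
    }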

When running VNFs on a platform based on OVS, enabling a VirtIO vhost back end in the host will typically deliver a performance improvement of up to 15x compared to a baseline VirtIO kernel implementation.

But using vhost in a VNF running on Titanium Server will typically double that performance, resulting in a performance improvement of up to 30x compared to using VirtIO kernel interfaces with OVS, depending of course on the details of the VNF and its actual bandwidth requirements.

The second option:
This applies if the VNF has been designed to use DPDK. In this case, much higher performance is possible when using Titanium Server, simply by linking in an open-source AVS-aware driver, which in our experience takes 15 minutes or so. The AVS DPDK Poll Mode Driver (PMD) is available free of charge from Wind River's open-source repository, hosted on GitHub.

Just as with the vhost scenario, there's no need to maintain a special version of the VNF for use with AVS: once the AVS DPDK PMD has been compiled into the VNF, it is initialised at runtime whenever the VNF finds itself running on a virtualisation platform that it detects to be Titanium Server.
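As a rough illustration of why no special AVS build of the application logic is needed, the sketch below shows a minimal DPDK receive/transmit loop in the style of the standard DPDK skeleton example. The port number, queue sizes and mempool parameters are arbitrary assumptions, and this is not Wind River code; the point is that the application only talks to the generic ethdev API, so swapping the virtio PMD for the AVS PMD at link time leaves it untouched.

    /* Minimal DPDK VNF datapath sketch (standard skeleton-style forwarding).
     * Nothing here names a specific poll-mode driver: whichever PMD is linked
     * in (virtio-net, an AVS-aware PMD, or a physical NIC driver) is probed by
     * rte_eal_init() and then appears as an ordinary Ethernet port. */
    #include <stdlib.h>
    #include <rte_eal.h>
    #include <rte_debug.h>
    #include <rte_lcore.h>
    #include <rte_mbuf.h>
    #include <rte_ethdev.h>

    #define PORT_ID    0    /* assume a single virtual NIC, exposed as port 0 */
    #define BURST_SIZE 32

    int main(int argc, char **argv)
    {
        struct rte_eth_conf port_conf = {0};  /* default config is enough here */
        struct rte_mempool *pool;

        if (rte_eal_init(argc, argv) < 0)     /* probes the available PMDs */
            rte_exit(EXIT_FAILURE, "EAL init failed\n");

        pool = rte_pktmbuf_pool_create("mbufs", 8192, 256, 0,
                                       RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());
        if (pool == NULL)
            rte_exit(EXIT_FAILURE, "mbuf pool creation failed\n");

        /* one RX queue and one TX queue on the (virtual) port */
        if (rte_eth_dev_configure(PORT_ID, 1, 1, &port_conf) < 0 ||
            rte_eth_rx_queue_setup(PORT_ID, 0, 128,
                                   rte_eth_dev_socket_id(PORT_ID), NULL, pool) < 0 ||
            rte_eth_tx_queue_setup(PORT_ID, 0, 512,
                                   rte_eth_dev_socket_id(PORT_ID), NULL) < 0 ||
            rte_eth_dev_start(PORT_ID) < 0)
            rte_exit(EXIT_FAILURE, "port setup failed\n");

        for (;;) {
            struct rte_mbuf *bufs[BURST_SIZE];
            uint16_t nb_rx = rte_eth_rx_burst(PORT_ID, 0, bufs, BURST_SIZE);

            /* ...VNF-specific packet processing would happen here... */

            uint16_t nb_tx = rte_eth_tx_burst(PORT_ID, 0, bufs, nb_rx);
            while (nb_tx < nb_rx)             /* drop anything not transmitted */
                rte_pktmbuf_free(bufs[nb_tx++]);
        }
        return 0;
    }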

Adding the AVS DPDK PMD to a VNF will typically deliver a performance improvement of up to 40x compared to using VirtIO kernel interfaces with OVS, depending of course on the details of the VNF itself and its actual bandwidth requirements.

Having worked closely with many VNF partners, we have found AVS support to be seamless, quick and high-value, based on the performance improvements that it brings. The initial bring-up and functional test step requires no change to the VNF. Up to 30x performance (vs. VirtIO kernel interfaces on OVS) is achieved with no code changes at all, simply by using the standard VirtIO interface.

For DPDK-based VNFs, a straightforward recompilation to add the AVS DPDK PMD results in up to a 40x performance improvement compared to a configuration using VirtIO kernel interfaces.

By using whichever of these two open-source drivers is applicable, VNF partners can fully leverage the performance features of AVS, allowing them to deliver VNFs with compelling performance to service providers deploying NFV in their infrastructure.

Courtesy of Wind River.
