Dual Xeon OpenVPX blade with high compute density launched

26th April 2018
Lanna Deamer

Rackmount servers have their place, yet already-deployed defence platforms and the world’s militaries often prefer tried-and-true OpenVPX-style systems for legacy card- and system interoperability. Until now, upgrading those systems with the best 'rack style' server compute engine wasn’t possible using OpenVPX. General Micro Systems (GMS) has changed this entirely with the launch of a 6U, dual-CPU OpenVPX server blade with two of Intel’s Xeon processors - plus the rest of the server, including storage - all on one blade.

With the 'Phoenix' VPX450 OpenVPX 'motherboard' installed in a deployed, rugged air-cooled chassis, server-room performance is available to airborne, shipboard, vetronics and battlefield installations where rackmount servers don't fit or are inappropriate due to size, ruggedness or foreign sourcing. Phoenix offers raw server performance, onboard I/O and high-speed data transfer to the rest of the OpenVPX system.

The single-blade server includes 44 cores and 88 virtual machines, 1TB of 2,133MT/s ECC DDR4 DRAM, 80 lanes of PCIe Gen 3 serial interconnect and dual 40 Gigabit Ethernet, plus storage and I/O.
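
For context, the headline figures follow from the dual-socket configuration, assuming Intel Hyper-Threading supplies two hardware threads per core (presumably the basis of the 'virtual machines' count): $2 \times 22 = 44$ cores and $44 \times 2 = 88$ threads.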

“The VPX450 outclasses all other options, packing more compute and communications power than has ever been available in 6U,” said Ben Sharfi, CEO and Chief Architect, GMS. “You cannot top these specs and performance. There’s no way.”

The VPX450 'Phoenix' boasts:

  1. Server Engine - Dual-socket 2.2GHz Intel Xeon E5 v4 processors, each with 22 cores, provide 44 cores and 88 virtual machines on one blade, plus 1TB of DDR4 with ECC at 2,133MT/s. The CPUs are reliably cooled using GMS's patented RuggedCool heatsinks and CPU retainers for maximum thermal transfer without CPU throttling.
  2. Interconnect Fabric - 80 PCIe Gen 3 lanes at 8Gbps per lane move data between on-card subsystems, while 68 PCIe Gen 3 lanes run to the OpenVPX backplane. The industry's fastest, they assure 544Gbps of bandwidth between the Phoenix server and the OpenVPX backplane switch matrix or compute nodes (see the arithmetic after this list). Eight native SATA III lanes connect across the backplane to mass storage card(s).
  3. Networking - Dual front-panel QSFP+ sockets accept Ethernet inserts for 10Gb and 40Gb, in either copper or fibre. 40Gb Ethernet is among the fastest IEEE networking standards in commercial deployment, and it's available in this single-blade server. In the typical use case, dual 40Gb Ethernet fibre connections provide long-haul communication to distant sensors or intelligent nodes. Two local Ethernet ports (1GbE and 100Base-T) provide service connections for 'low speed' networking.
  4. Flexible Add-in Storage and I/O - The VPX450 can add up to four different types of plug-on modules. There are dual SAM I/O PCIe-Mini sites, usually used for MIL-STD-1553 and legacy military I/O; these sites also accept mSATA SSDs for server data storage. An XMC front-panel module provides plug-in I/O such as a video frame grabber or software-defined radio. Lastly, GMS provides an XMC carrier equipped with an M.2 site, used either for storage (OS boot, for example) or for more add-in I/O.
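
The 544Gbps backplane figure quoted in the interconnect item above follows directly from the lane count, assuming each PCIe Gen 3 lane carries the stated 8Gbps: $68 \times 8\,\mathrm{Gbps} = 544\,\mathrm{Gbps}$ of aggregate bandwidth.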

Besides acting as a traditional OpenVPX 'slot 1 controller', the VPX450 server blade can be used as part of a compute cluster, with each Phoenix blade delivering a PassMark score of 34,330 (February 2018).

Inter-card communication over the 68 PCIe connections can be used to create a High Performance Cluster Computing (HPCC) system via Symmetric Multiprocessing (SMP) for data mining, augmented/virtual reality or blockchain computation.
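
For a sense of how such a cluster might be exercised in software, the following is a minimal sketch only; the article names no software stack, so it assumes the Phoenix blades run Linux with a standard MPI implementation (for example Open MPI) communicating over the backplane or 40Gb Ethernet fabric. Each rank performs a stand-in computation and rank 0 gathers the reduced result.

/* Minimal sketch only: assumes Linux blades with an MPI implementation
   (e.g. Open MPI) reachable over the chassis fabric. Each rank computes a
   partial value (a stand-in for real work such as a data-mining kernel)
   and rank 0 collects the reduced total. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);               /* start the MPI runtime */

    int rank = 0, size = 1;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* this process's rank */
    MPI_Comm_size(MPI_COMM_WORLD, &size); /* total ranks in the cluster */

    /* Each rank contributes its own partial result. */
    double partial = (double)(rank + 1);

    /* Combine all partial results on rank 0. */
    double total = 0.0;
    MPI_Reduce(&partial, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("Reduced total over %d ranks: %.1f\n", size, total);

    MPI_Finalize();
    return 0;
}

Built with mpicc and launched with mpirun across the blades, the same program scales from a single VPX450 to a multi-blade chassis without code changes.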
