NVIDIA CUDA-X AI and HPC software stack now available on Marvell's Arm-based ThunderX platform
Marvell announced the availability of NVIDIA GPU support on its ThunderX family of Arm-based server processors. Following NVIDIA's June announcement that it would bring CUDA to the Arm architecture, Marvell has collaborated with NVIDIA to port NVIDIA's CUDA-X AI and HPC libraries, GPU-accelerated AI frameworks and software development tools to the ThunderX platform.
The computational performance and memory bandwidth of ThunderX2, Marvell's latest 64-bit Armv8-A-based server processor, combined with the parallel processing capabilities of NVIDIA GPUs, provide a compelling path to energy-efficient exascale computing.
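For developers, the practical upshot is that standard CUDA C++ code builds on an Arm host the same way it does on x86. The following is a minimal, hypothetical sketch, not drawn from the announcement: a plain vector-add kernel that, assuming a CUDA-on-Arm toolkit is installed on an aarch64 ThunderX2 system, could be compiled unchanged with nvcc (for example, nvcc vecadd.cu -o vecadd).

// Minimal sketch: an ordinary CUDA vector-add kernel. Nothing here is
// Arm-specific; the point is that existing CUDA sources are expected to
// build unchanged on an aarch64 host with the CUDA-on-Arm toolkit.
// File name and problem size are illustrative assumptions.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void vecAdd(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    float *a, *b, *c;
    // Unified (managed) memory keeps the host-side code simple.
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);

    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(a, b, c, n);
    cudaDeviceSynchronize();

    printf("c[0] = %f (expected 3.0)\n", c[0]);

    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}

The same source would compile on an x86 host; only the host-side toolchain target differs, which is what makes the port of the CUDA-X libraries and frameworks transparent to application code.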
Artificial intelligence (AI) and machine learning (ML) are becoming essential components of data centre server requirements in the cloud and at the network edge. To address these evolving AI and ML workloads, as well as the most challenging and complex problems in science and research, supercomputers need processors optimised for high throughput, low application latency and power efficiency.
With an initial focus on computational science applications including GROMACS, NAMD, MILC and LAMMPS, ThunderX2 configurations paired with NVIDIA GPUs are demonstrating compelling application performance and improved efficiency.
“NVIDIA GPU support for our ThunderX2 server processor brings clear, differentiated value to meet the distinctive performance and power requirements of the exascale computing era,” said Gopal Hegde, Vice President and General Manager, Server Processor Business Unit at Marvell Semiconductor. “The availability of NVIDIA GPU acceleration on the Arm architecture will further expand the ThunderX2 ecosystem for HPC, cloud computing and edge markets, spurring innovation from low-level firmware through system software to commercial ISV applications.”
“The availability of CUDA acceleration for ThunderX2 processors marks a significant milestone in bringing the power efficiency and high performance of the Arm architecture to the infrastructure market,” said Chris Bergey, Senior Vice President and General Manager, Infrastructure Line of Business at Arm. “The breadth and depth of innovation across the ecosystem enables an easy migration path and robust support for existing and future GPU workloads from the edge to the cloud.”
Ian Buck, General Manager and Vice President of Accelerated Computing at NVIDIA, added: “NVIDIA GPU-accelerated computing on Arm provides customers worldwide with greater choice in building next-gen AI-enabled supercomputers. Combining NVIDIA’s unmatched platform for AI and HPC with Marvell’s powerful ThunderX2 Arm-based server processors is already delivering impressive application performance.”
ThunderX2 is a widely supported Armv8-A server processor with an ecosystem of more than 100 partners across commercial, open-source and industry-standards engagements. Support for NVIDIA's full software stack enables the acceleration of more than 600 HPC applications and AI frameworks on ThunderX2 systems.
Steve Cooper, CEO at One Stop Systems, said: “Our collaboration with Marvell enables us to support servers with the industry-leading performance of ThunderX2 with our SC8000 compute acceleration expansion platform, bringing data centre AI capabilities to a host of edge applications. The SC8000 is the industry's first solution that incorporates NVIDIA Tesla GPUs with NVLink and Arm servers. The addition of the Arm-based architecture to our solutions extends the use cases for our customers' AI on the Fly edge appliances.”
“Red Hat and Marvell have a long history of collaborating in the Arm server ecosystem, helping to bring open, industry-wide standards to enterprise Arm architecture,” said Chris Wright, Senior Vice President and Chief Technology Officer at Red Hat. “Enabling NVIDIA GPUs on ThunderX-based systems paired with the CUDA-X SDK and libraries supports customer choice in terms of architecture for running HPC, AI and ML applications on top of Red Hat platforms.”