Embedded MXM graphics on NVIDIA Turing architecture
ADLINK Technology has introduced embedded MXM graphics modules built on the NVIDIA Turing architecture to accelerate edge AI inference in applications constrained by size, weight and power (SWaP). GPUs are increasingly used to provide AI inferencing at the edge, where SWaP is a key consideration.
The embedded MXM graphics modules offer the high compute power required to transform data at the edge into actionable intelligence, and come in a standard form factor for systems integrators, ISVs and OEMs, increasing choice in both power and performance.
“The new embedded MXM graphics modules provide the perfect balance between size, weight and power for edge applications, where the demand for more processing power continues to increase,” said Zane Tsai, Director of Platform Product Center, ADLINK. “Leveraging NVIDIA’s GPUs based on the Turing architecture, our customers can now increase their edge processing performance with ruggedised modules that are fit for any environment, while remaining inside their SWaP envelope.”
ADLINK’s embedded MXM graphics modules accelerate edge computing and edge AI in a myriad of compute-intensive applications, particularly in harsh or environmentally challenging settings, such as those with limited or no ventilation or with exposure to corrosive conditions. Examples include medical imaging, industrial automation, biometric access control, autonomous mobile robots, transportation, and aerospace and defense. The need for high-performance, low-power GPU modules is increasingly critical as AI at the edge becomes more prevalent.
The ADLINK embedded MXM graphics modules:
- Provide acceleration with NVIDIA CUDA, Tensor and RT Cores (see the sketch after this list).
- Are one-fifth the size of full-height, full-length PCI Express graphics cards.
- Offer more than three times the product lifecycle of non-embedded graphics cards.
- Consume as little as 50 W of power.
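As an illustration of how a systems integrator might verify a Turing-class module from software, the sketch below uses the standard CUDA runtime API to enumerate GPUs and check for compute capability 7.5, which Turing devices report. This is a generic, hedged example rather than ADLINK-provided code; the capability check and the printed fields are assumptions chosen for illustration.

```cpp
// Minimal sketch: query the on-module GPU with the CUDA runtime API and
// confirm a Turing-class device (compute capability 7.5) before dispatching
// inference work. Generic example; not ADLINK- or NVIDIA-specific tooling.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        std::printf("No CUDA-capable GPU found\n");
        return 1;
    }
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop{};
        cudaGetDeviceProperties(&prop, i);
        // Turing GPUs report compute capability 7.5 (assumed check for illustration).
        bool is_turing = (prop.major == 7 && prop.minor == 5);
        std::printf("GPU %d: %s | %.1f GiB | %d SMs | Turing: %s\n",
                    i, prop.name,
                    prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0),
                    prop.multiProcessorCount,
                    is_turing ? "yes" : "no");
    }
    return 0;
}
```

On a Turing-based MXM module this would typically report compute capability 7.5 along with the on-module memory size, giving a quick sanity check that CUDA, Tensor and RT Core acceleration is available before deploying an edge inference workload.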