Accelerator speeds deployment of deep learning models
Authorised distributor Mouser Electronics is now stocking the GroqCard Accelerator from BittWare.
This double-width PCIe form factor ML accelerator provides easy deployment paths for PyTorch, TensorFlow, and ONNX-trained deep learning models.
The GroqCard Accelerator is a versatile choice for accelerating artificial intelligence (AI), HPC, and machine learning (ML) workloads for financial, government, generative AI, energy, and industrial applications.
The BittWare GroqCard Accelerator features nine RealScale chip-to-chip connections so that multiple cards can be deployed as efficiently as one, providing near-linear multi-server and multi-rack scalability without external switches.
Error-correction code (ECC) protects against data corruption for improved uptime and reliability.
The BittWare GroqCard Accelerator features the fully deterministic GroqChip processor for enhanced scalability.
By reducing data movement, the GroqChip guarantees predictable, bottleneck-free, low-latency performance. The standalone chip allows for flexible integration into compute-intensive applications, while its simplified architecture and software-first focus make the GroqChip processor easier to program than a GPU.
The GroqWare Suite, a comprehensive software stack comprising the Groq Compiler, the Groq API, and utilities, simplifies integration and setup and is compatible with industry-standard AI and ML frameworks.
The GroqFlow Tool Chain, included in the GroqWare Suite, lets a single line of PyTorch or TensorFlow code import and transform an existing model. The BittWare GroqCard Accelerator delivers up to 750 TOPS (INT8) and 188 TFLOPS (FP16) at a clock frequency of 900 MHz, with 230 MB of SRAM per chip and up to 80 TB/s of on-die memory bandwidth for high-speed data access and processing.
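To illustrate the single-line workflow mentioned above, the following is a minimal sketch of handing an existing PyTorch model to GroqFlow's groqit() entry point, as shown in Groq's public GroqFlow examples. The TinyClassifier model and its input shape are hypothetical placeholders, and exact API details may vary between GroqFlow releases.

```python
# Minimal sketch: compiling an existing PyTorch model for Groq hardware
# via GroqFlow's single-call interface. Assumes the groqflow package is
# installed and a GroqCard is available; details may vary by release.
import torch
from groqflow import groqit


# Hypothetical placeholder standing in for an existing trained model.
class TinyClassifier(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(128, 10)

    def forward(self, x):
        return self.linear(x)


model = TinyClassifier()
sample_inputs = {"x": torch.randn(1, 128)}

# The single line that imports and transforms the model for the GroqChip.
gmodel = groqit(model, sample_inputs)

# The returned object can then be called like the original PyTorch model,
# with execution offloaded to the accelerator.
outputs = gmodel(**sample_inputs)
```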