Delivering ultimate efficiency for next-gen AI SoCs
NetSpeed Systems has announced the introduction of Orion AI, a system-on-chip (SoC) interconnect solution targeted specifically at AI-enabled SoC applications.
Orion AI includes advanced features such as multicast and broadcast to improve performance and efficiency in AI-enabled SoCs and in the accelerator ASICs used for datacentres, autonomous vehicles and AR/VR. Orion AI builds on NetSpeed’s silicon-proven Orion IP, which has been licensed to AI companies including Horizon Robotics, Cambricon, Baidu and Esperanto.
Artificial Intelligence (AI) is making its way into many applications, including vision, speech, forecasting, robotics and diagnostics. These emerging applications require a whole new level of processing capability and are driving sweeping changes in computational architectures and a shift in SoC design.
Sundari Mitra, CEO of NetSpeed, stated: “Inside these new SoCs there is a new data flow. Typically, there are a large number of compute elements which need to perform peer-to-peer data exchange rapidly and efficiently. Previous architectures operated differently, with processing units using a central memory as an interchange system. AI systems need ‘any-to-any’ data exchanges that benefit from wide interfaces and need to support long bursts. One of the key advantages that Orion AI brings is the ability to support many multicast requests and to support non-blocking transfers.”
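To make the multicast point concrete, the toy model below counts how many transactions a source must inject to deliver the same payload to many compute elements, with and without fabric-level multicast. It is a minimal illustrative sketch; the class and method names are hypothetical and do not describe NetSpeed’s implementation.

```python
# Toy model only: contrasts per-destination unicast fan-out with a single
# fabric-replicated multicast request. Not NetSpeed's implementation.

class ToyFabric:
    """Counts transactions a source injects to reach a set of destinations."""

    def __init__(self):
        self.injected_transactions = 0

    def unicast(self, payload: bytes, destinations: list[str]) -> None:
        # Without multicast, the source injects one copy per destination.
        for _ in destinations:
            self.injected_transactions += 1

    def multicast(self, payload: bytes, destinations: list[str]) -> None:
        # With multicast, the source injects one request and the fabric
        # replicates it towards every destination.
        self.injected_transactions += 1


if __name__ == "__main__":
    consumers = [f"pe{i}" for i in range(64)]  # 64 compute elements sharing one tensor
    weights = bytes(4096)

    f1, f2 = ToyFabric(), ToyFabric()
    f1.unicast(weights, consumers)
    f2.multicast(weights, consumers)
    print(f"unicast injections:   {f1.injected_transactions}")   # 64
    print(f"multicast injections: {f2.injected_transactions}")   # 1
```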
Orion AI was designed for high performance, delivering terabits of on-chip bandwidth on an underlying architecture that can scale to thousands of compute engines. It provides wide data paths, with interfaces of up to 1024 bits (wider still for internal structures), and can support long bursts of up to 4KB.
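As a rough sense of scale, the short calculation below works out how many beats a 4KB burst occupies on a 1024-bit interface and the peak per-interface bandwidth that width implies. The 1.5 GHz clock is an assumed figure for illustration only; the announcement does not quote a clock frequency.

```python
# Back-of-the-envelope arithmetic; interface width and burst size come from the
# text above, while the clock frequency is a hypothetical assumption.

INTERFACE_WIDTH_BITS = 1024          # widest external interface
BURST_SIZE_BYTES = 4 * 1024          # 4KB maximum burst
ASSUMED_CLOCK_HZ = 1.5e9             # hypothetical fabric clock, not from NetSpeed

bytes_per_beat = INTERFACE_WIDTH_BITS // 8            # 128 bytes per clock cycle
beats_per_burst = BURST_SIZE_BYTES // bytes_per_beat  # cycles to move one burst
peak_bandwidth_bps = INTERFACE_WIDTH_BITS * ASSUMED_CLOCK_HZ

print(f"{beats_per_burst} beats per 4KB burst")                    # 32
print(f"{peak_bandwidth_bps / 1e12:.2f} Tb/s peak per interface")  # ~1.54 Tb/s
```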
Orion AI is powered by NetSpeed’s Turing machine learning engine, which uses supervised learning to explore and optimise SoC design and architecture.