AMD joins PyTorch Foundation to promote AI and ML capabilities
AMD has announced it is joining the newly created PyTorch Foundation as a founding member.
The foundation, which will be part of the non-profit Linux Foundation, will drive adoption of Artificial Intelligence (AI) tooling by fostering and sustaining an ecosystem of open-source projects around PyTorch, the Machine Learning (ML) framework originally created by Meta.
As a founding member, AMD joins other industry leaders in prioritising the continued growth of PyTorch’s vibrant community. Drawing on innovations such as the AMD ROCm open software platform, AMD Instinct accelerators, adaptive SoCs and CPUs, AMD will support the PyTorch Foundation’s work to democratise state-of-the-art tools, libraries and other components, making these ML innovations accessible to everyone.
“Open software is critical to advancing HPC, AI and ML research, and we’re ready to bring our experience with open software platforms and innovation to the PyTorch Foundation,” said Brad McCredie, Corporate Vice President, Data Centre and Accelerated Processing, AMD. “AMD Instinct accelerators and ROCm software power important HPC and ML sites around the world, from exascale supercomputers at research labs to major cloud deployments showcasing the convergence of HPC and AI/ML. Together with other foundation members, we will support the acceleration of science and research that can make a dramatic impact on the world.”
“We are excited to have AMD join the PyTorch Foundation and bring its extensive expertise in HPC, AI and ML to our members,” said Santosh Janardhan, VP, Infrastructure at Meta. “AMD has continued to support PyTorch through its integration with the ROCm open software platform and has worked extensively with the open-source community and other foundation members to advance the performance of ML and AI workloads. The collaborative support offered by AMD continues our engagement across broad industry initiatives for global impact.”
AMD, advancing AI and ML
AMD’s broad product and software portfolio helps its customers and partners develop and deploy applications with multiple forms of AI, from the cloud and enterprise to the edge and endpoints.
With a diverse hardware portfolio that includes AMD Instinct and Alveo accelerators, adaptive SoCs and CPUs, AMD can support a wide variety of pervasive AI and ML models, from small models at edge endpoints to large scale-out training and inference workloads.
AMD also works extensively with the open AI community to promote and extend machine learning and deep learning capabilities and optimisations. Vitis AI provides a comprehensive AI inference development platform for AMD adaptive SoCs and Alveo data centre accelerators.
Vitis AI plugs into common software developer tools and utilises a rich set of optimised open-source libraries, letting software developers add machine learning acceleration directly to their application code, as the sketch below illustrates.
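To give a sense of that developer flow, the following is a minimal sketch of running a compiled model through the Vitis AI Runtime (VART) Python API on an adaptive SoC or Alveo card. The model file name is a hypothetical placeholder, the input data is dummy data, and a real application would add pre- and post-processing around the runner; the xir/vart calls follow the pattern used in the Vitis AI examples.

```python
# Minimal sketch: running inference with the Vitis AI Runtime (VART) Python API.
# "resnet50.xmodel" is a hypothetical placeholder for a model compiled with Vitis AI.
import numpy as np
import vart
import xir

# Load the compiled model graph and select the subgraph mapped to the DPU accelerator.
graph = xir.Graph.deserialize("resnet50.xmodel")
subgraphs = graph.get_root_subgraph().toposort_child_subgraph()
dpu_subgraph = next(
    s for s in subgraphs
    if s.has_attr("device") and s.get_attr("device").upper() == "DPU"
)

# Create a runner bound to that subgraph.
runner = vart.Runner.create_runner(dpu_subgraph, "run")

# Allocate input/output buffers matching the model's tensor shapes
# (dummy zeros here; real code would fill in preprocessed image data).
input_tensor = runner.get_input_tensors()[0]
output_tensor = runner.get_output_tensors()[0]
input_data = [np.zeros(tuple(input_tensor.dims), dtype=np.float32, order="C")]
output_data = [np.zeros(tuple(output_tensor.dims), dtype=np.float32, order="C")]

# Submit the job to the accelerator and wait for it to finish.
job_id = runner.execute_async(input_data, output_data)
runner.wait(job_id)
print("Top prediction index:", int(np.argmax(output_data[0])))
```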
The ROCm open software platform is constantly evolving to meet the needs of the AI/ML and HPC community. With the latest ROCm 5.0 release, developers have access to turnkey AI framework containers on AMD Infinity Hub, advanced tools and streamlined installation, along with reduced kernel launch times and faster application performance. In addition, with the PyTorch 1.12 release, AMD ROCm support in PyTorch has moved from beta to stable.
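For developers picking up that stable support, the snippet below is a minimal sketch of how one might confirm that a ROCm-enabled PyTorch build sees an AMD GPU. On ROCm wheels, PyTorch exposes the device through the familiar torch.cuda API (backed by HIP), and torch.version.hip is populated instead of torch.version.cuda; the matrix multiply is just an assumed smoke test, not part of any official verification procedure.

```python
# Minimal sketch: verifying a ROCm-enabled PyTorch build and running a small workload.
import torch

print("PyTorch version:", torch.__version__)
print("HIP/ROCm version:", torch.version.hip)          # None on CUDA/CPU-only builds
print("Accelerator available:", torch.cuda.is_available())

if torch.cuda.is_available():
    device = torch.device("cuda")                       # maps to the ROCm device on AMD GPUs
    print("Device:", torch.cuda.get_device_name(0))

    # Tiny matrix multiply as a smoke test of the accelerated path.
    a = torch.randn(1024, 1024, device=device)
    b = torch.randn(1024, 1024, device=device)
    c = a @ b
    print("Result shape:", tuple(c.shape))
```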