Sarcina Technology launches AI platform
Sarcina Technology announces the launch of its innovative AI platform to enable advanced AI packaging solutions that can be tailored to meet specific customer requirements.
Leveraging ASE’s FOCoS-CL (Fan-Out Chip-on-Substrate, Chip-Last) assembly technology, this platform includes an interposer that supports chiplets using UCIe-A for die-to-die interconnects, enabling the delivery of cost-effective, customisable solutions.
Sarcina Technology is on a mission to push the boundaries of AI computing system development by providing a platform that enables efficient, scalable, configurable, and cost-effective semiconductor packaging solutions for AI applications. As AI workloads continue to evolve, there is a need for increasingly sophisticated packaging solutions capable of supporting higher computational demands. Sarcina’s novel interposer packaging technology integrates leading memory solutions with high-efficiency interconnects. Whether prioritising cost, performance, or power efficiency, Sarcina’s new AI platform can deliver.
According to Dr. Larry Zu, CEO of Sarcina Technology: "Six years ago, after prototyping a 2.5D silicon TSV interposer package that integrated one ASIC and two HBMs, we predicted this technology would enable highly complex compute solutions. Today, this vision is becoming a reality, driven by RDL die-to-die interconnects like UCIe."
Zu continues: "With FOCoS assembly technology, we are entering a new era of AI computing. Our AI platform offers greater efficiency and customisation, with the lowest cost in the industry for generative AI chips. This ensures that our customers stay competitive in the rapidly evolving AI landscape."
The Sarcina team has developed an interposer with up to a 64-bit data interface per module, achieving data rates of up to 32 GT/s per lane. This delivers the highest UCIe-A performance, in both bandwidth and data rate, specified by the UCIe 2.0 standard. To further increase data transfer throughput, multiple modules can be arranged in parallel along the silicon die edge. Customers can also choose between LPDDR5X/6 packaged memory chips and HBMs.
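As a quick sanity check on the figures above (a 64-bit data interface per module at 32 GT/s per lane), the raw per-module, per-direction bandwidth can be worked out as follows. This is back-of-the-envelope arithmetic only; the module counts in the loop are illustrative assumptions, not Sarcina-specified configurations.

```python
# Figures quoted in the announcement:
LANES_PER_MODULE = 64   # data bits (lanes) per UCIe-A module
RATE_GT_S = 32          # transfers per second per lane, in GT/s

# Raw per-module bandwidth in one direction (1 bit per transfer per lane).
raw_gbps = LANES_PER_MODULE * RATE_GT_S   # gigabits per second
raw_gBps = raw_gbps / 8                   # gigabytes per second

print(f"Raw per-module bandwidth: {raw_gbps} Gb/s ({raw_gBps:.0f} GB/s)")

# Placing modules in parallel along the die edge scales raw throughput
# linearly (illustrative module counts).
for modules in (1, 2, 4):
    print(f"{modules} module(s): {modules * raw_gBps:.0f} GB/s per direction")
```

At these quoted figures, each module provides 2,048 Gb/s (256 GB/s) of raw bandwidth per direction before any link-layer or encoding overhead, which is where the multi-module parallelism along the die edge pays off.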
Sarcina has extensive expertise in designing high-power, high-performance semiconductor packages. This allows semiconductor startups to focus on developing efficient algorithms for GenAI and Edge AI training without the need for an expensive post-silicon design and manufacturing team. Startups can simply develop their silicon and pass it to Sarcina for post-silicon packaging, streamlining the process and reducing costs while maintaining high performance. Sarcina’s die-to-die interposer solution enables AI customers to use chiplets to form large silicon areas, supporting high-performance computing with satisfactory wafer yields. This large package design allows for the integration of more memory, which is crucial for GenAI applications that require rapid, parallel data processing.
Key features of the new Sarcina AI platform:
- Cost-effective chiplet design: a cost-efficient alternative to expensive SoC solutions
- Fast die-to-die interconnection with the UCIe-A standard: up to a 64-bit data interface per module and 32 GT/s transmission speed per lane. Supports multi-module configurations, interconnect redundancy and sideband configurations as specified by the UCIe 2.0 standard
- FOCoS-CL advanced packaging technology: a cost-effective replacement for costly 2.5D TSV (Through-Silicon Via) silicon interposer technology, as well as other expensive solutions like silicon bridge die with fan-out RDL interconnections
- LPDDR5X/6 and HBM options: provides superior memory bandwidth and efficiency to support a variety of AI workloads. LPDDR6 memory also incorporates 3D stacking technology similar to HBM, achieving data rates of approximately 10 GT/s
- Scalable package size: supports package sizes up to 100 x 100 mm, enabling scalability for diverse AI applications
- Power specifications: below 500 W with forced-air cooling and up to 1,000 W with liquid cooling, offering flexible deployment options
- Memory integration: supports up to 20 LPDDR5X/6 memory chips or up to eight HBM3E chips, facilitating high-speed data processing for AI workloads
The launch of Sarcina’s AI platform is set to transform AI computing capabilities across industries such as autonomous systems, data centres, and scientific computing.
Sarcina Technology is also a leading provider of silicon photonic semiconductor packaging and testing solutions. Meet the team at booth 3019 at the Optical Fiber Communication Conference (OFC) in San Francisco, CA, USA (30th March-3rd April 2025).