Can chiplets maintain the momentum of IC design?
When Rambus was founded 30 years ago, RAM cost $98 per megabyte, and the latest Intel processor, the 80486, had 1.2 million transistors and was fabricated on a 1µm process. In comparison, in September 2020, RAM cost $0.0028 per megabyte, and the latest Nvidia A100 AI processor has 54 billion transistors and is fabricated on a 7nm process. Gary Bronner, Senior Vice President of Rambus Labs, explains more.
Not only does this demonstrate how far the semiconductor industry has advanced processor and memory technology in the past 30 years, it also underscores the astronomical increase in the number of transistors that can be built into an integrated circuit (IC). This increase has generally been in line with both Moore’s Law and Dennard Scaling.
In 1965, Gordon Moore observed that the number of transistors on a silicon chip would double roughly every two years - the observation now known as Moore’s Law. Satisfying it required ongoing advancements in technology that improved performance, power, and density, netting a continual reduction in cost per function. Almost ten years later, in 1974, in a now-famous technical paper for the IEEE Journal of Solid-State Circuits, Robert Dennard presented a path for the technology advancements needed to satisfy Moore’s Law.
Dennard observed that transistor dimensions could be scaled down by 30% every technology generation, reducing their area by 50%. This would cut circuit delays by 30% and so increase operating frequency by about 40%. Finally, to keep the electric field constant, the voltage is reduced by 30%, cutting switching energy by 65% and power (at 1.4x the frequency) by 50%. Therefore, in every technology generation, if transistor density doubles, the circuit becomes 40% faster and power consumption (with twice the number of transistors) stays the same. In combination, these observations predicted that performance per watt would double every 18 months. Both have had a huge impact on the semiconductor industry, providing a roadmap that drove innovation and the continued advancement of modern-day technologies.
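For readers who want the arithmetic behind those percentages, the constant-field scaling relations can be written out for a per-generation scaling factor k ≈ 0.7 (a standard textbook restatement of Dennard’s result, not a quotation from the original paper):

```latex
% Constant-field (Dennard) scaling with k ~= 0.7 per generation
\begin{aligned}
\text{Area}                  &\propto k^{2} \approx 0.49 && \text{(about 50\% smaller)}\\
\text{Delay}                 &\propto k \approx 0.7,\qquad f \propto 1/k \approx 1.4 && \text{(about 40\% faster)}\\
\text{Voltage, capacitance}  &\propto k &&\\
\text{Energy per switch}     &\propto C V^{2} \propto k^{3} \approx 0.34 && \text{(about 65\% lower)}\\
\text{Power per circuit}     &\propto C V^{2} f \propto k^{2} \approx 0.49 && \text{(about 50\% lower)}\\
\text{Power density}         &\propto \frac{C V^{2} f}{\text{Area}} = \frac{k^{2}}{k^{2}} = 1 && \text{(constant)}
\end{aligned}
```

Doubling the transistor count in the same area therefore leaves total power unchanged, which is exactly the performance-per-watt gain described above.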
Whilst there has been rapid progress over the past 30 years guided by these observations, with node sizes continuing to shrink and the ability now to deliver high-end devices on processes as small as 5nm, it is widely accepted that Moore’s Law and Dennard Scaling have broken down - and are no longer guides to the pace of continued progress. The difficulties that the semiconductor industry now faces with achieving smaller nodes can be seen in the lengthening time that it takes to deliver complex designs.
This is not to say that transistor counts in processors are not increasing - they are. However, with performance per watt plateauing, raising operating frequencies has become extremely challenging: it creates more waste heat and increases noise to levels that are intolerable at the lower switching voltages necessary on smaller geometries.
As a result of these challenges, many processor manufacturers have adopted multi-core approaches to increase performance. However, packing more cores onto one die makes that die larger, and a large die is more costly - partly because of the increased risk of flaws, and partly because of the challenge of routing interconnects between multiple cores and their supporting memory. Peripheral device buses are also increasingly complex and costly.
To maintain the pace of progress, System on a Chip (SoC) manufacturers are turning to building systems inside a single package from several smaller die, known as chiplets. Chiplets are not a new concept; however, the model is becoming increasingly popular as the industry is forced to look for alternative solutions.
A number of SoC manufacturers have already acknowledged the use of chiplets in their designs. For example, AMD announced its chiplet approach back in 2018 and the use of this architecture in its Zen 2 chips. TSMC also revealed in 2019 that it was preparing for chiplet-style production in 2021, which will no doubt see the usage of chiplet-based designs surge. The market is already anticipating this surge: in a report released in May 2020, market analyst firm Omdia predicted that the chiplet market will grow from the $645m reported in 2018 to $1.81bn by 2022, with this trajectory set to be sustained.
What are the advantages of chiplets for SoC design?
Chiplets present solutions to the wider issues of monolithic SoC devices - that is, the challenge of practically fitting and routing a design into the available silicon area. Scaling designs down is not simple: the cost and engineering time required to move to smaller process nodes grow with every node migration.
Chiplets circumvent the need to scale all parts of a design to smaller nodes and offer higher process yield than a single large monolithic IC at a smaller node. Furthermore, by disaggregating IC functions, chiplets can reduce overall system complexity. With multiple chiplets, the number of transistors in the package can continue to increase whilst overall design time is reduced, translating into a quicker time to market.
Probe-testing chiplets to identify any issues prior to assembly can further reduce manufacturing wastage: if a chiplet is flawed at this point, it can be discarded and swapped out at a far lower cost than discarding a larger monolithic integrated device. In combination, these features lower overall design risk and cost compared to a large monolithic device.
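The yield advantage can be illustrated with a simple back-of-the-envelope model. The sketch below compares one large monolithic die with four quarter-sized chiplets using a Poisson defect-density yield model; the die areas and defect density are illustrative assumptions, not figures from this article.

```python
import math

def poisson_yield(area_cm2: float, defects_per_cm2: float) -> float:
    """Simple Poisson yield model: Y = exp(-A * D0)."""
    return math.exp(-area_cm2 * defects_per_cm2)

# Illustrative assumptions (not figures from the article):
D0 = 0.2            # defects per cm^2
monolithic = 6.0    # cm^2 for one large monolithic die
chiplet = 1.5       # cm^2 each; four chiplets cover the same total area

y_mono = poisson_yield(monolithic, D0)
y_chip = poisson_yield(chiplet, D0)

print(f"Monolithic die yield: {y_mono:.1%}")   # ~30%
print(f"Single chiplet yield: {y_chip:.1%}")   # ~74%
# Because flawed chiplets are screened out before packaging (known-good-die
# testing), the usable silicon per wafer scales with the per-die yield:
print(f"Usable silicon advantage: {y_chip / y_mono:.1f}x")   # ~2.5x
```

Real defect densities and die sizes vary widely by process and product, but the exponential dependence of yield on die area is what makes several small die cheaper to manufacture than one large one.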
Additionally, chiplets make it possible to match each function to its optimal process node: logic benefits from being on the leading-edge node, but mixed-signal elements can be implemented more effectively on larger geometries. Most importantly, chiplets allow SoCs to scale beyond the reticle limit that constrains the design of a large monolithic IC.
The challenges of chiplets
Whilst chiplets are gaining significant traction, with the market predicted to soar and the benefits to advancing technology clear, they present a number of challenges that SoC designers, and the wider industry, need to overcome.
One of the biggest challenges that chiplets present is that they consume more power than a monolithic IC. This is because the high-speed internal buses used within a monolithic IC are replaced by high-speed serial links to interconnect the chiplets. Even over the very short distances between die inside the package, the power consumption of these interconnects is higher. For SoC designers, careful consideration of thermal design and performance is essential to manage the power budget and the heat dissipated.
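To see how this shows up in a power budget, the sketch below works through the interconnect power for an assumed aggregate die-to-die bandwidth and assumed energy-per-bit figures; all of the numbers are hypothetical, chosen only to illustrate the calculation, not taken from any specific product or interface standard.

```python
# Back-of-the-envelope die-to-die link power (all figures are illustrative
# assumptions, not product data).

def link_power_watts(bandwidth_gbps: float, pj_per_bit: float) -> float:
    """Power = bandwidth (bits/s) x energy per bit (J/bit)."""
    return bandwidth_gbps * 1e9 * pj_per_bit * 1e-12

bandwidth_gbps = 500          # assumed aggregate die-to-die traffic
on_die_pj_per_bit = 0.1       # assumed cost of moving a bit on an internal bus
die_to_die_pj_per_bit = 0.8   # assumed cost over a packaged serial link

print(f"On-die bus:      {link_power_watts(bandwidth_gbps, on_die_pj_per_bit):.2f} W")    # 0.05 W
print(f"Die-to-die link: {link_power_watts(bandwidth_gbps, die_to_die_pj_per_bit):.2f} W") # 0.40 W
```

Even a sub-picojoule-per-bit link adds a few tenths of a watt for every few hundred gigabits per second of traffic, and that power must be found within the package’s thermal envelope.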
Additionally, whilst it is possible to probe-test chiplet components before the system is built, once a chiplet architecture has been assembled a new question arises: do you test the installed chiplets one at a time, or test the system in its entirety? If there is an issue with one chiplet, it could ruin the whole unit. This is why testing after the chiplets have been assembled into an SoC - so that faults can be identified and chiplets replaced - is necessary to overcome this challenge.
Potential future security threats in high-value applications also need to be considered. Dividing an SoC into multiple chiplets increases the attack surface, which adversaries could exploit to bypass safeguards protecting data and hardware. In high-value, high-security applications, every chiplet should incorporate anti-counterfeiting hardware for authentication, along with anti-tamper protections, to ensure that a single ‘weak link’ does not compromise the security of the entire SoC - for example by allowing ‘man in the middle’ attacks, or invalid firmware code to be loaded via a compromised chiplet.
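As a purely conceptual illustration of per-chiplet authentication, the sketch below shows a minimal challenge-response exchange between a host die and a chiplet sharing a secret key; the key provisioning, HMAC construction and identifiers are assumptions for illustration and do not describe any real chiplet security standard.

```python
# Minimal challenge-response authentication sketch between a host die and a
# chiplet. Key provisioning and message format are illustrative assumptions.
import hashlib
import hmac
import os

class Chiplet:
    def __init__(self, chiplet_id: bytes, secret_key: bytes):
        self.chiplet_id = chiplet_id
        self._key = secret_key  # assumed to be provisioned at manufacture

    def respond(self, challenge: bytes) -> bytes:
        # Prove knowledge of the key without ever transmitting it.
        return hmac.new(self._key, self.chiplet_id + challenge, hashlib.sha256).digest()

class HostDie:
    def __init__(self, trusted_keys: dict):
        self.trusted_keys = trusted_keys  # chiplet_id -> expected shared key

    def authenticate(self, chiplet: Chiplet) -> bool:
        challenge = os.urandom(32)  # a fresh nonce defeats replayed responses
        expected = hmac.new(self.trusted_keys[chiplet.chiplet_id],
                            chiplet.chiplet_id + challenge, hashlib.sha256).digest()
        return hmac.compare_digest(expected, chiplet.respond(challenge))

key = os.urandom(32)
host = HostDie({b"dram-chiplet-0": key})
print(host.authenticate(Chiplet(b"dram-chiplet-0", key)))            # True: genuine part
print(host.authenticate(Chiplet(b"dram-chiplet-0", os.urandom(32)))) # False: wrong key
```

In practice, production schemes would use asymmetric credentials, secure key storage and protection of the firmware-loading path as well, but even this simple exchange shows how a counterfeit or tampered chiplet could be rejected at boot.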
As chiplets are relatively new, they have a limited track record of ‘time in the field’. For regulated sectors where proven high reliability is essential - such as automotive and healthcare - chiplet technology is not yet at the requisite level of maturity. A whitepaper published by Cambridge Consultants in 2019 noted that the need for high reliability in such industries stems from their stringent regulatory requirements. Over time, such challenges can be overcome, and this progress will undoubtedly be supported by chiplets’ growing prevalence in other fields.
The future of chiplets
As noted, chiplets are gaining traction and the semiconductor industry is seizing the possibilities they present. Whilst there are challenges, chiplets offer great promise for meeting the demands of technologies such as AI/ML training, real-time inference, 5G base stations, and high-performance computing (HPC). Chiplets are already enabling use cases and workloads across these markets, and with their lower cost and fast time to market, they are especially attractive for such rapidly evolving applications.
In the mid-term, there is potential for chiplet-based solutions such as providing more in-package memory by assembling an SoC with additional or higher-capacity DRAM chiplets.
In the long-term, if interfacing standards can be agreed and proven, there is the potential for SoC designers to purchase off-the-shelf chiplets for integration into their designs. This would make chiplets more accessible to product designers who do not have the resources to develop them, whilst also paving the way to mixing and matching chiplets from various vendors. The Open Domain-Specific Architecture (ODSA) subproject of the Open Compute Project (OCP) is currently working on the standardisation of a chiplet-based architecture, and is also focusing on chiplet design exchange.
Chiplet technology is an exciting solution to help overcome the deceleration in pace that the industry has been experiencing due to the slowing of Moore’s Law and Dennard Scaling. Whilst the design and manufacturing challenges that chiplets present should not be underestimated, they offer the options and advantages necessary to continue the momentum of IC design.