TechInsights releases analysis of NVIDIA Blackwell
TechInsights has released early-stage findings from its teardown analysis of the NVIDIA Blackwell HGX B200 platform, which delivers advanced artificial intelligence (AI) and high-performance computing (HPC) performance in the data centre.
TechInsights has reported that SK hynix supplies the high-bandwidth memory (HBM3E) and that the GB100 graphics processing unit (GPU) implements TSMC’s latest advanced packaging architecture.
“Our analysts, technicians, and engineers have already identified and captured images of some of the sought-after innovations within the accelerator’s GB100 GPUs,” said Cameron McKnight-McNeil, Process Analyst, TechInsights. “The Blackwell product line is the world’s most advanced chipset that NVIDIA developed for the ‘generative AI’ era.”
The GB100 features eight HBM stacks co-packaged with two reticles’ worth of TSMC silicon. TechInsights has confirmed that NVIDIA uses SK hynix’s latest extended HBM (HBM3E) in the GB100 GPU. Each of the eight HBM packages features eight memory dies stacked in a true 3D configuration, with a separate controller die under the memory stack.
The maximum 192GB HBM specification of the GB100, divided across 64 DRAM dies (eight dies in each of the eight stacks), means each DRAM die’s capacity is 3GB. That represents an increase in per-die capacity of 50% over the previous generation of HBM. More detailed analysis continues, including node identifications of these new memory dies and the supporting controller die.
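As a quick sanity check, the capacity arithmetic above can be sketched as follows (the 2GB previous-generation per-die figure is inferred from the stated 50% increase, not reported by TechInsights):

```python
# Figures from the article: eight HBM3E stacks, eight DRAM dies per stack,
# 192GB maximum HBM capacity on the GB100.
HBM_STACKS = 8
DIES_PER_STACK = 8
TOTAL_CAPACITY_GB = 192

total_dram_dies = HBM_STACKS * DIES_PER_STACK     # 64 DRAM dies in total
per_die_gb = TOTAL_CAPACITY_GB / total_dram_dies  # 3 GB per DRAM die

# Assumed previous-generation per-die capacity, implied by the 50% claim.
prev_per_die_gb = 2
increase = (per_die_gb - prev_per_die_gb) / prev_per_die_gb

print(f"{total_dram_dies} dies, {per_die_gb:g} GB per die, +{increase:.0%} vs prior HBM")
# → 64 dies, 3 GB per die, +50% vs prior HBM
```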
The GB100 GPU is NVIDIA’s latest-generation accelerator, promising significant performance gains over the previous-generation Hopper devices. Each accelerator comprises two GPU dies built on a TSMC 4 nanometer (nm) process node. This represents a near doubling of the GPU die area versus Hopper and notably influences the package housing the dies.
The GB100 also includes the first instance of CoWoS-L, the variant of TSMC’s chip-on-wafer-on-substrate (CoWoS) packaging technology that uses local silicon interconnect bridge dies. The TechInsights team will continue to analyse this new packaging technology and is working on an in-depth report detailing the interconnect and packaging of the GB100 GPU.
Launched in March 2024, NVIDIA’s HGX B200 is a server board that links eight GB100 GPUs through NVLink to support x86-based generative AI platforms. The HGX B200 supports networking speeds up to 400Gb/s through the NVIDIA Quantum-2 InfiniBand and Spectrum-X Ethernet networking platforms. The GB100 is also NVIDIA’s first GPU to use multiple processor dies in a single package.