
Announced in March 2022 and shipping later that year, the H100 Tensor Core GPU packs 80 billion transistors on TSMC's custom 4N process (a 5nm-class node). It delivers roughly 3.35TB/s of HBM3 memory bandwidth and introduces fourth-generation Tensor Cores with FP8 precision, which Nvidia claims yield up to 30x faster inference on large transformer models versus the A100.
The GH100 GPU die is designed by Nvidia and manufactured by TSMC in Taiwan on the custom 4N process. The massive 814mm² die contains 80 billion transistors and uses TSMC's CoWoS-S advanced packaging to integrate with the HBM3 memory stacks on a silicon interposer.
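To put those die figures in perspective, a quick calculation using only the numbers quoted above gives the average transistor density:

```python
# Average transistor density of the GH100 die, from the figures above.
TRANSISTORS = 80e9      # 80 billion transistors
DIE_AREA_MM2 = 814      # 814 mm^2 die area

density_m_per_mm2 = TRANSISTORS / DIE_AREA_MM2 / 1e6  # millions per mm^2
print(f"~{density_m_per_mm2:.1f}M transistors/mm^2")
# → ~98.3M transistors/mm^2
```

This is an average across the whole die; real density varies between logic, SRAM, and I/O regions.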
H100 uses 80GB of HBM3 memory supplied by SK Hynix and Samsung, both manufactured in South Korea. Six stack sites sit on the interposer, of which five 16GB 8-Hi stacks are enabled, delivering about 3.35TB/s of bandwidth through a 5120-bit memory interface.
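As a back-of-the-envelope check on those memory figures: each HBM stack exposes a 1024-bit interface, so five enabled stacks yield the 5120-bit bus, and bandwidth follows from bus width times per-pin data rate. The ~5.2Gb/s pin rate below is an assumed HBM3-class value, not a figure from this text:

```python
# Rough sanity check of the H100 HBM3 figures quoted above.
# The per-pin data rate (5.2 Gb/s) is an assumed HBM3-class value.

STACKS_ENABLED = 5          # five active 16GB 8-Hi HBM3 stacks
STACK_CAPACITY_GB = 16
BITS_PER_STACK = 1024       # each HBM stack has a 1024-bit interface
PIN_RATE_GBPS = 5.2         # assumed per-pin data rate in Gb/s

bus_width_bits = STACKS_ENABLED * BITS_PER_STACK      # 5120-bit interface
capacity_gb = STACKS_ENABLED * STACK_CAPACITY_GB      # 80 GB total
bandwidth_gbs = bus_width_bits * PIN_RATE_GBPS / 8    # GB/s

print(f"{bus_width_bits}-bit bus, {capacity_gb} GB, ~{bandwidth_gbs:.0f} GB/s")
# → 5120-bit bus, 80 GB, ~3328 GB/s
```

The result lands close to the ~3.35TB/s spec, which is why bandwidth scales directly with the number of enabled stacks.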
H100 components are sourced globally: GH100 GPU from Taiwan (TSMC), HBM3 memory from South Korea (SK Hynix/Samsung), ABF substrates from Japan (Ibiden/Shinko Electric), power components from USA/Germany (Infineon/ON Semi), with final assembly by server OEMs worldwide.
H100 uses advanced ABF (Ajinomoto Build-up Film) package substrates primarily from Ibiden in Japan, with secondary suppliers including Shinko Electric in Japan and Unimicron and Nan Ya PCB in Taiwan. The 14-16 layer FCBGA substrate is critical for high-speed signal integrity.
Key H100 suppliers include TSMC (GPU fabrication and CoWoS packaging), SK Hynix/Samsung (HBM3 memory), Ibiden (substrates), Monolithic Power Systems/Infineon (power delivery), Amphenol/Molex (NVLink connectors), and Boyd/Mikros (cooling solutions).