NVIDIA GH200 Grace Hopper Superchip

The breakthrough design for giant-scale AI and HPC applications.

Higher Performance and Faster Memory—Massive Bandwidth for Compute Efficiency

The NVIDIA GH200 Grace Hopper Superchip is a breakthrough processor designed from the ground up for giant-scale AI and high-performance computing (HPC) applications. The superchip delivers up to 10X higher performance for applications processing terabytes of data, enabling scientists and researchers to reach unprecedented solutions for the world's most complex problems.

Take a Closer Look at the Superchip

NVIDIA GH200 Grace Hopper Superchip

The NVIDIA GH200 Grace Hopper Superchip combines the NVIDIA Grace and Hopper architectures using NVIDIA® NVLink®-C2C to deliver a CPU+GPU coherent memory model for accelerated AI and HPC applications. With a coherent interface that delivers 900 gigabytes per second (GB/s) of bandwidth, the superchip is 7X faster than PCIe Gen5. And with HBM3 and HBM3e GPU memory, it supercharges accelerated computing and generative AI. GH200 runs all NVIDIA software stacks and platforms, including NVIDIA AI Enterprise, the HPC SDK, and Omniverse.
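The "7X faster than PCIe Gen5" claim can be sanity-checked with simple arithmetic. A minimal sketch, assuming a PCIe Gen5 x16 link moves roughly 63 GB/s per direction (about 126 GB/s bidirectional) — that per-direction figure is an assumption, not stated on this page:

```python
# Back-of-the-envelope check of the "7X faster than PCIe Gen5" claim.
NVLINK_C2C_GBPS = 900        # total bidirectional NVLink-C2C bandwidth (from this page)
PCIE_GEN5_X16_GBPS = 2 * 63  # assumed bidirectional bandwidth of a PCIe Gen5 x16 link

speedup = NVLINK_C2C_GBPS / PCIE_GEN5_X16_GBPS
print(f"NVLink-C2C vs. PCIe Gen5 x16: ~{speedup:.1f}X")  # ~7.1X
```

Under that assumption the ratio works out to roughly 7.1, which NVIDIA rounds to 7X.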


NVIDIA GH200 NVL2

The NVIDIA GH200 NVL2 fully connects two GH200 Superchips with NVLink, delivering up to 288GB of high-bandwidth memory, 10 terabytes per second (TB/s) of memory bandwidth, and 1.2TB of fast memory. Available today, the GH200 NVL2 offers up to 3.5X more GPU memory capacity and 3X more bandwidth than the NVIDIA H100 Tensor Core GPU in a single server for compute- and memory-intensive workloads.
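The NVL2 aggregate figures follow from doubling the per-superchip specs. A rough sketch, assuming the HBM3e variant of GH200 with 144GB of HBM3e at about 4.9 TB/s plus 480GB of LPDDR5X on the Grace CPU per superchip — those per-superchip numbers are assumptions, not stated on this page:

```python
# Deriving the GH200 NVL2 aggregate figures from assumed per-superchip specs.
HBM_PER_SUPERCHIP_GB = 144       # assumed: 144GB HBM3e per GH200
HBM_BW_PER_SUPERCHIP_TBPS = 4.9  # assumed: ~4.9 TB/s HBM3e bandwidth per GH200
LPDDR_PER_SUPERCHIP_GB = 480     # assumed: 480GB LPDDR5X per Grace CPU

superchips = 2
hbm_total_gb = superchips * HBM_PER_SUPERCHIP_GB          # 288GB of HBM
hbm_bw_total = superchips * HBM_BW_PER_SUPERCHIP_TBPS     # ~9.8, rounded up to "10 TB/s"
fast_memory_gb = superchips * (HBM_PER_SUPERCHIP_GB + LPDDR_PER_SUPERCHIP_GB)  # 1,248GB ~ 1.2TB

print(hbm_total_gb, round(hbm_bw_total, 1), fast_memory_gb)
```

Under these assumptions the totals line up with the 288GB, 10 TB/s, and 1.2TB figures quoted above: "fast memory" counts both the HBM and the CPU's LPDDR5X, which the GPU can address coherently over NVLink-C2C.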

Explore LaunchPad Labs with GH200

Accelerate Computing and AI With Grace Hopper

In this demo, you'll experience seamless integration of the NVIDIA GH200 Grace Hopper Superchip with NVIDIA's software stacks. It includes interactive demos, real-world applications, and case studies, including large language model (LLM) workloads.

Explore Grace Hopper Reference Design for Modern Data Center Workloads

NVIDIA MGX With GH200

For AI training, inference, 5G, and HPC.

- NVIDIA GH200 Grace Hopper Superchip
- NVIDIA BlueField®-3
- OEM-defined input/output (IO)
- Fourth-generation NVLink

NVIDIA provides in-depth support for NVIDIA Grace with performance-tuning guides, developer tools, and libraries.