Tools · Apr 22, 2026

NVIDIA Details Infrastructure Behind Latest OpenAI Models and Benchmarks

OpenAI's GPT-5.2 and GPT-5.3 Codex were trained and deployed on NVIDIA's Hopper and GB200 systems, with published benchmark results and claims of performance gains over prior generations.

Trust: 52
Hype: Some hype

1 source · cross-referenced

TL;DR
  • OpenAI released GPT-5.2 in December and GPT-5.3 Codex in February, both trained on NVIDIA infrastructure including Hopper and GB200 NVL72 systems.
  • GPT-5.2 achieved top scores on GPQA-Diamond, AIME 2025, and Tau2 Telecom benchmarks, while GPT-5.3 Codex set new highs on SWE-Bench Pro and Terminal-Bench.
  • NVIDIA claims GB200 NVL72 delivers 3x faster training than Hopper on MLPerf benchmarks and nearly 2x better performance per dollar.
  • The post highlights support for multiple AI modalities beyond language models, including genetic sequencing, protein structure prediction, and video generation models like Runway's Gen-4.5.
  • NVIDIA Blackwell is available through major cloud providers including AWS, Google Cloud, Microsoft Azure, and others.

OpenAI's latest model releases continue the company's dependence on specialized GPU infrastructure. GPT-5.2, described by OpenAI as its most capable model for professional knowledge work, was trained and deployed on NVIDIA Hopper and GB200 NVL72 systems. The more recent GPT-5.3 Codex, an agentic coding model released in February, ran entirely on GB200 hardware. Both models have published benchmark results: GPT-5.2 posted reported top scores on GPQA-Diamond, AIME 2025, and Tau2 Telecom, while GPT-5.3 Codex set new highs on the SWE-Bench Pro and Terminal-Bench evaluations.

NVIDIA's comparison metrics show measurable generational improvements in its own hardware stack. According to the company's MLPerf Training submission, GB200 NVL72 systems demonstrated 3x faster training performance than Hopper on the largest tested models, alongside claims of nearly 2x better performance per dollar. NVIDIA's upcoming GB300 platform shows further gains, claiming more than 4x speedup versus Hopper. These claims rest on standardized industry benchmarks, though they reflect NVIDIA's own testing environment.
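The two headline numbers are consistent with each other: if a GB200 run is 3x faster but delivers only ~2x the performance per dollar, the implied cost of the GB200 run is about 1.5x that of the Hopper run. A minimal back-of-envelope sketch (the 1.5x cost ratio is derived here, not a figure NVIDIA published):

```python
def implied_cost_ratio(speedup: float, perf_per_dollar_gain: float) -> float:
    """Cost ratio (new hardware / old hardware) implied by a speedup
    and a performance-per-dollar gain.

    perf_per_dollar scales as speedup / cost, so:
        perf_per_dollar_gain = speedup / cost_ratio
        cost_ratio = speedup / perf_per_dollar_gain
    """
    return speedup / perf_per_dollar_gain


# NVIDIA's claimed figures: 3x faster training, ~2x performance per dollar.
ratio = implied_cost_ratio(speedup=3.0, perf_per_dollar_gain=2.0)
print(f"Implied GB200-vs-Hopper run cost: {ratio:.2f}x")  # 1.50x
```

The same identity can sanity-check any vendor's paired speed and cost-efficiency claims against list pricing.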

The infrastructure landscape extends beyond language models. NVIDIA highlighted support for models across multiple modalities: genetic sequence analysis (Evo 2), protein structure prediction (OpenFold3), drug interaction simulation (Boltz-2), and medical imaging synthesis through its Clara platform. Video generation models from Runway, including Gen-4.5 (positioned as the top-rated model on Artificial Analysis leaderboards), were trained entirely on NVIDIA GPUs. A separate Runway announcement describes GWM-1, a general world model also trained on Blackwell hardware.

NVIDIA Blackwell availability spans major cloud infrastructure providers. The company lists AWS, Google Cloud, Microsoft Azure, CoreWeave, Lambda, Oracle Cloud Infrastructure, and Together AI as offering Blackwell-powered instances. This distribution reflects both competitive pressure in cloud AI infrastructure and the centrality of NVIDIA's hardware to current model training workflows.

Sources
  1. NVIDIA Deep Learning Blog — "As AI Grows More Complex, Model Builders Rely on NVIDIA"

Stories may contain errors. Dispatch is assembled with AI assistance and curated by human editors; despite the trust-score filter, mistakes happen. We correct publicly — every article links to its revision history. Nothing here is financial, legal, or medical advice. Verify before relying on any claim.

© 2026 Dispatch. No ads. No sponsorships. No paid placement. Reader-supported via Ko-fi.

Built by a person who cares about honest AI news.