NVIDIA Blackwell platform sets new performance benchmark in MLPerf Inference v5.0

NVIDIA’s GB200 NVL72 sets a new benchmark in MLPerf Inference v5.0, delivering up to 30x the token throughput of the prior generation and leading AI factory performance.

In the latest MLPerf Inference v5.0 results, NVIDIA has once again established itself as a leader in AI inference performance. The benchmark results highlight significant advances in speed and efficiency, particularly with the debut of the NVIDIA GB200 NVL72 system — a rack-scale platform designed to meet the growing demands of AI factories.

AI factories, unlike traditional data centres, are engineered to transform raw data into real-time intelligence. They rely heavily on advanced infrastructure to process massive and complex AI models, which often contain billions or even trillions of parameters. These increasingly sophisticated models raise the computational demands and cost per token, making it essential to optimise across the entire tech stack — from hardware and networking to software.

GB200 NVL72 delivers unmatched performance on large language models

The GB200 NVL72 system connects 72 Blackwell GPUs to act as a single, massive GPU, delivering up to 30 times the token throughput of the previous H200 NVL8 configuration on the new Llama 3.1 405B benchmark. The gain comes from more than triple the performance per GPU and a ninefold increase in NVLink domain size (72 GPUs versus eight). Notably, NVIDIA and its partners were the only submitters on the Llama 3.1 405B benchmark, underlining the company’s leadership in tackling the most demanding inference workloads.
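
As a rough sanity check on those multipliers, a ninefold larger NVLink domain combined with a per-GPU gain of a bit over 3x is consistent with the headline 30x figure. The sketch below is illustrative arithmetic only, using the publicly stated numbers, not an official NVIDIA calculation:

```python
# Illustrative arithmetic behind the headline figures (assumption-based,
# not an official NVIDIA breakdown): a 9x larger NVLink domain times a
# per-GPU gain of "more than 3x" is consistent with the claimed 30x.
h200_nvl8_gpus = 8            # NVLink domain of the prior configuration
gb200_nvl72_gpus = 72         # NVLink domain of the GB200 NVL72 rack

domain_growth = gb200_nvl72_gpus / h200_nvl8_gpus   # 9.0x
implied_per_gpu_gain = 30 / domain_growth           # ~3.3x

print(f"NVLink domain growth: {domain_growth:.0f}x")
print(f"Per-GPU gain implied by 30x overall: {implied_per_gpu_gain:.1f}x")
```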

In real-world AI applications, two latency metrics dominate user experience: time to first token (TTFT) and time per output token (TPOT). To reflect these constraints, MLPerf introduced the Llama 2 70B Interactive benchmark, which requires a TPOT five times faster and a TTFT 4.4 times lower than the original Llama 2 70B test. On this more stringent benchmark, a DGX B200 system with eight Blackwell GPUs achieved triple the performance of an eight-GPU H200 system.
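
For readers unfamiliar with the two metrics, the minimal sketch below shows how TTFT and TPOT are typically computed from the timestamps of a streamed response. It is a generic definition, not the MLPerf measurement harness, and the example numbers are made up:

```python
from dataclasses import dataclass

@dataclass
class StreamedResponse:
    """Timestamps (in seconds) observed for one streamed LLM response."""
    request_sent: float
    first_token_at: float
    last_token_at: float
    output_tokens: int

def ttft(r: StreamedResponse) -> float:
    # Time to first token: how long the user waits before output starts.
    return r.first_token_at - r.request_sent

def tpot(r: StreamedResponse) -> float:
    # Time per output token: average gap between tokens after the first.
    return (r.last_token_at - r.first_token_at) / max(r.output_tokens - 1, 1)

# Made-up example: first token after 450 ms, then 200 more tokens in 10 s.
r = StreamedResponse(request_sent=0.0, first_token_at=0.45,
                     last_token_at=10.45, output_tokens=201)
print(f"TTFT = {ttft(r) * 1000:.0f} ms, TPOT = {tpot(r) * 1000:.0f} ms")
```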

These achievements show how the Blackwell platform, combined with NVIDIA’s full software stack, can dramatically boost inference performance. This not only enables faster AI responses but also supports higher throughput and more cost-effective deployments for AI factories.

Hopper platform continues to show growth and versatility

While Blackwell took centre stage, NVIDIA’s Hopper architecture also posted strong results in this round of testing. Since MLPerf Inference v4.0 a year ago, H100 GPU throughput on the Llama 2 70B benchmark has grown to 1.5 times its earlier level. The H200 GPU, which builds on Hopper with larger, faster memory, raised this to 1.6 times. Hopper successfully ran every test in the benchmark suite, including the new Llama 3.1 405B and Llama 2 70B Interactive tests, demonstrating its flexibility and continued relevance for a broad range of workloads.

One standout result came from a B200-based system achieving over 59,000 tokens per second on the Llama 2 70B Interactive benchmark — tripling the throughput of the H200 configuration. Similarly, on the Llama 3.1 405B test, the GB200 NVL72 system pushed performance up to 13,886 tokens per second, marking a 30x increase compared to previous results.

AI ecosystem support and inference software advances

This round of MLPerf testing saw participation from 15 NVIDIA partners, including major industry names like ASUS, Dell Technologies, Google Cloud, Lenovo, and Oracle Cloud Infrastructure. These wide-ranging submissions reflect the availability and scalability of the NVIDIA platform across global server manufacturers and cloud providers.

NVIDIA also highlighted the role of its AI inference software, including the new NVIDIA Dynamo platform. Dynamo serves as an “operating system” for AI factories, offering smart routing, GPU planning, and KV-cache offloading for large-scale inference operations. The software enables innovations such as disaggregated serving — separating the prefill and decode stages of model execution — which allows for independent optimisation of performance and cost.
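
To make the idea of disaggregated serving concrete, here is a minimal, self-contained toy in Python. All names are hypothetical and the "model" is a stub; this is not the Dynamo API, only an illustration of why splitting the two stages lets each be scaled and optimised independently:

```python
# Conceptual toy of disaggregated serving (hypothetical names; not the
# NVIDIA Dynamo API). Prefill is compute-bound: one pass over the whole
# prompt builds a KV cache. Decode is bandwidth-bound: it streams tokens
# one at a time from that cache. In a disaggregated deployment the two
# stages run on separate GPU pools, with the KV cache handed between them.
from dataclasses import dataclass, field

@dataclass
class KVCache:
    # Stand-in for per-layer key/value tensors; here just token history.
    tokens: list[int] = field(default_factory=list)

def prefill(prompt: list[int]) -> KVCache:
    """Compute-bound stage: process the full prompt, producing a KV cache."""
    return KVCache(tokens=list(prompt))

def decode(cache: KVCache, max_new_tokens: int) -> list[int]:
    """Bandwidth-bound stage: generate one token per step from the cache."""
    out = []
    for _ in range(max_new_tokens):
        nxt = (sum(cache.tokens) + len(cache.tokens)) % 50_000  # toy "model"
        cache.tokens.append(nxt)
        out.append(nxt)
    return out

# In production these two calls would run on different workers.
cache = prefill([101, 2023, 2003, 102])
print(decode(cache, max_new_tokens=5))
```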

With tools like TensorRT-LLM and support for models such as DeepSeek-R1, NVIDIA continues to expand inference capabilities without compromising accuracy. A notable result was the deployment of the first FP4 DeepSeek-R1 model on the DGX B200, which sustained more than 250 tokens per second per user at low latency while maintaining accuracy.
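
For a sense of how such models are served in practice, the sketch below uses TensorRT-LLM's high-level LLM API. The model identifier is a placeholder and the FP4 quantisation setup is omitted, so treat it as a minimal starting point rather than the benchmarked configuration:

```python
# Minimal sketch using TensorRT-LLM's high-level LLM API. The model name
# is a placeholder, and quantisation setup (e.g. FP4) is omitted; this is
# not the configuration NVIDIA benchmarked.
from tensorrt_llm import LLM, SamplingParams

llm = LLM(model="TinyLlama/TinyLlama-1.1B-Chat-v1.0")  # placeholder model
params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

outputs = llm.generate(["What is an AI factory?"], params)
for out in outputs:
    print(out.outputs[0].text)
```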

NVIDIA’s continued dominance in MLPerf benchmarks underscores its focus on delivering full-stack AI performance. As larger models become the norm, and the need for high-speed, cost-efficient inference rises, NVIDIA’s Blackwell and Hopper platforms, backed by a strong partner ecosystem, are set to power the next generation of AI factories worldwide.
