ASUS unveils next-generation AI POD infrastructure with NVIDIA at Computex 2025

ASUS unveils new AI POD designs with NVIDIA at Computex 2025, boosting AI scalability, efficiency, and real-time agentic AI deployment.

At Computex 2025 in Taipei, ASUS announced the launch of its new AI POD infrastructure, designed with validated reference architectures under the NVIDIA Enterprise AI Factory. The newly unveiled solutions are built to support the adoption of agentic AI systems and high-performance computing (HPC), offering flexibility for both air-cooled and liquid-cooled data centre deployments.

Available as NVIDIA-Certified Systems across the NVIDIA Grace Blackwell, HGX, and MGX platforms, the new systems are designed to help enterprises scale their AI capabilities with improved performance, efficiency, and manageability.

High-density architecture for accelerated AI performance

ASUS’s AI POD design incorporates NVIDIA’s latest hardware to support scalable deployments of large AI models. The solution features NVIDIA GB200 and GB300 NVL72 racks, which support both liquid-cooled and air-cooled options.

The liquid-cooled setup enables a 576-GPU non-blocking cluster spread across eight racks, while the air-cooled configuration supports a 72-GPU rack. Each setup integrates NVIDIA Quantum InfiniBand or Spectrum-X Ethernet networking to deliver high throughput and low latency, setting a new standard for efficient AI infrastructure.
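To make the scale figures concrete, the short sketch below works through the arithmetic implied by the announcement: each NVL72 rack carries 72 GPUs, so the eight-rack liquid-cooled configuration reaches 576 GPUs, while the air-cooled option is a single 72-GPU rack. It is only an illustration of the published figures, not ASUS or NVIDIA sizing guidance.

# Back-of-the-envelope check of the AI POD figures quoted above.
# The 72-GPU-per-rack figure comes from the announcement; the rest is arithmetic.

GPUS_PER_NVL72_RACK = 72      # NVIDIA GB200/GB300 NVL72: 72 GPUs per rack
LIQUID_COOLED_RACKS = 8       # eight-rack, liquid-cooled pod
AIR_COOLED_RACKS = 1          # single-rack, air-cooled option

def pod_gpu_count(racks: int, gpus_per_rack: int = GPUS_PER_NVL72_RACK) -> int:
    """Total GPU count for a pod built from identical NVL72 racks."""
    return racks * gpus_per_rack

print(pod_gpu_count(LIQUID_COOLED_RACKS))  # 576 -> the 576-GPU non-blocking cluster
print(pod_gpu_count(AIR_COOLED_RACKS))     # 72  -> the air-cooled 72-GPU rack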

These reference architectures are tailored to help enterprise IT teams manage AI and HPC workloads by providing a consistent framework and accelerating deployment timelines with lower operational risk.

Scalable rack systems for complex workloads

ASUS has also introduced MGX-compliant rack designs featuring its ESC8000 series systems. These racks include dual Intel Xeon 6 processors and the NVIDIA RTX PRO 6000 Blackwell Server Edition GPU, paired with NVIDIA’s latest ConnectX-8 SuperNIC, capable of speeds up to 800Gb/s. This configuration provides the flexibility needed for immersive workloads, large language model (LLM) processing, and complex 3D tasks.

For more demanding workloads, ASUS offers HGX-based architectures. The ASUS XA NB3I-E12 and ESC NB8-E11 systems are built around NVIDIA HGX B300 and HGX B200, respectively, providing high GPU density, robust thermal management, and support for both liquid and air cooling, which makes them suitable for AI training, fine-tuning, and inference workloads. Streamlined manufacturing and rack integration reduce total cost of ownership and simplify large-scale deployment.

Full-stack integration for AI Factory applications

ASUS’s infrastructure supports the growing trend of agentic AI, enabling the deployment of AI agents capable of autonomous decision-making. The systems integrate tightly with the NVIDIA AI Enterprise software platform and NVIDIA Omniverse, supporting real-time simulation and collaboration environments.

The end-to-end ecosystem includes high-speed networking and storage, such as the ASUS RS501A-E12-RS12U and VS320D series, certified by NVIDIA. These ensure seamless scalability for AI and HPC applications. Resource utilisation is further optimised with SLURM-based workload scheduling and NVIDIA UFM for fabric management in Quantum InfiniBand environments. Storage capabilities are enhanced through the WEKA Parallel File System and ASUS ProGuard SAN Storage, which provide the throughput and scalability needed for enterprise data.
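For readers unfamiliar with SLURM-based scheduling, the minimal sketch below shows what submitting a multi-node GPU job to such a cluster might look like. It assumes a generic SLURM installation; the partition name, node and GPU counts, and the train.py entry point are hypothetical placeholders, not details from the ASUS or NVIDIA materials.

# Minimal sketch of submitting a multi-node GPU job through SLURM.
# Hypothetical values throughout: the partition, node/GPU counts and train.py
# are placeholders, not details from the ASUS AI POD announcement.
import subprocess
import tempfile

BATCH_SCRIPT = """#!/bin/bash
#SBATCH --job-name=llm-finetune
#SBATCH --partition=gpu          # hypothetical partition name
#SBATCH --nodes=2                # two GPU nodes
#SBATCH --gres=gpu:8             # request 8 GPUs on each node
#SBATCH --ntasks-per-node=8      # one task per GPU
#SBATCH --time=04:00:00

srun python train.py             # placeholder training entry point
"""

with tempfile.NamedTemporaryFile("w", suffix=".sbatch", delete=False) as f:
    f.write(BATCH_SCRIPT)
    script_path = f.name

# sbatch queues the job; SLURM allocates the nodes and GPUs once they free up.
subprocess.run(["sbatch", script_path], check=True)

In an AI POD deployment, this scheduling layer handles the jobs themselves, while NVIDIA UFM manages the underlying Quantum InfiniBand fabric the jobs run over.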

To support enterprise customers through the entire deployment process, ASUS provides tools such as the ASUS Control Center (Data Center Edition) and the ASUS Infrastructure Deployment Center (AIDC), which simplify the development, orchestration, and scaling of AI models. L11- and L12-validated systems, tested at the rack and full data centre level, further offer reliability and assurance for enterprise-grade deployments.

ASUS continues to position itself as a leading partner in enterprise AI infrastructure, offering solutions that combine flexibility, performance, and ease of integration — helping organisations accelerate their journey into the next generation of AI.
