At Computex 2025 in Taipei, ASUS announced its new AI POD infrastructure, built on validated reference architectures under the NVIDIA Enterprise AI Factory design. The newly unveiled solutions are built to support the adoption of agentic AI systems and high-performance computing (HPC), offering flexibility for both air-cooled and liquid-cooled data centre deployments.
Available as NVIDIA-Certified Systems across Grace Blackwell, HGX, and MGX platforms, these innovations are engineered to help enterprises scale AI capabilities with enhanced performance, efficiency, and manageability.
High-density architecture for accelerated AI performance
ASUS’s AI POD design incorporates NVIDIA’s latest hardware to support scalable deployments of large AI models. The solution features NVIDIA GB200 and GB300 NVL72 racks, which support both liquid-cooled and air-cooled options.
The liquid-cooled setup enables a 576-GPU non-blocking cluster spread across eight racks, while the air-cooled configuration supports a 72-GPU rack. Each setup integrates NVIDIA Quantum InfiniBand or Spectrum-X Ethernet networking to deliver high throughput and low latency, setting a new standard for efficient AI infrastructure.
These reference architectures are tailored to help enterprise IT teams manage AI and HPC workloads by providing a consistent framework and accelerating deployment timelines with lower operational risk.
Scalable rack systems for complex workloads
ASUS has also introduced MGX-compliant rack designs featuring its ESC8000 series systems. These racks include dual Intel Xeon 6 processors and the NVIDIA RTX PRO 6000 Blackwell Server Edition GPU, paired with NVIDIA’s latest ConnectX-8 SuperNIC, capable of speeds up to 800Gb/s. This configuration provides the flexibility needed for immersive workloads, large language model (LLM) processing, and complex 3D tasks.
For even more advanced needs, ASUS offers HGX-based architectures. The ASUS XA NB3I-E12 and ESC NB8-E11 systems are built around the NVIDIA HGX B300 and HGX B200, respectively. These provide high GPU density, robust thermal management, and support for both liquid and air cooling, making them suitable for AI fine-tuning, inference, and training workloads. Streamlined manufacturing and rack integration reduce total cost of ownership and simplify large-scale deployment.
Full-stack integration for AI Factory applications
ASUS’s infrastructure supports the growing trend of agentic AI, enabling the deployment of AI agents capable of autonomous decision-making. The systems integrate tightly with the NVIDIA AI Enterprise software platform and NVIDIA Omniverse, supporting real-time simulation and collaboration environments.
The end-to-end ecosystem includes high-speed networking and storage, such as the ASUS RS501A-E12-RS12U and VS320D series, certified by NVIDIA. These ensure seamless scalability for AI and HPC applications. Resource utilisation is further optimised with SLURM-based workload scheduling and NVIDIA UFM for fabric management in Quantum InfiniBand environments. Storage capabilities are enhanced through the WEKA Parallel File System and ASUS ProGuard SAN Storage, which provide the throughput and scalability needed for enterprise data.
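As an illustration of SLURM-based scheduling on a cluster like this, a batch script might reserve GPUs and launch one process per device as follows. This is a minimal sketch: the partition name, node and GPU counts, and the `train.py` entry point are all hypothetical and site-specific, not part of the ASUS announcement.

```shell
#!/bin/bash
#SBATCH --job-name=llm-finetune     # hypothetical job name
#SBATCH --partition=gpu             # partition name varies per site
#SBATCH --nodes=2                   # number of compute nodes (illustrative)
#SBATCH --gres=gpu:8                # request 8 GPUs on each node
#SBATCH --ntasks-per-node=8         # one task per GPU
#SBATCH --time=04:00:00             # wall-clock limit

# srun launches one training process per allocated task,
# letting SLURM handle placement across the GPU nodes
srun python train.py --batch-size 64
```

Submitting the script with `sbatch` lets the scheduler pack jobs onto the fabric-attached GPU nodes, which is where the resource-utilisation gains described above come from.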
To support enterprise customers through the entire deployment process, ASUS provides tools such as the ASUS Control Center (Data Center Edition) and the ASUS Infrastructure Deployment Center (AIDC). These tools simplify the development, orchestration, and scaling of AI models. L11- and L12-validated systems (rack- and cluster-level integration testing) further offer reliability and assurance for enterprise-level deployments.
ASUS continues to position itself as a leading partner in enterprise AI infrastructure, offering solutions that combine flexibility, performance, and ease of integration — helping organisations accelerate their journey into the next generation of AI.