Nvidia DGX Spark powers AI research from campus labs to Antarctica
Nvidia DGX Spark brings data centre-grade AI to universities, from Antarctica research to campus labs worldwide.
Leading universities around the world are deploying the Nvidia DGX Spark desktop supercomputer to bring data centre-grade artificial intelligence into local research environments. From campus laboratories to one of the most remote research stations on Earth, the compact system is being used to run advanced AI workloads without relying on cloud infrastructure.
The petaflop-class machine is designed to support large AI applications directly on site. Researchers are using it for tasks ranging from clinical report evaluation to robotics perception systems. By keeping data on premises, institutions can reduce latency, protect sensitive information and shorten development cycles for both researchers and students.
Each DGX Spark unit is powered by the Nvidia GB10 superchip and runs on the Nvidia DGX operating system. It supports AI models of up to 200 billion parameters and integrates with Nvidia’s NeMo, Metropolis, Holoscan and Isaac platforms. This enables students and faculty to access the same development tools used across Nvidia’s wider DGX ecosystem.
From the South Pole to medical labs
At the University of Wisconsin-Madison’s IceCube Neutrino Observatory in Antarctica, DGX Spark is being used to analyse data from experiments that study neutrinos, subatomic particles linked to some of the universe’s most extreme events. The observatory investigates phenomena such as supernovas and dark matter using particle-based detection methods that extend beyond traditional light-based astronomy.
Benedikt Riedel, computing director at the Wisconsin IceCube Particle Astrophysics Center, said, “There’s no hardware store in the South Pole, which is technically a desert, with relative humidity under 5% and an elevation of 10,000 feet, meaning very limited power. DGX Spark allows us to deploy AI in a compartmentalized and easy fashion, at low cost and in such an extremely remote environment, to run AI analyses locally on our neutrino observation data.” The system’s compact design and relatively low power requirements make it suitable for deployment in isolated environments with limited infrastructure.
In New York, researchers at NYU’s Global AI Frontier Lab are using DGX Spark to power the ICARE project, which stands for Interpretable and Clinically-Grounded Agent-Based Report Evaluation. The project uses collaborating AI agents and multiple-choice question generation to assess how closely AI-generated radiology reports match expert-authored references. By running the system locally, the team avoids transferring sensitive medical imaging data to the cloud.
Lucius Bynum, faculty fellow at the NYU Center for Data Science, said, “Being able to run powerful LLMs locally on the DGX Spark has completely changed my workflow. I have been able to focus my efforts on quickly iterating and improving the research tool I’m developing.” NYU researchers also use the system to run large language models for interactive causal modelling tools, which generate structured maps of cause-and-effect relationships between clinical variables and diagnoses. This enables rapid experimentation while maintaining data privacy.
Bridging desktops and large-scale clusters
At Harvard’s Kempner Institute for the Study of Natural and Artificial Intelligence, neuroscientists are using DGX Spark to examine how genetic mutations in the brain contribute to epilepsy. Led by co-director Bernardo Sabatini, the team is analysing around 6,000 mutations in excitatory and inhibitory neurons. They are building predictive maps of protein structure and neuronal function to determine which variants should be tested in laboratory experiments.
DGX Spark serves as an intermediate platform between laboratory work and institutional GPU clusters. Researchers validate workflows and timing on a single unit before scaling successful pipelines to larger clusters for extensive protein screening. This approach reduces wait times and streamlines experimentation.
Arizona State University was among the first to deploy multiple DGX Spark systems. The machines now support research initiatives across memory care, transportation safety and sustainable energy. One group led by Yezhou “YZ” Yang, associate professor in the School of Computing and Augmented Intelligence, is applying the platform to advanced perception and robotics research, including AI-enabled robotic dogs for search and rescue, as well as assistive tools for visually impaired users.
At Mississippi State University, DGX Spark is being used as a hands-on teaching platform for computer science and engineering students. Faculty are incorporating the system into applied AI research and workforce development efforts, encouraging practical experimentation within laboratory settings.
The University of Delaware has also adopted the platform through the ASUS Ascent GX10 system, which is powered by DGX Spark. Sunita Chandrasekaran, professor of computer and information sciences and director of the First State AI Institute, described the system as “transformative for research,” noting that it allows teams in areas such as sports analytics and coastal science to run large AI models on campus rather than depend on cloud services. Through the ASUS Virtual Lab programme, institutions can test GX10 performance remotely before deployment.
In Europe, the Institute of Science and Technology Austria is using an HP ZGX Nano AI Station, based on DGX Spark, to train and fine-tune large language models on a desktop system. Its open source LLMQ software supports models of up to 7 billion parameters. With 128GB of unified memory, the system can keep both the model and its training data on the same device, reducing the need for complex memory management and enabling faster experimentation.
At Stanford University, researchers are prototyping complete training and evaluation pipelines for biological agent workflows using DGX Spark before scaling them to larger GPU clusters. The team reported performance comparable to major cloud GPU instances, achieving about 80 tokens per second on a 120 billion-parameter gpt-oss model at MXFP4 via Ollama, while keeping the workload entirely on a desktop. The university will also feature DGX Spark units from ASUS at Treehacks, a global student hackathon taking place from 13 to 15 February.
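The ~80 tokens-per-second figure reported above is the kind of number a local benchmark against Ollama's HTTP API would produce. As a minimal sketch, assuming a machine running Ollama on its default endpoint (`localhost:11434`) and a hypothetical model tag `gpt-oss:120b` (both assumptions, not details from the article), the measurement might look like this:

```python
import json
import urllib.request

# Assumed default Ollama endpoint; not specified in the article.
OLLAMA_URL = "http://localhost:11434/api/generate"

def tokens_per_second(token_count: int, elapsed_s: float) -> float:
    """Throughput in tokens/second; guards against a zero elapsed time."""
    return token_count / elapsed_s if elapsed_s > 0 else 0.0

def benchmark(prompt: str, model: str = "gpt-oss:120b") -> float:
    """Send one non-streaming generation request and report tokens/second.

    Uses the eval_count and eval_duration fields Ollama returns with each
    completed response (eval_duration is reported in nanoseconds).
    """
    payload = json.dumps({"model": model, "prompt": prompt,
                          "stream": False}).encode()
    req = urllib.request.Request(OLLAMA_URL, data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return tokens_per_second(body["eval_count"], body["eval_duration"] / 1e9)

if __name__ == "__main__":
    rate = benchmark("Summarise this radiology report in one sentence.")
    print(f"{rate:.1f} tokens/s")
```

Because the model runs entirely on local hardware, the same script works with no network access beyond the loopback interface, which is the privacy property the NYU and Stanford teams cite.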