Harvard's Kempner Institute Expands AI Supercomputer with 500+ GPUs, Boosting Academic AI Research Power
March 12, 2026
The upgrade enables large-scale and small-scale AI projects to run concurrently without interrupting ongoing work, boosting researchers’ ability to train and test large AI models.
The new hardware mix brings 424 H200 GPUs and 192 RTX PRO 6000 Blackwell GPUs, adding to an existing cluster that already includes 144 A100 and 384 H100 units.
Harvard's Kempner Institute is expanding its AI supercomputer with more than 500 additional NVIDIA GPUs.
The upgraded cluster delivers 1.79 exaFLOPS of performance and uses a heterogeneous GPU network linked by an optimized InfiniBand fabric to support workloads ranging from large language and multimodal models to simulations in physics and neuroscience.
RTX PRO 6000 Blackwell GPUs enable advanced optical and physics-based simulations, ray tracing, and efficient training with low-precision formats that aid model quantization and reduce memory use.
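The memory savings from low-precision formats mentioned above can be illustrated with a minimal sketch (not specific to the Kempner cluster, which would use hardware-accelerated formats such as FP8): casting model weights from 32-bit to 16-bit floats halves their memory footprint while introducing only small rounding error for values within the narrower format's range.

```python
import numpy as np

# Hypothetical weight matrix standing in for a model layer.
weights_fp32 = np.random.default_rng(0).standard_normal((1024, 1024)).astype(np.float32)

# Quantize to half precision: same number of values, half the bytes.
weights_fp16 = weights_fp32.astype(np.float16)

print(weights_fp32.nbytes)  # 4194304 bytes (4 MiB)
print(weights_fp16.nbytes)  # 2097152 bytes (2 MiB)

# For standard-normal values, the round-trip error is small.
max_err = np.max(np.abs(weights_fp32 - weights_fp16.astype(np.float32)))
print(max_err < 1e-2)  # True
```

Hardware formats like FP8 push the same trade-off further, which is why low-precision support in Blackwell-class GPUs translates directly into larger trainable models per unit of memory.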
The Kempner Institute frames the upgrade as a landmark step toward redefining academic AI capabilities by providing unprecedented compute power and research flexibility.
When the upgrade is completed in spring 2026, the expanded cluster will total 1,144 GPUs, placing it among the world's exaFLOP-scale AI systems.
Researchers expect advances in world models, long-horizon reasoning AI agents, and industry-scale performance for foundation models within an academic setting.
Executive Director Elise Porter notes the expanded capacity will support simultaneous large-scale and smaller projects, reducing the need to halt other research for single initiatives.
Source

AIwire • Mar 12, 2026
Kempner Institute at Harvard Announces Major Expansion of AI Supercomputing Cluster