AI data center dossier
Country
United States
Operator
NVIDIA
Energy
Mixed
Known capacity
35 MW
Evidence profile
Readiness reflects whether the record has citations, narrative context, structured power data, coordinates, and at least one dated milestone.
Readiness
71%
No external citations yet
Coordinates
37.3688, -121.9886
Announcement and delivery timing still absent
HTML, JSON, and GeoJSON all available
Machine-readable outputs
Canonical record surfaces for audit, programmatic use, and direct citation.
Eos is NVIDIA's internal AI supercomputer, located at the company's Santa Clara, California headquarters campus. When unveiled in 2023, it was the world's fastest AI supercomputer for commercial/industrial use, delivering 18.4 exaflops of AI performance (FP8). It serves as NVIDIA's primary research and development cluster for training its own AI models and testing new GPU generations before they reach customers.
**Hardware**: Eos is built from 576 NVIDIA DGX H100 systems, each containing 8 H100 SXM5 GPUs, for 4,608 H100 GPUs in total. Intra-node communication runs over NVLink and NVSwitch; the inter-node fabric is NVIDIA Quantum-2 InfiniBand (NDR, 400 Gb/s). The storage subsystem runs on NVIDIA's Magnum IO software stack with high-bandwidth Lustre/VAST data platforms.
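The hardware figures above can be sanity-checked with simple arithmetic. This is a back-of-the-envelope sketch, not a published breakdown: the ~4 PFLOPS FP8-per-GPU figure is an assumed peak (with sparsity) and real throughput varies with clocks and workload.

```python
# Back-of-the-envelope check of Eos's headline numbers.
# Assumption (not from this dossier): ~4 PFLOPS peak FP8 per H100
# SXM5 with sparsity.

SYSTEMS = 576              # DGX H100 systems
GPUS_PER_SYSTEM = 8        # H100 SXM5 GPUs per DGX
FP8_PFLOPS_PER_GPU = 4.0   # assumed peak, with sparsity

total_gpus = SYSTEMS * GPUS_PER_SYSTEM
total_exaflops = total_gpus * FP8_PFLOPS_PER_GPU / 1000

print(f"{total_gpus} GPUs, ~{total_exaflops:.1f} EFLOPS FP8 peak")
# 4608 GPUs, ~18.4 EFLOPS, consistent with the headline figure above
```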
**AI training focus**: Eos is NVIDIA's proving ground. Every major NVIDIA AI product (CUDA libraries, cuDNN, TensorRT, Megatron-LM, NeMo) is tested and optimized at scale here before release. NVIDIA's Megatron-LM-based language models, internal diffusion models, and large-scale recommendation-system experiments run on Eos. The cluster has been used to train models with hundreds of billions of parameters, providing the benchmarks behind NVIDIA's published H100 performance claims.
**Why it matters**: Eos is unique in that NVIDIA both builds the hardware and uses it at the cutting edge. The company's engineering teams dogfood every software stack on Eos before customer delivery, which gives NVIDIA's software teams insight into the real-world bottlenecks customers will encounter at hyperscale, an advantage competitors without proprietary clusters cannot replicate.
**Power**: At approximately 35 megawatts, Eos draws significant power from the Santa Clara city grid. NVIDIA has committed to 100% renewable electricity for its US operations, though the actual matching is done through renewable energy certificates (RECs) and power purchase agreements rather than on-site generation.
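The 35 MW figure can be put in per-GPU and annual-energy terms. The per-GPU split below is an illustration, not a published breakdown: it divides total facility power evenly across GPUs, folding in CPUs, networking, storage, and cooling overhead.

```python
# Rough power accounting for the 35 MW facility figure.
FACILITY_MW = 35
TOTAL_GPUS = 4608        # 576 DGX H100 systems x 8 GPUs
HOURS_PER_YEAR = 8760

# All-in kW per GPU (includes non-GPU loads; illustrative only)
kw_per_gpu = FACILITY_MW * 1000 / TOTAL_GPUS

# Annual energy if the facility ran at 35 MW continuously
annual_gwh = FACILITY_MW * HOURS_PER_YEAR / 1000

print(f"~{kw_per_gpu:.1f} kW per GPU all-in, ~{annual_gwh:.1f} GWh/year")
# ~7.6 kW per GPU all-in, ~306.6 GWh/year
```

The ~306.6 GWh/year figure is the scale of demand that the REC and PPA matching described above would need to cover.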
**Eos vs. DGX Cloud**: Eos is NVIDIA's internal research cluster, distinct from DGX Cloud (NVIDIA's cloud service selling H100/H200 GPU access through Oracle Cloud, Microsoft Azure, and Google Cloud). DGX Cloud customers rent similar H100-class capacity, but Eos is reserved for NVIDIA's own teams, who get hands-on time with new hardware and software ahead of general availability.
**Successor**: NVIDIA's next internal cluster, expected to deploy B200/GB200 (Blackwell) GPUs at scale, should substantially exceed Eos's performance. NVIDIA has described building a "gigawatt AI factory", likely referencing a future cluster or campus expansion well beyond Eos's current 35 MW.
No dated milestones are published for this facility yet.
Other tracked AI data centers within 300 km of this location.
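A radius filter like the one above can be sketched as a great-circle distance check from the coordinates listed in this record. The candidate point below is a made-up example near Sacramento, not an assertion about any tracked facility.

```python
# Hedged sketch of a "within 300 km" filter using the haversine formula.
from math import radians, sin, cos, asin, sqrt

EOS = (37.3688, -121.9886)   # lat, lon from this dossier
EARTH_RADIUS_KM = 6371.0     # mean Earth radius

def haversine_km(a, b):
    """Great-circle distance in km between two (lat, lon) points."""
    lat1, lon1 = map(radians, a)
    lat2, lon2 = map(radians, b)
    dlat, dlon = lat2 - lat1, lon2 - lon1
    h = sin(dlat / 2) ** 2 + cos(lat1) * cos(lat2) * sin(dlon / 2) ** 2
    return 2 * EARTH_RADIUS_KM * asin(sqrt(h))

def within_radius(site, center=EOS, radius_km=300.0):
    return haversine_km(site, center) <= radius_km

# Hypothetical candidate point near Sacramento, well inside 300 km
print(within_radius((38.58, -121.49)))  # True
```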
Structured analysis covering this facility's operator and market context.