AI data center dossier
Country
United States
Operator
Lawrence Livermore National Laboratory, Hewlett Packard Enterprise, AMD
Energy
Mixed
Known capacity
40 MW
Evidence profile
Readiness reflects whether the record has citations, narrative context, structured power data, coordinates, and at least one dated milestone.
Readiness
71%
No external citations yet
37.6888, -121.7063
Announcement and delivery timing still absent
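The readiness checklist above can be sketched as a simple completeness score. The actual rubric and weights are not published, so the five equally weighted checks below are assumptions; the published 71% implies a different weighting or additional checks.

```python
# Hypothetical readiness score: the real rubric is not published,
# so these five checks and the equal weighting are assumptions.
CHECKS = ["citations", "narrative", "power_data", "coordinates", "milestone"]

def readiness(record: dict) -> int:
    """Percentage of evidence checks the record satisfies."""
    passed = sum(1 for check in CHECKS if record.get(check))
    return round(100 * passed / len(CHECKS))

# Per the profile above: narrative, power data, and coordinates are
# present; citations and a dated milestone are absent.
el_capitan = {
    "citations": False,
    "narrative": True,
    "power_data": True,
    "coordinates": True,
    "milestone": False,
}
print(readiness(el_capitan))  # 60 under this assumed equal weighting
```

Equal weighting yields 60%, not the published 71%, which suggests the live score weighs the categories unevenly or tracks finer-grained evidence.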
HTML, JSON, and GeoJSON all available
Machine-readable outputs
Canonical record surfaces for audit, programmatic use, and direct citation.
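The coordinates above map directly onto the GeoJSON output mentioned here. A minimal sketch of what that feature might look like, assuming hypothetical property names (the real schema is not shown); note that GeoJSON orders coordinates longitude first.

```python
import json

# Hypothetical GeoJSON feature for this record; property names are
# assumptions, only the coordinates and capacity come from the dossier.
feature = {
    "type": "Feature",
    # GeoJSON uses [longitude, latitude], reversing the display order above.
    "geometry": {"type": "Point", "coordinates": [-121.7063, 37.6888]},
    "properties": {
        "name": "El Capitan",
        "operator": "Lawrence Livermore National Laboratory",
        "known_capacity_mw": 40,
    },
}
print(json.dumps(feature, indent=2))
```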
El Capitan, deployed at Lawrence Livermore National Laboratory (LLNL) in Livermore, California, is the world's fastest supercomputer as of late 2024, achieving 1.742 exaflops of peak HPL (High Performance Linpack) performance. It dethroned Frontier from the top of the TOP500 list and represents the apex of the US Department of Energy's exascale computing initiative.
**Hardware**: El Capitan is built on the AMD Instinct MI300A APU (Accelerated Processing Unit), an architecture that integrates CPU cores and GPU compute dies with HBM3 memory on a single package. This unified memory architecture is particularly well suited to AI workloads that require frequent data movement between CPU and GPU. The system has 11,328 compute nodes, each with 4 MI300A APUs, totaling 45,312 APUs.
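The aggregate figures above can be cross-checked from the per-node counts. The CPU-core tally below assumes 24 Zen 4 cores per MI300A, a spec from AMD's product materials rather than this dossier.

```python
# Cross-check the aggregate APU count from the per-node figures above.
nodes = 11_328
apus_per_node = 4
total_apus = nodes * apus_per_node  # 45,312, matching the stated total

# Assumption: 24 Zen 4 CPU cores per MI300A (AMD spec, not from the dossier).
cpu_cores_per_apu = 24
total_cpu_cores = total_apus * cpu_cores_per_apu
print(total_apus, total_cpu_cores)
```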
**Performance**: At 1.742 exaflops HPL (double precision), El Capitan is roughly 1.3x faster than Frontier (1.353 exaflops on the same November 2024 TOP500 list). In AI (mixed precision) workloads, the MI300A's architecture allows even higher throughput, potentially exceeding 10 exaflops in FP8 precision, the format most commonly used for transformer inference.
**Power**: El Capitan draws approximately 40 megawatts at full load, achieving a performance-per-watt ratio significantly better than first-generation exascale systems. The facility uses direct liquid cooling throughout the compute nodes.
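The efficiency claim can be made concrete from the two figures in this dossier (1.742 exaflops HPL, ~40 MW at full load). Green500 submissions use benchmarked power during the HPL run, so the official efficiency number may differ from this back-of-envelope ratio.

```python
# Performance per watt from the dossier's own figures; measured
# Green500 power draw may differ from the 40 MW facility estimate.
hpl_flops = 1.742e18  # double-precision FLOP/s (1.742 exaflops)
power_w = 40e6        # ~40 MW at full load
gflops_per_watt = hpl_flops / power_w / 1e9
print(f"{gflops_per_watt:.2f} GFLOPS/W")
```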
**Mission**: Unlike Frontier (open science) and Aurora (DOE Office of Science focus), El Capitan is primarily a classified system: its core mission is nuclear stockpile stewardship. LLNL simulates nuclear weapon performance to ensure the US arsenal remains safe and reliable without live testing. El Capitan's architecture is also directly applicable to large-scale AI model training, and LLNL researchers use it for unclassified AI work including climate modeling and materials discovery.
**Cost**: El Capitan cost approximately $600 million under the DOE's CORAL-2 (Collaboration of Oak Ridge, Argonne, and Livermore) procurement program, which also covered Frontier and Aurora.
**AMD's dominance**: El Capitan, like Frontier, runs on AMD architecture. The MI300A is AMD's first data center APU, combining CPU and GPU compute dies on a single package; AMD has shipped consumer APUs since 2011, but never at anything approaching this scale. El Capitan's deployment validates AMD's APU architecture at the highest level of scientific computing.
**LLNL operator structure**: LLNL is a national security laboratory managed by Lawrence Livermore National Security LLC — a consortium including Bechtel, BWXT, and UC — for the National Nuclear Security Administration (NNSA), a semi-autonomous agency within the DOE.
No dated milestones are published for this facility yet.
Other tracked AI data centers within 300 km of this location.
Structured analysis covering this facility's operator and market context.