El Capitan Supercomputer — Lawrence Livermore National Laboratory
Livermore, California, United States
El Capitan, deployed at Lawrence Livermore National Laboratory (LLNL) in Livermore, California, is the world’s fastest supercomputer as of late 2024, achieving 1.742 exaflops of measured performance on the HPL (High-Performance Linpack) benchmark. It displaced Frontier from the top of the TOP500 list and represents the culmination of the US Department of Energy’s exascale computing initiative.
Hardware: El Capitan is built on the AMD Instinct MI300A APU (Accelerated Processing Unit), an architecture that integrates CPU cores and GPU compute dies with HBM3 memory on a single package. This unified memory architecture is particularly well-suited to AI workloads that would otherwise require frequent data movement between separate CPU and GPU memories. The system has 11,328 compute nodes, each with 4 MI300A APUs, for 45,312 APUs in total.
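Starting from the node and per-APU figures above, the system-level totals work out as follows. The per-APU CPU core count is an assumption taken from AMD's public MI300A specifications, not from this entry:

```python
# System-level totals for El Capitan, derived from the figures above.
# CORES_PER_APU (24 Zen 4 cores per MI300A) is an assumption from AMD's
# published MI300A specifications, not stated in this entry.
NODES = 11_328
APUS_PER_NODE = 4
CORES_PER_APU = 24

total_apus = NODES * APUS_PER_NODE            # 45,312 APUs
total_cpu_cores = total_apus * CORES_PER_APU  # CPU cores only; GPU compute
                                              # units are counted separately
print(f"APUs: {total_apus:,}")                # -> APUs: 45,312
print(f"CPU cores: {total_cpu_cores:,}")      # -> CPU cores: 1,087,488
```

Note that TOP500 core counts for APU systems also fold in GPU compute resources, so the listed total is much higher than the CPU-only figure here.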
Performance: At 1.742 exaflops HPL (double precision), El Capitan is roughly 1.5x faster than Frontier. For AI (mixed-precision) workloads, the MI300A’s architecture allows even higher throughput, potentially exceeding 10 exaflops in FP8, a low-precision format commonly used for transformer inference.
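The "roughly 1.5x" claim can be checked directly. Frontier's 1.206 exaflops figure below is its original TOP500 HPL result, an assumption brought in from the TOP500 list rather than stated in this entry:

```python
# HPL throughput ratio of El Capitan vs. Frontier.
# Frontier's 1.206 EFLOP/s is its original TOP500 HPL result (assumed;
# not stated in this entry). Frontier later posted higher re-runs.
el_capitan_eflops = 1.742
frontier_eflops = 1.206

speedup = el_capitan_eflops / frontier_eflops
print(f"El Capitan / Frontier: {speedup:.2f}x")  # -> 1.44x
```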
Power: El Capitan draws approximately 40 megawatts at full load, achieving a performance-per-watt ratio significantly better than first-generation exascale systems. The facility uses direct liquid cooling throughout the compute nodes.
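Taking the figures quoted above at face value (1.742 exaflops HPL against roughly 40 MW), the efficiency works out as below. Note that Green500-style efficiency numbers use power measured during the HPL run itself, which can be lower than full-load facility draw, so this is a conservative sketch:

```python
# Back-of-envelope performance-per-watt from the figures in the text.
hpl_flops = 1.742e18   # FLOP/s (HPL, double precision)
power_watts = 40e6     # ~40 MW at full load, per the text above

gflops_per_watt = hpl_flops / power_watts / 1e9
print(f"{gflops_per_watt:.2f} GFLOPS/W")
```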
Mission: Unlike Frontier (open science) and Aurora (DOE Office of Science), El Capitan runs primarily classified workloads: its core mission is nuclear stockpile stewardship. LLNL simulates nuclear weapon performance to certify that the US arsenal remains safe and reliable without live testing. El Capitan’s architecture also makes it directly applicable to large-scale AI model training, and LLNL researchers use it for unclassified AI work, including climate modeling and materials discovery.
Cost: El Capitan cost approximately $600 million under the DOE’s CORAL-2 (Collaboration of Oak Ridge, Argonne, and Livermore) procurement program, which also covered Frontier and Aurora.
AMD’s dominance: El Capitan, like Frontier, runs on AMD silicon. The MI300A is AMD’s first data-center APU, combining CPU and GPU compute dies in a single package; AMD has shipped consumer APUs since 2011, but the MI300A brought the approach to HPC scale. El Capitan’s deployment validates AMD’s APU architecture at the highest level of scientific computing.
LLNL operator structure: LLNL is a national security laboratory managed by Lawrence Livermore National Security LLC, a consortium including Bechtel, BWXT, and the University of California, for the National Nuclear Security Administration (NNSA), a semi-autonomous agency within the DOE.
Reference Metadata
- Country: United States
- Region: California
- City: Livermore
- Status: Operational
Browse This Facility Through The Index
Entry pages are the atomic records in the index. Use the JSON endpoint for programmatic access, then move through the taxonomy links to the broader operator, country, energy, and timeline views that contextualize this facility.
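As a sketch of the programmatic path described above, an entry record could be consumed as follows. The field names and payload here are hypothetical assumptions mirroring this page's Reference Metadata; the actual JSON endpoint's URL and schema are not specified in this entry:

```python
import json

# Hypothetical JSON payload mirroring this entry's Reference Metadata
# fields; the real endpoint's schema is not documented here.
payload = """
{
  "name": "El Capitan Supercomputer",
  "operator": "Lawrence Livermore National Laboratory",
  "country": "United States",
  "region": "California",
  "city": "Livermore",
  "status": "Operational"
}
"""

record = json.loads(payload)
print(f"{record['name']} ({record['city']}, {record['country']}): {record['status']}")
```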
Related Facilities
Nearby slices in the index that share the same operator, geography, build phase, or AI profile. Use these to move laterally through the buildout instead of jumping back to the full directory.
Frontier Supercomputer — Oak Ridge National Laboratory
Oak Ridge, United States
Aurora Supercomputer — Argonne National Laboratory
Lemont, United States
KAUST — Shaheen III Supercomputer
Thuwal, Saudi Arabia
Green Mountain AI Data Center (Rjukan)
Rjukan, Norway