The Stargate Project

Architectural Framework, Strategic Objectives, and Deployment Roadmap for Next-Generation AGI Infrastructure

Subject: Hyperscale AI Infrastructure and AGI Compute Optimization

1 Executive Summary

The Stargate Project is a multilateral initiative spearheaded by OpenAI and SoftBank, representing a $500 billion capital expenditure (CapEx) program dedicated to the engineering and deployment of the world's most advanced artificial intelligence infrastructure.

This whitepaper delineates the project's overarching goals, the high-performance computing (HPC) architecture required to achieve Artificial General Intelligence (AGI), and the phased timeline for achieving a 10-gigawatt (GW) compute footprint.
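To give the 10 GW figure some intuition, a back-of-envelope sketch of how a power budget translates into accelerator count. Every parameter here is an illustrative assumption, not a project specification: the per-accelerator draw and the facility PUE are placeholders.

```python
def fleet_size(total_gw: float, accel_kw: float = 1.2, pue: float = 1.2) -> int:
    """Rough accelerator count supportable by a facility power budget.

    total_gw -- total facility power (GW)
    accel_kw -- assumed per-accelerator draw incl. host share (kW); illustrative
    pue      -- power usage effectiveness (cooling/distribution overhead); illustrative
    """
    it_kw = total_gw * 1e6 / pue      # power remaining for IT load, in kW
    return int(it_kw / accel_kw)

# Under these assumptions, a 10 GW footprint supports roughly
# seven million accelerators.
print(fleet_size(10))
```

Halving the assumed PUE overhead or per-device draw shifts the estimate proportionally, which is why the energy-first design choices in Section 3.3 bear directly on achievable model scale.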

2 Strategic Objectives

The Stargate Project is predicated on the "Scaling Laws" of transformer-based architectures, which posit that model loss decreases predictably, as a power law, with increases in compute budget, dataset size, and parameter count.
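These scaling laws are commonly written in a Chinchilla-style parametric form (after Hoffmann et al., 2022); the symbols below follow that convention and are illustrative rather than Stargate-specific:

```latex
L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}},
\qquad C \approx 6ND
```

where $N$ is the parameter count, $D$ the number of training tokens, $E$ the irreducible loss, and $C$ the training FLOP budget. With fitted exponents $\alpha \approx \beta$, the compute-optimal allocation is roughly $N \propto \sqrt{C}$ and $D \propto \sqrt{C}$: each doubling of compute should be split between a larger model and more data, which is why compute capacity, not algorithmic novelty alone, gates frontier-model progress.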

  • AGI Realization: Provisioning the exascale compute necessary for the training of "Frontier Models" (e.g., GPT-5, GPT-6) and the execution of complex reasoning (System 2) tasks.
  • Infrastructure Sovereignty: Establishing a domestic, secure, and resilient supply chain for AI compute within the United States.
  • Energy-Compute Synergy: Pioneering "Energy-First" data center designs that integrate small modular reactors (SMRs) and advanced geothermal energy directly into the compute fabric.

3 Technological Architecture

The Stargate infrastructure is characterized by a massive-scale distributed system, utilizing state-of-the-art interconnects and thermal management solutions.

3.1 Heterogeneous Compute Fabric

The core of the Stargate supercomputer combines a heterogeneous mix of accelerators and host processors:

NVIDIA Blackwell (GB200) NVL72

Leveraging the 2nd Gen Transformer Engine and 5th Gen NVLink for intra-rack communication.

Custom Silicon (Arm-based)

Utilizing SoftBank/Arm-designed Neoverse V-series cores for high-efficiency control plane operations and data preprocessing.

3.2 Interconnect and Networking

To mitigate the "Communication Wall" in distributed training, Stargate employs:

InfiniBand NDR/XDR

Providing sub-microsecond latency and multi-terabit throughput for All-Reduce and All-to-All collectives.
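The bandwidth pressure an All-Reduce places on the fabric can be seen in a minimal simulation of the standard ring algorithm. This is a pure-Python sketch of the technique, not the fabric's actual implementation; production systems run NCCL-style collectives over the InfiniBand links described above.

```python
def ring_all_reduce(workers):
    """Simulate ring all-reduce over n workers (equal-length lists).

    Reduce-scatter (n-1 steps) then all-gather (n-1 steps): each worker
    sends only 2*(L/n)*(n-1) elements in total, nearly independent of n.
    Returns per-worker buffers, each holding the full elementwise sum.
    """
    n = len(workers)
    L = len(workers[0])
    assert L % n == 0, "pad gradients so each worker gets an equal chunk"
    size = L // n
    buf = [list(w) for w in workers]      # work on copies; keep inputs intact

    # Phase 1: reduce-scatter. At step s, worker r forwards chunk
    # (r - s) mod n to its ring neighbour, which accumulates it.
    for s in range(n - 1):
        for r in range(n):
            c = (r - s) % n
            lo, hi = c * size, (c + 1) * size
            nxt = (r + 1) % n
            for i in range(lo, hi):
                buf[nxt][i] += buf[r][i]

    # Phase 2: all-gather. Worker r now owns the fully reduced chunk
    # (r + 1) mod n and circulates it around the ring.
    for s in range(n - 1):
        for r in range(n):
            c = (r + 1 - s) % n
            lo, hi = c * size, (c + 1) * size
            nxt = (r + 1) % n
            buf[nxt][lo:hi] = buf[r][lo:hi]
    return buf
```

Each worker transmits only a 2(n-1)/n fraction of its gradient volume regardless of cluster size, but the collective takes 2(n-1) serialized steps; that step count is why the sub-microsecond per-hop latency cited above matters at scale.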

Optical Circuit Switching (OCS)

Dynamically reconfiguring network topology to optimize for specific model parallelisms (Tensor, Pipeline, and Data Parallelism).
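How ranks map onto the three parallelism axes can be sketched with a Megatron-style rank-to-coordinate assignment. The innermost-axis convention below is an assumption for illustration: tensor-parallel peers are placed on adjacent ranks so that, in a typical deployment, they land on the fastest intra-rack links.

```python
def build_mesh(world_size: int, tp: int, pp: int) -> dict[int, tuple[int, int, int]]:
    """Map each global rank to (data, pipeline, tensor) coordinates.

    Tensor parallelism varies fastest, so TP peers are adjacent ranks
    (ideally sharing NVLink); data parallelism varies slowest, so DP
    peers may span racks and lean on the reconfigurable optical fabric.
    """
    assert world_size % (tp * pp) == 0, "world size must factor into tp * pp * dp"
    mesh = {}
    for rank in range(world_size):
        t = rank % tp              # tensor-parallel coordinate
        p = (rank // tp) % pp      # pipeline-stage coordinate
        d = rank // (tp * pp)      # data-parallel replica
        mesh[rank] = (d, p, t)
    return mesh
```

With world_size=8, tp=2, pp=2, ranks 0 and 1 form a tensor-parallel pair while ranks 0 and 4 are data-parallel replicas; an OCS can then allocate circuits along whichever axis a given job stresses most.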

3.3 Thermal and Power Management

Given the projected power densities in excess of 100 kW per rack, Stargate implements:

Direct-to-Chip Liquid Cooling (DLC)

Utilizing high-flow coolant loops to maintain optimal junction temperatures (Tj) across high-TDP accelerators.
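The flow rates DLC implies follow from the basic heat balance Q = m_dot * c_p * dT. This is a sketch under illustrative assumptions: water-like coolant properties and a 10 °C loop temperature rise; real loops use engineered fluids and tuned setpoints.

```python
def coolant_flow_lpm(rack_kw: float, delta_t_c: float = 10.0,
                     cp: float = 4186.0, rho: float = 1000.0) -> float:
    """Volumetric coolant flow (L/min) needed to carry rack_kw of heat.

    Q = m_dot * c_p * dT  =>  m_dot = Q / (c_p * dT)
    cp (J/(kg*K)) and rho (kg/m^3) default to water; both are assumptions.
    """
    m_dot = rack_kw * 1e3 / (cp * delta_t_c)   # mass flow, kg/s
    return m_dot / rho * 1000.0 * 60.0         # kg/s -> L/s -> L/min

# Under these assumptions, a 120 kW rack at a 10 C rise needs
# on the order of 170 L/min of coolant.
```

Raising the allowable temperature rise cuts flow proportionally, which is the lever designers trade against the junction-temperature ceiling noted above.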

Gigawatt-Scale Power Distribution

Implementing high-voltage DC (HVDC) distribution to minimize conversion losses from the grid to the chip.
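The motivation for HVDC is that conversion losses multiply through the chain, so removing a stage compounds. A sketch with per-stage efficiencies that are placeholders for illustration, not measured Stargate values:

```python
from math import prod

def delivered_mw(grid_mw: float, stage_effs: list[float]) -> float:
    """Power reaching the silicon after a chain of conversion stages."""
    return grid_mw * prod(stage_effs)

# Illustrative chains (all efficiencies assumed for this sketch):
ac_chain   = [0.96, 0.985, 0.94]   # UPS, PDU transformer, server PSU
hvdc_chain = [0.985, 0.975]        # central rectifier, rack DC-DC

# Per 100 MW drawn from the grid, the shorter HVDC chain delivers
# several MW more to the chips than the legacy AC chain.
print(delivered_mw(100, ac_chain), delivered_mw(100, hvdc_chain))
```

At gigawatt scale even a few points of chain efficiency translate into tens of megawatts, enough to power an additional compute hall.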

4 Deployment Roadmap and Timeline

The Stargate Project follows a rigorous 5-phase evolution, transitioning from pilot clusters to a unified global compute fabric.

5 Conclusion

The Stargate Project is not merely a data center expansion; it is the foundational layer for the next era of human intelligence. By converging hyperscale capital, cutting-edge semiconductor engineering, and revolutionary energy solutions, Stargate provides the necessary substrate for the emergence of Artificial General Intelligence.