Last year, the AI industry drew an estimated 4.3 gigawatts of power globally. That’s roughly the average electricity demand of all of Ireland. By 2027, that number is projected to triple. Training a single frontier model like GPT-4 required an estimated 50 GWh of electricity, releasing approximately 25,000 tons of CO2.

These numbers are getting hard to ignore. Microsoft, Google, and Amazon have all seen their carbon emissions rise despite aggressive renewable energy commitments, driven almost entirely by AI infrastructure growth. The industry is facing an uncomfortable question: can we continue scaling AI without cooking the planet?

The conventional answers - more efficient chips, better cooling, renewable energy - are necessary but insufficient. We’ve been exploring a more radical option: what if the greenest place to train AI models isn’t on Earth at all?

The physics case for orbital compute

Earth-based data centers fight physics every day. They consume enormous energy to remove heat that the equipment generates. They compete for land near power infrastructure. They depend on grid capacity that takes years to build.

Orbital infrastructure inverts these constraints.

Unlimited solar power. A solar array in orbit receives 1,361 W/m² of continuous solar flux. No atmosphere, no clouds, no night cycle in the right orbit. A terrestrial solar installation averages 200-300 W/m² after accounting for weather and day/night cycles. The raw energy availability is 4-7x higher per square meter of panel.
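
To make that ratio concrete, here is a quick back-of-envelope check, using only the averaged figures quoted above (not site-specific data):

```python
# Back-of-envelope check on the orbital vs. terrestrial flux ratio.
SOLAR_CONSTANT_W_M2 = 1361          # solar flux above the atmosphere
TERRESTRIAL_AVG_W_M2 = (200, 300)   # averaged over weather and day/night cycles

for ground in TERRESTRIAL_AVG_W_M2:
    ratio = SOLAR_CONSTANT_W_M2 / ground
    print(f"orbital flux is {ratio:.1f}x a {ground} W/m^2 ground average")
# -> roughly 4.5x to 6.8x, the 4-7x range quoted above
```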

Free cooling. In the vacuum of space, heat radiates directly into the 2.7 Kelvin cosmic background. No chillers. No cooling towers. No water consumption. A properly designed spacecraft radiator can reject waste heat at efficiencies impossible on Earth. Terrestrial data centers spend 30-40% of their total energy on cooling. In orbit, that drops to near zero.
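
For a sense of the physics, here is a rough sketch of passive heat rejection using the Stefan-Boltzmann law. The emissivity and radiator temperatures are illustrative assumptions, and solar and Earth-albedo loading on the radiator are ignored:

```python
# Rough passive heat rejection per square meter of radiator (Stefan-Boltzmann law).
# Emissivity and temperatures are illustrative assumptions; solar and
# Earth-albedo loading on the radiator are ignored.
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W/(m^2 K^4)
EMISSIVITY = 0.9   # assumed radiator coating emissivity
T_SINK_K = 2.7     # cosmic microwave background

for t_radiator_k in (300.0, 350.0):
    flux = EMISSIVITY * SIGMA * (t_radiator_k**4 - T_SINK_K**4)
    print(f"a {t_radiator_k:.0f} K radiator rejects ~{flux:.0f} W per m^2 per face")
# -> roughly 410 W/m^2 at 300 K and 770 W/m^2 at 350 K
```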

No grid constraints. The single biggest bottleneck for new data center construction isn’t money - it’s power interconnection. In Northern Virginia, the world’s largest data center market, new projects face 4-7 year waits for grid capacity. In orbit, you scale power by deploying more solar arrays. The constraint becomes launch capacity, not utility bureaucracy.

No water consumption. Terrestrial data centers consume billions of gallons of water annually for cooling. A hyperscale facility can use 5 million gallons per day. Orbital facilities use zero.

When we model these factors together, the theoretical energy efficiency advantage of orbital compute is substantial: 40-60% less total energy per computation compared to equivalent terrestrial infrastructure, with zero direct carbon emissions during operation.

The numbers that actually matter

Theory is nice. Let’s look at real numbers.

We modeled a hypothetical 100 MW orbital compute constellation against an equivalent terrestrial deployment. Here are the assumptions:

Terrestrial baseline:

  • Power: 100 MW IT load
  • PUE (Power Usage Effectiveness): 1.3 (well below the industry average of roughly 1.5)
  • Total facility power: 130 MW
  • Grid carbon intensity: 400g CO2/kWh (US average)
  • Annual emissions: 456,000 tons CO2
  • Water consumption: 1.2 billion gallons/year

Orbital alternative:

  • Power: 100 MW IT load
  • PUE: 1.05 (minimal overhead for power conditioning)
  • Total power: 105 MW (all solar)
  • Operating emissions: 0
  • Water consumption: 0

The operating carbon savings are stark: 456,000 tons of CO2 per year avoided. Over a 7-year operational lifetime, that’s 3.2 million tons of CO2.
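
For transparency, here is a minimal sketch that reproduces the arithmetic behind those figures from the assumptions listed above:

```python
# Reproduce the operating-emissions comparison from the assumptions above.
HOURS_PER_YEAR = 8760

# Terrestrial baseline: 100 MW IT load at PUE 1.3 on a 400 g CO2/kWh grid.
it_load_mw = 100
facility_mw = it_load_mw * 1.3                       # 130 MW total facility power
annual_kwh = facility_mw * 1_000 * HOURS_PER_YEAR
annual_tons_co2 = annual_kwh * 400 / 1e6             # grams -> metric tons
print(f"terrestrial operating emissions: {annual_tons_co2:,.0f} t CO2/yr")   # ~456,000

# Orbital alternative: all-solar, zero operating emissions.
lifetime_years = 7
avoided_tons = annual_tons_co2 * lifetime_years
print(f"avoided over {lifetime_years} years: {avoided_tons / 1e6:.1f} million t CO2")  # ~3.2
```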

But operating emissions aren’t the full picture. What about the carbon cost of getting hardware to orbit?

The launch carbon question

Critics correctly point out that rocket launches aren’t carbon-free. A Falcon 9 launch burns roughly 150 tons of RP-1 kerosene alongside several hundred tons of liquid oxygen; counting combustion plus propellant production, we conservatively budget roughly 1,200 tons of CO2 per launch.

Let’s do the math honestly.

Our 100 MW orbital constellation requires approximately 500 tons of hardware in orbit, including compute nodes, solar arrays, radiators, and spacecraft buses. At current payload capacities, that’s roughly 25-30 Falcon 9-equivalent launches. Total launch carbon: approximately 30,000-36,000 tons of CO2.

Payback period: the orbital constellation offsets its launch carbon in about 25-30 days of operation. Over a 7-year lifetime, the launch carbon amounts to roughly 1% of the avoided emissions.
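
The payback arithmetic, for anyone who wants to check it (a sketch using the per-launch budget above):

```python
# Launch-carbon payback for the 100 MW constellation, using the estimates
# above: 25-30 Falcon 9-class launches, budgeted at ~1,200 t CO2 each.
annual_avoided_t = 455_520                 # terrestrial baseline emissions avoided per year
daily_avoided_t = annual_avoided_t / 365

for launches in (25, 30):
    launch_carbon_t = launches * 1_200
    payback_days = launch_carbon_t / daily_avoided_t
    lifetime_share = launch_carbon_t / (annual_avoided_t * 7)
    print(f"{launches} launches: {launch_carbon_t:,} t CO2, "
          f"payback ~{payback_days:.0f} days, "
          f"{lifetime_share:.1%} of 7-year avoided emissions")
# -> roughly 24-29 days, about 0.9-1.1% of the avoided emissions
```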

Launch technology is evolving too. Methane-fueled rockets like Starship and New Glenn have lower carbon intensity. SpaceX is developing propellant production using renewable electricity. Within a decade, launch carbon may drop by 50-80%.

Distributed compute makes this possible

Here’s where it gets interesting for AI workloads specifically.

Orbital compute doesn’t work as a simple replacement for terrestrial data centers. The latency, bandwidth constraints, and operational complexity make it unsuitable for general-purpose cloud computing.

But AI training isn’t general-purpose computing. Large-scale model training is:

  • Embarrassingly parallel across data batches
  • Tolerant of latency (training runs take days to weeks anyway)
  • Bandwidth-constrained (gradient synchronization is already the bottleneck)
  • Power-hungry (exactly the workloads where orbital advantages are largest)

This is where federated learning and gradient compression become essential. Traditional distributed training assumes high-bandwidth, low-latency interconnects. That doesn’t work for Earth-space coordination.

But with aggressive gradient compression - 100x is achievable with minimal accuracy loss - you can train models across a hybrid Earth-space infrastructure where:

  • Earth nodes handle latency-sensitive operations and data ingestion
  • Orbital nodes handle power-intensive forward/backward passes
  • Compressed gradients synchronize during ground station passes
  • The overall training timeline extends modestly, but energy consumption drops dramatically

Our simulations show that a 70B parameter model trained with hybrid Earth-space infrastructure consumes 35-45% less total energy than pure terrestrial training, while adding only 15-20% to the total training time. For workloads where carbon footprint matters more than time-to-completion, that’s a compelling tradeoff.
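
To make the compression idea concrete, here is a minimal sketch of one widely used approach, top-k sparsification with error feedback. It illustrates the general technique at a roughly 100x ratio; it is not our production implementation:

```python
import numpy as np

class TopKCompressor:
    """Top-k gradient sparsification with error feedback (illustrative sketch)."""

    def __init__(self, ratio: float = 0.01):    # 0.01 -> roughly 100x fewer values sent
        self.ratio = ratio
        self.residual = None                     # untransmitted remainder from prior steps

    def compress(self, grad: np.ndarray):
        if self.residual is None:
            self.residual = np.zeros_like(grad)
        corrected = grad + self.residual         # error feedback: re-add what was dropped
        flat = corrected.ravel()
        k = max(1, int(flat.size * self.ratio))
        idx = np.argpartition(np.abs(flat), -k)[-k:]   # largest-magnitude entries
        vals = flat[idx]
        sent = np.zeros_like(flat)
        sent[idx] = vals
        self.residual = (flat - sent).reshape(grad.shape)   # delay, don't lose, small updates
        return idx, vals                         # the sparse payload actually transmitted

    @staticmethod
    def decompress(idx, vals, shape):
        dense = np.zeros(int(np.prod(shape)), dtype=vals.dtype)
        dense[idx] = vals
        return dense.reshape(shape)

# Example: a fake 4M-parameter layer gradient, compressed ~100x before uplink.
compressor = TopKCompressor(ratio=0.01)
g = np.random.randn(4096, 1024).astype(np.float32)
idx, vals = compressor.compress(g)
print(f"dense entries: {g.size:,}  transmitted: {vals.size:,}  ratio: {g.size / vals.size:.0f}x")
```

The error-feedback residual is what keeps accuracy loss small: entries that miss the top-k cut are carried forward to a later synchronization rather than discarded.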

Which workloads actually make sense

Not every AI workload should move to orbit. The economics favor specific characteristics:

Good fit:

  • Long training runs (weeks to months)
  • Large models where power dominates cost
  • Organizations with carbon commitments or carbon pricing exposure
  • Research workloads without hard deadlines
  • Inference workloads that can tolerate 100-200ms latency

Poor fit:

  • Real-time inference requiring sub-50ms latency
  • Short experiments and hyperparameter sweeps
  • Workloads requiring interactive debugging
  • Training runs with tight deadlines

The sweet spot is the large, long-running training jobs that dominate AI’s energy footprint. A single GPT-4 scale training run represents more carbon than thousands of smaller experiments. Targeting the big jobs yields disproportionate environmental impact.
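
As a rough illustration of how these criteria could be screened programmatically, here is a hypothetical helper; the field names and thresholds are placeholders, not recommendations:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Workload:
    expected_duration_days: float
    latency_budget_ms: Optional[float]   # None for offline training jobs
    interactive: bool                    # needs interactive debugging?
    hard_deadline: bool

def orbital_fit(w: Workload) -> bool:
    """Crude screen based on the criteria above; thresholds are illustrative."""
    if w.interactive or w.hard_deadline:
        return False
    if w.latency_budget_ms is not None and w.latency_budget_ms < 100:
        return False
    return w.expected_duration_days >= 14    # long runs dominate the energy footprint

# A month-long training run with no latency budget: good fit.
print(orbital_fit(Workload(30, None, interactive=False, hard_deadline=False)))   # True
# A latency-critical, interactive experiment: poor fit.
print(orbital_fit(Workload(0.5, 30, interactive=True, hard_deadline=True)))      # False
```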

What we’re not sure about yet

We want to be honest about the uncertainties.

Hardware lifecycle emissions. We haven’t fully accounted for the embodied carbon of space-qualified hardware, which may be higher than terrestrial equivalents due to additional testing and redundancy.

End-of-life. Responsible orbital operations require deorbiting hardware at end of life. The carbon cost isn’t zero, though it’s small compared to operational factors.

Constellation operations. Running an orbital compute constellation requires ground infrastructure - mission control, ground stations, network operations. We’ve assumed this overhead is modest but haven’t quantified it precisely.

Technology readiness. Large-scale orbital compute is not yet deployed. The numbers here are projections based on component-level performance, not measured system performance.

Where this is going

Orbital compute isn’t ready to train GPT-5 today. But the physics advantages are real, the economics are improving, and the environmental imperative is becoming urgent.

What’s needed:

  1. Demonstration missions proving space-qualified AI accelerators can operate reliably in orbit
  2. Federated learning frameworks optimized for high-latency, intermittent connectivity
  3. Gradient compression techniques that maintain training convergence at extreme compression ratios
  4. Hybrid scheduling systems that intelligently partition workloads between Earth and orbit (see the sketch after this list)
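
On the fourth point, here is a toy sketch of what Earth-orbit partitioning could look like. The link rates, pass durations, and data structures are hypothetical assumptions, not a description of our scheduler:

```python
from dataclasses import dataclass

@dataclass
class TrainingStep:
    step_id: int
    compute_hours: float        # GPU-hours of forward/backward work
    sync_mb_compressed: float   # gradient payload size after compression, in MB

def partition(steps, ground_pass_s: float, link_mbps: float = 200.0):
    """Send the heaviest compute to orbit as long as its compressed gradient
    traffic fits within the next ground-station pass; keep the rest on Earth.
    All thresholds and link rates here are illustrative assumptions."""
    budget_mb = link_mbps / 8 * ground_pass_s                    # MB transferable per pass
    orbital, terrestrial = [], []
    for step in sorted(steps, key=lambda s: -s.compute_hours):   # heaviest steps first
        if step.sync_mb_compressed <= budget_mb:
            orbital.append(step)
            budget_mb -= step.sync_mb_compressed
        else:
            terrestrial.append(step)
    return orbital, terrestrial

# 100 identical steps, an 8-minute ground-station pass at 200 Mbps.
steps = [TrainingStep(i, compute_hours=8.0, sync_mb_compressed=150.0) for i in range(100)]
orbital, ground = partition(steps, ground_pass_s=480)
print(f"orbital: {len(orbital)} steps, terrestrial: {len(ground)} steps")   # 80 / 20
```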

We’re working on all of these. Our distributed compute platform is designed from the ground up for Earth-space coordination, with gradient compression, adaptive synchronization, and fault tolerance built into the core architecture.

The goal isn’t to move all of AI to orbit. It’s to give organizations a choice: for workloads where sustainability matters, orbital compute could offer the lowest-carbon path to training frontier models.

The greenest electron is the one the grid never has to generate. In orbit, the sun provides it for free.


Explore our distributed compute research, including gradient compression benchmarks and Earth-space coordination models, at /products/distributed-compute/. Try the interactive simulation at /products/distributed-compute/demo/.