Every few months, another headline announces that space data centers will revolutionize computing. The pitch is seductive: unlimited cooling capacity, abundant solar power, no real estate constraints. The numbers thrown around ("10x cheaper energy") sound transformative.

But when you dig into the actual economics, the picture gets complicated. We’ve spent the last year building models to understand when orbital compute makes financial sense. The answer isn’t “always” or “never.” It’s “for specific workloads, under specific conditions, with specific architectures.”

Here’s what we’ve learned.

The cost equation most people get wrong

The standard argument for orbital data centers focuses on operational costs. Solar power is effectively free after capital expenditure. Radiative cooling eliminates the 30-40% of terrestrial data center energy that goes to thermal management. No land costs. No property taxes.

All true. But operational costs are only part of the picture.

The capital expenditure equation looks very different. Getting hardware to orbit costs $1,500-3,000/kg, depending on your launch provider and manifest timing. A rack-equivalent of compute hardware weighs around 200kg. That's $300,000-600,000 just to get the equipment off Earth, before you account for the hardware itself, the spacecraft bus, power systems, thermal radiators, and communication links.
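The launch-cost arithmetic above is simple enough to sketch directly. The figures below are the ranges quoted in this post; the function name and structure are purely illustrative:

```python
def launch_capex(rack_mass_kg=200, cost_per_kg_low=1_500, cost_per_kg_high=3_000):
    """Launch-cost range for one rack-equivalent of hardware.

    Uses the $1,500-3,000/kg and ~200 kg figures from the post.
    Excludes the hardware itself, spacecraft bus, power systems,
    thermal radiators, and communication links.
    """
    return rack_mass_kg * cost_per_kg_low, rack_mass_kg * cost_per_kg_high

low, high = launch_capex()
print(f"${low:,} - ${high:,}")  # $300,000 - $600,000
```

Launch cost scales linearly with mass, which is why mass budgets dominate orbital hardware design in a way they never do on the ground.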

Then there’s the maintenance problem. When a terrestrial server fails, you swap it out. When orbital hardware fails, you’ve lost the asset. Mean time between failures becomes existential.
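One way to see why MTBF becomes existential: under a simple exponential failure model (an assumption for illustration, not data from actual orbital deployments), the probability that a node survives a multi-year mission falls off quickly as mission length approaches the MTBF:

```python
import math

def survival_probability(mission_years, mtbf_years):
    """P(node still operating after mission_years), assuming an
    exponential failure model with constant hazard rate.

    This model and these numbers are illustrative assumptions;
    real orbital reliability data for commercial silicon is sparse.
    """
    return math.exp(-mission_years / mtbf_years)

# Hypothetical: a node with a 10-year MTBF has roughly a 74% chance
# of surviving a 3-year mission under this model.
print(round(survival_probability(3, 10), 2))  # 0.74
```

Since a failed orbital node is a written-off asset rather than a swapped part, that survival probability feeds directly into expected capital cost.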

When we built our feasibility calculator, we found that the crossover point (where orbital compute becomes cheaper than terrestrial) depends critically on three variables that most analyses ignore.

Variable 1: Workload duration

Short-duration workloads almost never make sense in orbit. The capital recovery period is too long.

If you’re running a computation for 24 hours, the amortized launch cost dominates everything else. Our models show you need sustained utilization of 3+ years before orbital economics start working in your favor. For many AI training runs, that’s longer than the model will be relevant.
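The amortization effect is easy to see numerically. Here is a minimal sketch, using a hypothetical $5M total capex for one orbital node (the dollar figure is an illustrative assumption, not a model output):

```python
def amortized_capex_per_hour(capex_usd, duration_years):
    """Spread a one-time orbital capex over sustained utilization.

    capex_usd is a hypothetical all-in figure (launch + hardware +
    bus + integration); the point is the shape of the curve, not
    the absolute numbers.
    """
    hours = duration_years * 365 * 24
    return capex_usd / hours

# A 24-hour job vs. multi-year sustained operation:
for years in (1 / 365, 1, 3, 5):
    rate = amortized_capex_per_hour(5_000_000, years)
    print(f"{years:>8.3f} years -> ${rate:,.2f}/hour of capex")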

But here's where it gets interesting. If you're operating infrastructure rather than running one-off jobs, the math changes. A satellite operator running continuous conjunction screening. A financial firm running persistent global surveillance of shipping patterns. A climate modeling consortium running decade-long simulations. For these use cases, the high upfront cost spreads across years of continuous operation.

The workloads where orbital compute makes sense are boring and persistent, not flashy and episodic.

Variable 2: Power density

Terrestrial data centers are increasingly constrained by power delivery infrastructure. Getting 50+ MW to a site requires significant grid investment and often years of permitting. In some regions, new data center construction has effectively stalled because power isn’t available.

Orbital systems don't face this constraint in the same way. You can scale power by scaling solar array size, and the cost of doing so is roughly linear, whereas each additional terrestrial MW runs into escalating grid interconnection costs.

We modeled the crossover point for power-constrained scenarios. If your terrestrial alternative requires building new transmission infrastructure (adding $500M+ and 3-5 years to your timeline), orbital deployment can actually be faster and cheaper. This surprised us.
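A toy version of that comparison: score each option by total cost and time-to-first-compute. Every number below is a hypothetical placeholder chosen only to illustrate the shape of the decision, not an output of our models:

```python
def deployment_profile(build_cost_usd, build_years, annual_opex_usd, operating_years):
    """Total lifetime cost and time-to-first-compute for one option.

    All inputs are illustrative assumptions for a power-constrained
    scenario; a real comparison needs discounting, utilization, and
    reliability terms.
    """
    total_cost = build_cost_usd + annual_opex_usd * operating_years
    return total_cost, build_years

# Hypothetical: terrestrial site needing a new-transmission buildout
# vs. an orbital deployment, both over a 10-year horizon.
terrestrial = deployment_profile(900e6, 4, 40e6, 10)
orbital = deployment_profile(700e6, 2, 25e6, 10)
print(terrestrial)  # (1300000000.0, 4)
print(orbital)      # (950000000.0, 2)
```

With these assumed inputs the orbital option wins on both axes, which is the pattern the crossover analysis found for power-constrained markets; with cheap existing grid capacity, the terrestrial numbers flip the result.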

The implication: orbital compute isn’t competing with existing data centers. It’s competing with new data centers in power-constrained markets. That’s a different competitive landscape.

Variable 3: Latency tolerance

Here’s the variable that eliminates most workloads from consideration.

Orbital compute adds latency. Even in LEO, you’re looking at 20-40ms round-trip to ground. With inter-satellite links and realistic ground station coverage, end-to-end latency for a request-response pattern runs 50-150ms.

For interactive workloads, that’s disqualifying. No amount of cost savings makes a 150ms database query acceptable for a user-facing application.

But many high-value workloads aren’t interactive. Batch AI training. Large-scale simulation. Render farms. Scientific computing. Archival data processing. These workloads are latency-tolerant by nature. They submit jobs and wait hours or days for results.

When we segment the market by latency tolerance, a meaningful cluster of workloads emerges where orbital economics potentially work. Not a majority. But not negligible either.

The crossover map

Putting these variables together, we built what we call a "crossover map": a visualization of which workload characteristics favor orbital vs. terrestrial deployment.

The sweet spot is:

  • Duration: 3+ years of sustained operation
  • Power: 10+ MW with constrained terrestrial alternatives
  • Latency: 500ms+ acceptable round-trip
  • Hardware refresh: Low dependency on cutting-edge silicon (which is hard to get to orbit quickly)
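The four criteria above reduce to a simple checklist. Here is a minimal sketch; the thresholds come straight from the bullet list, while the function name and boolean encoding of "hardware refresh dependency" are illustrative choices:

```python
def in_sweet_spot(duration_years, power_mw, latency_tolerance_ms,
                  needs_cutting_edge_silicon):
    """Check a workload against the four sweet-spot criteria:
    3+ years sustained, 10+ MW, 500ms+ latency tolerance, and
    low dependency on current-generation silicon.
    """
    return (duration_years >= 3
            and power_mw >= 10
            and latency_tolerance_ms >= 500
            and not needs_cutting_edge_silicon)

# A decade-long climate simulation cluster:
print(in_sweet_spot(10, 20, 10_000, False))  # True
# An interactive user-facing service (latency kills it):
print(in_sweet_spot(5, 15, 100, False))      # False
```

A real model scores these as continuous variables rather than hard thresholds, but the binary version captures why most workloads fail at least one criterion.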

AI training clusters for large models actually hit several of these criteria. Long training runs. Massive power requirements. Latency-tolerant batch processing. The main issue is hardware refresh-training clusters need current-generation GPUs, and there’s a 12-24 month lag getting new silicon space-qualified and launched.

The workloads that score highest on our model tend to be infrastructure services: persistent data processing, continuous simulation, long-duration scientific computing. Less exciting than “ChatGPT in space,” but more economically viable.

What we’re not accounting for

Our models have significant uncertainty bounds, and we want to be honest about what we’re not capturing well.

Reliability at scale: We don’t have good data on how modern compute hardware performs over multi-year orbital deployments. The space heritage data is mostly from radiation-hardened systems, not commercial silicon.

Constellation economics: Operating one orbital compute node is different from operating a hundred. There may be economies of scale we’re not modeling.

Regulatory trajectory: Orbital spectrum and debris regulations are evolving. Future compliance costs are hard to predict.

Technology curves: Launch costs continue to drop. Satellite manufacturing is becoming more automated. The crossover points we’re calculating today will shift.

The honest conclusion

Orbital compute will not replace terrestrial data centers. The economics don’t support that for the vast majority of workloads.

But for a specific slice of high-power, long-duration, latency-tolerant computing, our models suggest orbital deployment becomes economically competitive within the 2027-2030 timeframe, assuming launch costs continue their current trajectory.

The opportunity isn’t to move everything to orbit. It’s to identify the workloads where orbital deployment is actually the rational choice, and build the tools to evaluate that decision rigorously.

That’s what we’re focused on. If you’re evaluating orbital compute for a specific use case, we’d like to help you run the numbers.


Our feasibility calculator incorporates these economic models. Request access to try it with your workload parameters.