Multiple companies have announced plans to deploy orbital data centers by 2027. Press releases promise megawatts of space-based compute, AI training in orbit, revolutionary economics. The headlines are exciting.

We’ve been modeling orbital compute feasibility for over a year. We’ve worked with teams planning actual deployments. And we have a more nuanced view of the 2027 timeline: some things will happen, but probably not the things grabbing headlines.

Here’s our analysis of what’s likely, what’s possible, and what’s probably not happening in 2027.

What the announcements claim

The orbital data center announcements share common themes:

  • Multi-megawatt power systems
  • Commercial-scale compute clusters (hundreds to thousands of GPUs)
  • AI training workloads
  • Dramatic cost advantages versus terrestrial alternatives
  • Deployment timelines of 18-24 months

Reading these, you’d think orbital data centers are imminent at scale. The reality is more complicated.

The credibility gradient

Not all orbital compute projects are equal. We see three tiers:

Tier 1: Technology demonstrations (likely to happen)

Small-scale experiments with compute hardware in orbit. Single satellites or small clusters. Tens of kilowatts of power. Real hardware, real data, real learning, but not commercially meaningful scale.

Several of these are well underway and likely to fly in 2026-2027. They’ll generate important operational data and validate designs. But they’re not going to run production AI workloads.

Tier 2: Operational pilot systems (uncertain timing)

Larger deployments with 100kW-1MW of compute capacity. Big enough to run meaningful workloads but not yet economically competitive with terrestrial alternatives. Aimed at proving the operational model, building experience, and identifying problems.

We think these are 2028-2029 events, not 2027. The hardware development cycles, launch manifests, and ground system buildout don’t support earlier deployment for the teams we’ve tracked.

Tier 3: Commercial-scale orbital data centers (not 2027)

Multi-megawatt facilities competitive with terrestrial compute costs for production workloads. This is what the press releases imply but the fine print doesn't support.

We’re skeptical that any of this will be operational before 2030. The gaps between current capability and commercial scale are too large to close in 18-24 months.

The specific technical gaps

Instead of hand-waving about timelines, let’s look at specific technical requirements and where they stand.

Power systems

Commercial-scale orbital compute needs megawatts of power. Current large satellites operate at 20-30kW. The ISS, the largest crewed spacecraft ever, has about 120kW of solar array capacity.

Scaling to megawatts requires:

  • Very large solar arrays (1MW needs ~3,000m² of array area)
  • High-power electrical systems (batteries, power distribution)
  • Deployment and structural systems for large flexible arrays

These technologies exist in labs and are being developed for lunar/Mars applications. But flight-qualified hardware at megawatt scale? Not yet. The development programs targeting this are on 3-5 year timelines from first hardware tests.

For 2027, we expect to see 100-200kW systems at most: impressive compared to current satellites, but an order of magnitude below commercial data center scale.
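As a sanity check on the array-area figure above, here is a minimal sizing sketch in Python. The cell efficiency, packing factor, and degradation margin are our illustrative assumptions, not numbers from any announced design.

```python
# Rough solar array sizing for an orbital compute platform.
# Parameter values below are illustrative assumptions, not a specific design.

SOLAR_CONSTANT_W_M2 = 1361.0  # solar irradiance at 1 AU, W/m^2

def array_area_m2(power_kw: float,
                  cell_efficiency: float = 0.30,    # assumed multi-junction cells
                  packing_factor: float = 0.85,     # cell coverage of the blanket
                  end_of_life_derate: float = 0.90  # assumed degradation margin
                  ) -> float:
    """Deployed array area needed to deliver power_kw at end of life (sunlit, ideal pointing)."""
    w_per_m2 = SOLAR_CONSTANT_W_M2 * cell_efficiency * packing_factor * end_of_life_derate
    return power_kw * 1000.0 / w_per_m2

for p_kw in (100, 200, 1000):
    print(f"{p_kw:>5} kW -> ~{array_area_m2(p_kw):,.0f} m^2 of array")
# With these assumptions, 1 MW lands a bit above 3,000 m^2, consistent with the
# figure above, and that is before sizing batteries to ride through eclipse.
```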

Compute hardware

You can’t just launch a terrestrial GPU and expect it to work. Space-qualified computing requires:

  • Radiation tolerance (single-event upsets, total ionizing dose)
  • Thermal qualification (vacuum thermal cycling, wide temperature ranges)
  • Vibration qualification (launch loads)
  • Extended qualification testing (typically 12-18 months)

The cutting-edge GPUs driving AI training today have no space heritage. By the time current chips complete space qualification, they’re two generations behind the terrestrial state of the art.

This is the fundamental hardware refresh problem we discussed in our economics analysis. Orbital compute will always run older silicon than terrestrial data centers. For some workloads that’s fine. For competitive AI training, it’s a real limitation.

We’ve seen approaches to mitigate this: radiation-tolerant architectures that enable faster qualification, designs that accept higher fault rates and compensate in software. But these are R&D efforts, not production-ready solutions.
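To put the refresh lag in rough numbers, here is a toy calculation. The 12-18 month qualification range comes from the list above; the integration time, launch manifest wait, and GPU generation cadence are our assumptions for illustration.

```python
# Toy estimate of how far behind terrestrial silicon an orbital deployment flies.
# The qualification range is from the text; the other durations are assumptions.

def generations_behind(qualification_months: float,
                       integration_months: float = 12.0,  # assumed system integration
                       launch_wait_months: float = 6.0,   # assumed manifest wait
                       gpu_cadence_months: float = 18.0   # assumed new-generation cadence
                       ) -> float:
    lag_months = qualification_months + integration_months + launch_wait_months
    return lag_months / gpu_cadence_months

for qual_months in (12, 18):
    lag = generations_behind(qual_months)
    print(f"qualification {qual_months} mo -> ~{lag:.1f} GPU generations behind at launch")
# Roughly 1.5-2 generations behind on day one with these assumptions,
# before counting the years the hardware then spends on orbit.
```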

Thermal management at scale

Our thermal simulations show that scaling from 10kW to 100kW to 1MW isn’t linear. Heat transport from concentrated compute hardware to distributed radiators becomes the binding constraint. Radiator pointing requirements conflict with solar array pointing requirements.

The thermal architectures that work at ISS scale (tens of kilowatts) don’t scale to megawatt orbital data centers. New thermal management approaches are needed: larger deployable radiators, advanced heat transport, active thermal control systems.

These are engineering problems, not physics problems. They’ll get solved. But they add development time and risk that the aggressive timelines don’t seem to account for.
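To see why heat rejection becomes the binding constraint, here is a minimal Stefan-Boltzmann sketch of radiator area versus heat load. The emissivity, radiating temperature, and effective sink temperature are illustrative assumptions, and a real design would also have to handle eclipse transients, view factors, and transport losses.

```python
# Rough radiator sizing: area needed to reject waste heat to space.
# Emissivity, radiator temperature, and sink temperature are assumed values.

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def radiator_area_m2(heat_kw: float,
                     emissivity: float = 0.90,     # assumed coating emissivity
                     t_radiator_k: float = 320.0,  # assumed panel temperature (~47 C)
                     t_sink_k: float = 250.0,      # assumed effective environment sink
                     two_sided: bool = True) -> float:
    """Radiator area to reject heat_kw, assuming a uniform panel temperature."""
    flux_w_m2 = SIGMA * emissivity * (t_radiator_k**4 - t_sink_k**4)
    if two_sided:
        flux_w_m2 *= 2.0
    return heat_kw * 1000.0 / flux_w_m2

for q_kw in (10, 100, 1000):
    print(f"{q_kw:>5} kW of heat -> ~{radiator_area_m2(q_kw):,.0f} m^2 of radiator")
# A megawatt of compute is also a megawatt of waste heat, so at that scale the
# radiators end up in the same size class as the solar arrays themselves.
```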

Ground systems and operations

An orbital data center isn’t just hardware in space. It’s a complete system including:

  • Ground station networks for high-bandwidth connectivity
  • Mission operations centers
  • Job scheduling and workload management systems
  • Monitoring, diagnostics, and anomaly response
  • Customer interfaces and APIs

Building ground systems for a new class of space operations takes time. The teams we’ve spoken with are often well along on flight hardware but earlier on ground systems. This is a common pattern: the less visible infrastructure gets less attention until it becomes the critical path.
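As one small illustration of what the ground segment has to handle, here is a hypothetical sketch of a scheduling constraint an orbital workload manager faces: results can only come down during ground-station contact windows. The data structures, windows, and bandwidths are invented for the example, not drawn from any team's actual system.

```python
# Hypothetical sketch: when can a job's results reach the customer, given that
# downlink only happens during ground-station contact windows? All numbers invented.

from dataclasses import dataclass

@dataclass
class ContactWindow:
    start_min: float       # minutes from now
    duration_min: float
    downlink_gbps: float

def earliest_delivery_min(job_finish_min: float,
                          result_gb: float,
                          windows: list[ContactWindow]) -> float | None:
    """Return when the full result set is on the ground, or None if scheduled passes don't suffice."""
    remaining_gb = result_gb
    for w in sorted(windows, key=lambda w: w.start_min):
        window_end = w.start_min + w.duration_min
        if window_end <= job_finish_min:
            continue  # this pass is over before the job even finishes
        usable_start = max(w.start_min, job_finish_min)
        usable_min = window_end - usable_start
        capacity_gb = w.downlink_gbps * 60.0 * usable_min / 8.0  # Gbit/s over minutes -> GB
        if capacity_gb >= remaining_gb:
            return usable_start + remaining_gb * 8.0 / (w.downlink_gbps * 60.0)
        remaining_gb -= capacity_gb
    return None  # needs more passes than are currently scheduled

windows = [ContactWindow(20, 8, 1.2), ContactWindow(110, 10, 1.2)]
print(earliest_delivery_min(job_finish_min=30.0, result_gb=60.0, windows=windows))  # ~116.7
```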

What we expect to actually see in 2027

Our best guess at the 2027 orbital compute landscape:

  1. Multiple technology demonstrations with real compute hardware operating in orbit, generating valuable data about radiation effects, thermal behavior, and operational challenges. 10-50kW power scale.

  2. At least one larger-scale pilot in the 100kW range, probably more experiment than production system. Demonstrating that the architecture can work, not yet proving it’s economically viable.

  3. No production AI training workloads at meaningful scale. The hardware timing, power systems, and ground infrastructure won’t support competitive commercial operations.

  4. Growing commitment from cloud providers and aerospace companies, with announcements of larger programs targeting 2029-2030 deployment. The 2027 experiments will inform these bigger bets.

  5. Significant learning about what actually works in orbit versus what works on paper. Some approaches will prove out. Others won’t. The field will start to converge on viable architectures.

The milestones to watch

Rather than tracking press releases, here are the technical milestones that will tell you if orbital compute is getting real:

  • First 100kW+ solar array deployed and operational (proves power scaling)
  • First GPU cluster operating in orbit for 6+ months (proves compute hardware viability)
  • First thermal system managing 50+ kW in eclipse cycles (proves thermal design)
  • First commercial workload running in orbit (even a small one proves the operational model)

As of today, none of these milestones have been achieved. All are plausible in the 2027-2028 timeframe. When they happen, the path to commercial scale becomes much clearer.

Our role in this

RotaStellar isn’t building orbital data centers. We’re building tools that help the teams who are.

Our job is to model these systems rigorously (thermal behavior, latency characteristics, economic crossover points) so that development teams can make informed decisions. When someone asks “will our thermal design work?” we can simulate it. When someone asks “does this architecture make economic sense?” we can run the numbers.
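For a flavor of the economic side, here is a deliberately oversimplified cost-per-kilowatt-year comparison. Every input is a placeholder assumption; the real models carry much more structure (hardware refresh, thermal derating, insurance, downlink, financing). The point is only that the crossover hinges heavily on launch cost.

```python
# Deliberately oversimplified orbital vs terrestrial cost comparison.
# Every number here is a placeholder assumption for illustration.

def orbital_cost_per_kw_year(launch_cost_per_kg: float,
                             kg_per_kw: float,           # assumed mass of power + compute + thermal per kW
                             hardware_cost_per_kw: float,
                             lifetime_years: float,
                             ops_cost_per_kw_year: float) -> float:
    capex_per_kw = launch_cost_per_kg * kg_per_kw + hardware_cost_per_kw
    return capex_per_kw / lifetime_years + ops_cost_per_kw_year

TERRESTRIAL_PER_KW_YEAR = 3_000.0  # placeholder all-in terrestrial figure

for launch_usd_per_kg in (3_000, 1_000, 300):
    orbital = orbital_cost_per_kw_year(launch_usd_per_kg,
                                       kg_per_kw=20.0,
                                       hardware_cost_per_kw=8_000.0,
                                       lifetime_years=5.0,
                                       ops_cost_per_kw_year=500.0)
    print(f"launch ${launch_usd_per_kg}/kg -> orbital ~${orbital:,.0f}/kW-yr "
          f"vs terrestrial ~${TERRESTRIAL_PER_KW_YEAR:,.0f}/kW-yr")
```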

We’re optimistic about orbital compute in the long run. The physics supports it. The economics will eventually work for the right workloads. But we’re realistic about timelines. The 2027 announcements are better understood as the beginning of serious development than as the arrival of a mature capability.

If you’re evaluating orbital compute-as an operator, investor, or potential customer-we’re happy to share what our models show. The reality is nuanced, but it’s also genuinely interesting.


Want to model orbital compute scenarios for your use cases? Request access to our planning tools.