It’s been a wild few weeks.

SpaceX filed with the FCC for up to a million orbital data center satellites. Starcloud followed with plans for 88,000. Axiom launched its first two nodes in January. Blue Origin announced TeraWave. China laid out a 200,000-satellite constellation. Google’s “Project Suncatcher” is targeting 2027.

And just this week, TechCrunch ran the headline: “Why the economics of orbital AI are so brutal.”

If you’re watching this space (no pun intended), you’d be forgiven for thinking the big question is whether we can get hardware to orbit economically. That’s what most of the coverage focuses on - launch costs, per-watt economics, the Deutsche Bank estimate that we won’t reach cost parity until the 2030s.

That’s a real question. But it’s not the hardest one.

The hardest question is: once you get thousands of compute nodes into orbit, how do you actually run them?

The hardware race is real - and it’s accelerating

Let’s give credit where it’s due. The progress in the last 18 months has been remarkable.

Starcloud put NVIDIA H100s in orbit and trained an LLM in space. That’s not a press release - it’s a real proof point. Axiom’s nodes are operational on the ISS. Launch costs continue to fall. The physics works.

But here’s what nobody’s writing about: every one of these announcements describes the hardware going up. Almost none of them describe the software that makes it work at scale.

And “at scale” is the operative phrase. Running a single GPU node in a controlled ISS environment is one thing. Orchestrating thousands of autonomous compute nodes, each subject to orbital mechanics, eclipse cycles, radiation events, thermal constraints, and intermittent ground contact - that’s a fundamentally different problem.

Why orbital compute needs its own software stack

Terrestrial data centers have had decades to build their software infrastructure. Kubernetes, load balancers, health checks, auto-scaling - we take this for granted. The assumptions baked into that entire stack are: constant power, stable temperatures, permanent network connectivity, and predictable hardware failures.

In orbit, none of those assumptions hold.

[Interactive: Orbital Eclipse Cycle - Why Power Isn’t Constant]

A satellite in low-Earth orbit spends roughly a third of each ~90-minute cycle in Earth’s shadow. No sunlight means no solar power - and the software must decide what to do with running workloads before it happens.

This is the most basic example. A single satellite, a single orbit. Now multiply it by ten thousand nodes, each in a different orbital plane, each hitting eclipse at a different time, each with different thermal profiles and different workloads. The orchestration problem is staggering - and no terrestrial scheduler was designed for it.
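The geometry behind that "roughly a third" is simple enough to sketch. Here's a back-of-the-envelope model in plain Python - a cylindrical-shadow approximation for a circular orbit, not a real eclipse predictor (which would account for the penumbra, orbit perturbations, and the seasonal drift of the beta angle):

```python
import math

R_EARTH_KM = 6371.0
MU_KM3_S2 = 398600.4418  # Earth's gravitational parameter

def orbital_period_min(alt_km: float) -> float:
    """Period of a circular orbit at the given altitude, in minutes."""
    r = R_EARTH_KM + alt_km
    return 2 * math.pi * math.sqrt(r**3 / MU_KM3_S2) / 60.0

def eclipse_fraction(alt_km: float, beta_deg: float = 0.0) -> float:
    """Fraction of a circular orbit spent in Earth's shadow, using a
    cylindrical-shadow model. beta is the angle between the sun vector
    and the orbital plane."""
    r = R_EARTH_KM + alt_km
    h = alt_km
    x = math.sqrt(h**2 + 2 * R_EARTH_KM * h) / (r * math.cos(math.radians(beta_deg)))
    if x >= 1.0:
        return 0.0  # high beta angle: the orbit never enters shadow
    return math.acos(x) / math.pi

period = orbital_period_min(500)          # ~94.6 minutes at 500 km
frac = eclipse_fraction(500)              # ~0.38 at beta = 0
print(f"period {period:.1f} min, eclipse {frac * period:.1f} min per orbit")
```

At 500 km and zero beta angle, that works out to roughly 36 minutes of darkness per 95-minute orbit - and the beta term shows why no two planes in a constellation hit eclipse the same way.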

Five problems nobody’s solving (yet)

We’ve spent the last year and a half building software for exactly this environment. Here’s what we’ve learned about the problems that matter.

1. Eclipse-aware workload scheduling

Nearly every satellite in low-Earth orbit goes dark for roughly 30 minutes of every ~90-minute cycle. That’s not a bug - it’s orbital mechanics. But it means the software layer has to predict eclipse windows, checkpoint running jobs before power drops, decide what to migrate and what to pause, and resume cleanly on the other side.

Kubernetes doesn’t do this. No existing orchestrator does. You need a scheduler that understands orbital mechanics natively.
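To make the decision concrete, here's a toy version of the pre-eclipse planning step - hypothetical names and a deliberately naive policy, nothing like a production scheduler, but it shows the shape of the problem:

```python
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    checkpoint_s: float   # time needed to write a checkpoint
    migratable: bool      # can this job move to a sunlit node?

def plan_for_eclipse(jobs, eclipse_start_s, now_s=0.0, margin_s=30.0):
    """Decide, per job, what to do before power drops: migrate it,
    checkpoint it in place, or pause it because time ran out.
    A toy policy sketch - real schedulers weigh cost and priority."""
    plan = {}
    for job in jobs:
        # Latest moment the action can start and still finish safely.
        deadline = eclipse_start_s - job.checkpoint_s - margin_s
        if now_s > deadline:
            plan[job.name] = "pause"  # no time left to checkpoint safely
        elif job.migratable:
            plan[job.name] = f"migrate by t={deadline:.0f}s"
        else:
            plan[job.name] = f"checkpoint by t={deadline:.0f}s"
    return plan
```

The point isn't the policy - it's that the deadline falls out of orbital mechanics, which the scheduler therefore has to model.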

2. Thermal-aware compute allocation

Here’s something counterintuitive: in space, the hard problem isn’t cold. It’s heat. There’s no air to carry heat away. Radiative cooling is the only option, and it’s slow.

A GPU running at full load in orbit heats up fast. The software has to throttle workloads based on real-time thermal telemetry, shift compute to cooler nodes, and predict thermal states minutes into the future. This isn’t an afterthought - it’s a core scheduling constraint.
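Here's what "predict thermal states minutes into the future" looks like in miniature: a single-node lumped thermal model - dissipation in, radiation out - used to find the highest sustained load that stays under a limit. The parameters are illustrative placeholders, not real spacecraft values, and a real thermal model would track many coupled nodes:

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/m^2/K^4

def predict_temp_k(t0_k, power_w, minutes, emissivity=0.85,
                   area_m2=1.0, heat_cap_j_k=1.0e4, dt_s=1.0):
    """Forward-integrate a one-node thermal model: electrical
    dissipation in, radiative cooling (the only kind available) out."""
    temp = t0_k
    for _ in range(int(minutes * 60 / dt_s)):
        radiated = emissivity * SIGMA * area_m2 * temp**4
        temp += (power_w - radiated) * dt_s / heat_cap_j_k
    return temp

def allowed_power_w(t0_k, limit_k, minutes, step_w=50.0, max_w=1500.0):
    """Highest sustained load that keeps the predicted temperature
    under the limit over the horizon (simple linear search)."""
    power = 0.0
    while power + step_w <= max_w and \
            predict_temp_k(t0_k, power + step_w, minutes) <= limit_k:
        power += step_w
    return power
```

Even this toy version makes the scheduling coupling visible: how much compute a node can take depends on its current temperature and how far ahead you're willing to trust the prediction.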

3. Radiation-tolerant fault handling

Cosmic rays flip bits. It happens constantly in LEO. Hardware mitigation helps (ECC memory, radiation-hardened chips), but it doesn’t eliminate the problem. The software layer needs continuous integrity checking, graceful degradation paths, and the ability to reconstruct state from distributed checkpoints.

This is fundamentally different from terrestrial fault tolerance, where the assumption is that hardware fails rarely and discretely. In orbit, hardware is constantly under attack from the environment.
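One common software-side pattern is redundant storage with scrubbing: keep multiple copies with a stored digest, discard copies whose digest no longer matches, and repair them from a survivor. This is a toy in-memory sketch (hypothetical class, and note the stored digest itself would also need protection in practice):

```python
import hashlib

def _digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

class TripleStore:
    """Keep three copies of each value plus an expected digest. On
    read, drop copies whose digest no longer matches (bit flips) and
    scrub them back from a surviving copy. Illustrative only."""
    def __init__(self):
        self._copies = {}   # key -> [bytes, bytes, bytes]
        self._digests = {}  # key -> expected digest

    def put(self, key: str, data: bytes):
        self._copies[key] = [bytes(data) for _ in range(3)]
        self._digests[key] = _digest(data)

    def get(self, key: str) -> bytes:
        good = [c for c in self._copies[key]
                if _digest(c) == self._digests[key]]
        if not good:
            raise IOError(f"{key}: all replicas corrupted; "
                          "recover from a distributed checkpoint")
        # Scrub: overwrite corrupted copies with a clean one.
        self._copies[key] = [bytes(good[0]) for _ in range(3)]
        return good[0]
```

The last-resort path is the important one: when local redundancy is exhausted, the only recovery is reconstruction from checkpoints held elsewhere in the constellation.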

4. Ground station coordination

An orbital data center isn’t always reachable. Ground contact windows are limited - often just minutes per pass. The software needs to queue telemetry, batch results, prioritize uplink/downlink, and operate autonomously between contacts.

Starlink has solved this for networking. Nobody has solved it for compute orchestration. The satellite needs to make intelligent scheduling decisions on its own, with ground input only when available.
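The queuing half of the problem is easy to sketch: between passes, results accumulate in a priority queue; when a contact window opens, the planner drains the most urgent items that fit in the window's data budget. A minimal sketch with hypothetical names:

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Downlink:
    priority: int                      # lower number = more urgent
    name: str = field(compare=False)
    size_mb: float = field(compare=False)

class ContactPlanner:
    """Queue results between passes; when a ground contact opens,
    send highest-priority items that fit the window's budget."""
    def __init__(self):
        self._queue = []

    def enqueue(self, item: Downlink):
        heapq.heappush(self._queue, item)

    def plan_pass(self, window_s: float, rate_mbps: float):
        budget_mb = window_s * rate_mbps / 8.0  # megabits/s -> MB
        sent, deferred = [], []
        while self._queue:
            item = heapq.heappop(self._queue)
            if item.size_mb <= budget_mb:
                budget_mb -= item.size_mb
                sent.append(item.name)
            else:
                deferred.append(item)  # won't fit; hold for next pass
        for item in deferred:
            heapq.heappush(self._queue, item)
        return sent
```

With an 8-minute pass at 100 Mbps, a 1 MB health packet and a 5 GB checkpoint fit, while a lower-priority 3 GB log bundle waits for the next pass - the hard part, left out here, is deciding those priorities autonomously.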

5. Multi-node constellation orchestration

This is where it all comes together. A constellation of thousands of nodes, each in a different orbital plane, each with its own eclipse schedule, thermal state, radiation exposure, and ground contact windows. The orchestration layer needs to:

  • Distribute workloads across nodes based on real-time constraints
  • Migrate jobs between nodes as conditions change
  • Maintain global state across a constellation with intermittent connectivity
  • Handle nodes going offline (planned or not) without losing work

No existing distributed systems framework was designed for this environment.
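To see why, consider even the simplest greedy placement over those constraints - every field below feeds into the decision, and all of them change continuously. This is a toy sketch with hypothetical names, not a real placement algorithm (which would also weigh migration cost, network topology, and predicted rather than current state):

```python
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    sunlit_s: float    # seconds until next eclipse entry
    temp_k: float
    free_gpus: int

@dataclass
class Work:
    name: str
    runtime_s: float
    gpus: int

def place(jobs, nodes, temp_limit_k=340.0):
    """Greedy placement: each job goes to the coolest node that has
    the GPUs and enough sunlit time to finish before eclipse."""
    placement = {}
    for job in sorted(jobs, key=lambda j: -j.gpus):  # biggest jobs first
        candidates = [n for n in nodes
                      if n.free_gpus >= job.gpus
                      and n.sunlit_s >= job.runtime_s
                      and n.temp_k <= temp_limit_k]
        if not candidates:
            placement[job.name] = None  # defer to the next window
            continue
        best = min(candidates, key=lambda n: n.temp_k)
        best.free_gpus -= job.gpus
        placement[job.name] = best.name
    return placement
```

Even at this level of simplification, a job can be unplaceable everywhere at once - one node too hot, one about to enter eclipse, one out of capacity - and the right answer is to defer, migrate something else, or re-plan globally. That's the problem no terrestrial scheduler was built to solve.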

The gap is bigger than it looks

Here’s what makes this frustrating: the industry conversation is stuck on “can we?” when it should be moving to “how do we?”

Yes, we can put GPUs in orbit. Starcloud proved that. Yes, the economics are challenging but improving. Yes, launch costs are falling.

But the software infrastructure that turns a constellation of satellites into a reliable, autonomous compute platform? That doesn’t exist yet. Not from SpaceX. Not from Starcloud. Not from any of the companies filing FCC applications.

And you can’t bolt it on later. The software architecture for orbital compute needs to be designed from the ground up for this environment. Trying to adapt terrestrial tools to orbital constraints is how you get systems that work in demos and fail in production.

What we’re building

At RotaStellar, this is the problem we work on every day.

Our platform sits in that middle layer - the orbital middleware that connects hardware to applications. We build the tools for eclipse-aware scheduling, thermal-aware resource allocation, radiation-tolerant state management, and autonomous constellation orchestration.

We’re not launching satellites. We’re building the software that makes them useful.

Some of this is available now. Our Planning Tools let teams model orbital constraints before committing to hardware. Our Orbital Intelligence suite handles conjunction analysis and space domain awareness at scale. And our SDKs - Python, Node.js, and Rust - give developers direct access to orbital compute primitives.

It’s early. But the companies filing for million-satellite constellations are going to need software like this. The question isn’t whether - it’s when.

Where this goes next

The next twelve months will be telling. We expect to see:

  • The first real multi-node orchestration attempts - and the first public failures. Running one satellite is not running a hundred.
  • A software stack race that mirrors the hardware race. The companies that figure out orbital middleware first will have a significant moat.
  • Convergence between simulation and operations. You can’t test constellation software in orbit at $50M per launch. You need high-fidelity simulation that maps to real orbital environments - and that simulation needs to be backed by real physics, not approximations.

The gold rush is real. The hardware will go up. But the software is where the value will ultimately concentrate - because the software is what turns a constellation of expensive satellites into a functioning compute platform.

That’s the bet we’re making.


If you’re building for this future, we’d like to talk. Request early access to the platform, or schedule a demo to see what we’ve built so far.