We Shipped the First Orbital Compute Scheduler. Here's How It Works.
An API that takes a satellite ID and a workload, and returns a physically accurate execution plan — computed from real orbital mechanics.
Today we’re shipping Constraint-Aware Execution (CAE) — an API that answers a question nobody else can: given my satellite, my workload, and my constraints, where should each step run, how does data move between space and ground, and what happens when things go wrong?
You can try it right now. Open the satellite tracker, select any satellite, click the Schedule tab, pick a workload, and hit Plan Execution. You’ll get a physically accurate execution plan computed from real orbital mechanics — not a simulation, not a mockup.
The problem we’re solving
Satellites generate far more data than they can downlink. An Earth observation satellite captures roughly 1 TB per day, but a typical 10-minute ground station pass at X-band can transfer about 7.5 GB. With one usable pass per day, that’s a roughly 130:1 mismatch.
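The mismatch is straightforward arithmetic. A quick sketch, assuming a ~100 Mbps average rate (consistent with 7.5 GB over 10 minutes) and one usable pass per day:

```typescript
const capturedGBPerDay = 1000; // ~1 TB of imagery captured per day
const passSeconds = 10 * 60;   // one 10-minute ground station pass
const avgRateMbps = 100;       // assumed X-band average over the pass

// Megabits -> gigabytes: divide by 8 (bits -> bytes), then by 1000 (MB -> GB)
const downlinkedGB = (avgRateMbps * passSeconds) / 8 / 1000; // = 7.5 GB
const mismatch = capturedGBPerDay / downlinkedGB;            // ~133:1
```

More passes per day shrink the ratio, but nowhere near enough to close a three-orders-of-magnitude gap between capture rate and link capacity.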
The traditional answer is “downlink everything and process on the ground.” That answer is breaking.
As on-board compute gets more capable — NVIDIA Jetson modules, FPGAs, even custom ASICs flying on newer satellites — the real question shifts from can we compute in orbit? to what should we compute where?
Some processing makes sense on-board: anything that dramatically reduces data volume before downlink. ML inference that turns 2 GB of raw imagery into 10 MB of detections. Feature extraction that compresses data 40:1 before sending it down.
Other processing needs ground resources: model training with large datasets, aggregation across multiple satellites, anything requiring persistent storage.
The hard part is the boundary. When a workload spans both space and ground, you need to:
- Decide placement — which steps run on-board vs. ground, considering power, thermal limits, and data reduction ratios
- Schedule transfers — allocate data across ground station passes that may be minutes apart, with varying link quality
- Handle errors — forward error correction, retransmission budgets, delivery confidence calculations
- Manage security — encryption overhead, key exchanges during limited contact windows
Nobody offers an API that does all of this from real orbital mechanics. Until now.
How CAE works
CAE takes two inputs: a satellite ID and a workload preset (or custom workload definition). It returns a complete execution plan.
Under the hood, the system runs through four phases:
Phase 1: Orbital environment. We fetch the satellite’s TLE data and run full SGP4 propagation at 30-second timesteps over a 12-hour prediction window. From this, we compute eclipse windows using a cylindrical shadow model (sun-earth-satellite geometry), and ground station passes for our 12-station global network using real elevation angle computation.
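The cylindrical shadow test reduces to a few lines of vector geometry. A minimal sketch (positions in ECI kilometers; in the real pipeline they would come from SGP4 at each 30-second timestep):

```typescript
const EARTH_RADIUS_KM = 6371;

// Cylindrical shadow model: the satellite is eclipsed when it sits on the
// anti-sun side of Earth, inside the cylinder of Earth's radius that
// extends along the anti-sun direction.
function inEclipse(
  satEciKm: [number, number, number],
  sunDir: [number, number, number], // unit vector from Earth toward the Sun
): boolean {
  // Component of the satellite position along the sun direction
  const along =
    satEciKm[0] * sunDir[0] + satEciKm[1] * sunDir[1] + satEciKm[2] * sunDir[2];
  if (along >= 0) return false; // sun side of Earth: always lit
  // Squared perpendicular distance from the shadow cylinder's axis
  const perpSq =
    satEciKm[0] ** 2 + satEciKm[1] ** 2 + satEciKm[2] ** 2 - along * along;
  return perpSq < EARTH_RADIUS_KM ** 2;
}
```

The cylindrical model ignores the penumbra, which is why its accuracy is on the order of seconds rather than exact — more than good enough for scheduling.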
Each pass gets a link budget: data rate varies from 25 Mbps at low elevation to 120 Mbps at high elevation (X-band), with BER estimates that drive error correction selection.
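The elevation-to-rate mapping can be sketched as a simple interpolation between those two endpoints. The linear shape and the 10° horizon mask below are assumptions, not the production link-budget model:

```typescript
// Hypothetical elevation -> X-band data rate mapping. The 25 and 120 Mbps
// endpoints come from the link budget above; the linear interpolation and
// the 10-degree minimum elevation are illustrative assumptions.
function dataRateMbps(elevationDeg: number): number {
  const MIN_ELEV = 10;  // assumed horizon mask in degrees
  const MAX_ELEV = 90;
  const MIN_RATE = 25;  // Mbps at low elevation
  const MAX_RATE = 120; // Mbps at high elevation
  if (elevationDeg < MIN_ELEV) return 0; // below the mask: no usable link
  const t = (Math.min(elevationDeg, MAX_ELEV) - MIN_ELEV) / (MAX_ELEV - MIN_ELEV);
  return MIN_RATE + t * (MAX_RATE - MIN_RATE);
}
```

Integrating this rate over a pass's elevation profile gives the pass's total capacity, which is what the transfer planner allocates against.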
Phase 2: Compute placement. For each step in the workload, the planner compares the cost of running it on-board (energy, thermal headroom, orbital window time) against the cost of transferring data to and from the ground. A key heuristic: if a step achieves greater than 10:1 data reduction, it’s almost always cheaper to run on-board — the transfer savings dominate.
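In sketch form, the heuristic compares what each placement puts on the downlink. The cost model here is deliberately simplified and the names are illustrative:

```typescript
interface Step {
  name: string;
  inputMB: number;  // data the step consumes
  outputMB: number; // data the step produces
}

// Simplified placement rule: above ~10:1 reduction, downlinking the step's
// output is so much cheaper than downlinking its input that on-board
// compute wins despite its power and thermal cost.
function place(step: Step): "onboard" | "ground" {
  const reduction = step.inputMB / step.outputMB;
  return reduction > 10 ? "onboard" : "ground";
}
```

For example, `place({ name: "inference", inputMB: 2048, outputMB: 10.5 })` lands on-board, while a step that only halves its input stays on the ground.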
Phase 3: Transfer insertion. The planner walks the dependency graph and inserts transfer segments wherever a step on-board feeds a step on ground (downlink) or vice versa (uplink). Each transfer is allocated across available passes with FEC overhead (rate 1/2 to 7/8 depending on channel quality), encryption overhead (AES-256), and retransmission reserves.
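The allocation step can be sketched as a greedy first-fit over passes. The greedy order and the per-pass capacity field are assumptions about the planner, and the encryption and retransmission reserves are omitted for brevity:

```typescript
interface Pass {
  id: string;
  capacityMB: number; // from the pass's link budget
}

// Spread one FEC-coded transfer across the available passes, first-fit.
// Rate 1/2 doubles the on-the-wire size; rate 7/8 adds ~14%.
function allocate(
  payloadMB: number,
  fecRate: number,
  passes: Pass[],
): Map<string, number> {
  let remaining = payloadMB / fecRate; // coded size on the wire
  const plan = new Map<string, number>();
  for (const p of passes) {
    if (remaining <= 0) break;
    const chunk = Math.min(remaining, p.capacityMB);
    plan.set(p.id, chunk);
    remaining -= chunk;
  }
  if (remaining > 0) throw new Error("not enough pass capacity in the window");
  return plan;
}
```

A 300 MB payload at rate 3/4 becomes 400 MB on the wire, so it spills from a 250 MB pass into the next one — exactly the multi-pass behavior the presets below exhibit.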
Phase 4: Window scheduling. On-board steps are scheduled into orbital windows respecting power and thermal constraints. Transfer steps are pinned to pass windows. Ground steps run immediately after their input data arrives. The result is a complete timeline with dependencies resolved across the space-ground boundary.
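The rule for any single step is: start at the later of its window opening and the completion of its dependencies. A minimal timeline builder under that rule, with steps assumed to be listed in topological order and field names that are illustrative:

```typescript
interface PlanStep {
  name: string;
  durationS: number;
  deps: string[];       // steps whose output this step consumes
  windowOpensS: number; // earliest time the step's window allows it to run
}

// Resolve a dependency-ordered timeline across the space-ground boundary.
function buildTimeline(
  steps: PlanStep[],
): Map<string, { start: number; end: number }> {
  const done = new Map<string, { start: number; end: number }>();
  for (const s of steps) {
    const depsEnd = Math.max(0, ...s.deps.map((d) => done.get(d)!.end));
    const start = Math.max(s.windowOpensS, depsEnd);
    done.set(s.name, { start, end: start + s.durationS });
  }
  return done;
}
```

An on-board capture finishing at t=60 with a downlink pass opening at t=300 yields a downlink at 300–420 and a ground step starting at 420 — the dependency crosses the boundary, the timeline stays consistent.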
Five workload presets
CAE ships with five presets that represent real orbital compute patterns:
On-Board ML Inference — The simplest case. Capture 2 GB of imagery, run preprocessing and inference on-board, downlink only the 10.5 MB of results. A 190:1 data reduction means minimal downlink. All compute stays on the satellite.
Split Learning — Distributed training across the space-ground boundary. Feature extraction (layers 1-3) runs on-board, reducing 2 GB to 35 MB. Features are downlinked, backend training happens on the ground, then updated model weights (5.25 MB) are uploaded back. Bidirectional transfer scheduling.
Earth Observation with QA — A multi-pass downlink challenge. 5 GB of raw imagery goes through on-board quality assurance (discard bad frames), cloud filtering, compression, FEC encoding, and encryption before downlink. The 560 MB final payload typically needs 2-3 ground station passes.
Federated Learning — Privacy-preserving by design. Only sparse gradients (3.7 MB) leave the satellite — never raw data. Ground aggregation produces updated model weights (5.8 MB) that are uploaded back. The critical insight: the transfer budget is tiny because raw data never moves.
Resilient Store-and-Forward — Reed-Solomon erasure coding (rate 2/3) for error-resilient relay. Any 2-of-3 encoded blocks can reconstruct the original data. Requires two separate pass windows: receive uplink, then transmit downlink on a later pass.
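The any-2-of-3 property is easy to see with the simplest rate-2/3 erasure code: two data blocks plus an XOR parity block. The preset uses Reed-Solomon, which generalizes this; the XOR version below is just an illustration of the recovery property:

```typescript
// Minimal (3,2) erasure code over bytes: blocks A, B, and parity P = A ^ B.
// Any two of {A, B, P} reconstruct the third, the same 2-of-3 recovery
// guarantee a rate-2/3 Reed-Solomon code provides.
function xorBlocks(x: Uint8Array, y: Uint8Array): Uint8Array {
  return x.map((v, i) => v ^ y[i]);
}
```

If block B is lost in transit, `xorBlocks(A, P)` reproduces it byte for byte; the relay never needs a retransmission as long as any two blocks arrive.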
Real physics, not toy models
Every plan is computed from real orbital mechanics:
- SGP4 propagation via satellite.js — the same algorithm used by NORAD for orbital prediction
- Cylindrical shadow model for eclipse detection — accurate to ~10 seconds for scheduling purposes
- 12 ground stations with real lat/lon coordinates — Svalbard, Fairbanks, Santiago, Awarua, and 8 more
- Link budget per pass — free space path loss, elevation-dependent data rates, BER estimation
- Deterministic fault model — same inputs always produce the same plan. BER-based, not random
When you run a plan for the ISS with the split-learning preset, you get real passes: downlink 52.82 MB via Awarua at rate 3/4 FEC, uplink 7.55 MB on the next pass. 99% delivery confidence with 14 MB of FEC overhead. These numbers come from actual orbital geometry, not hand-waved estimates.
Try it now
The CAE API is live. The satellite tracker has a new Schedule tab where you can run plans against any tracked satellite and see the results — including colored orbit segments and transfer arcs on the 3D globe.
For API access:
POST https://rotastellar-cae.subhadip-mitra.workers.dev/v1/plan
Content-Type: application/json

{
  "satellite_id": "25544",
  "preset_id": "split-learning"
}
The response includes placement decisions, transfer schedules, error budgets, security analysis, and a full execution event timeline.
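A minimal client sketch against the endpoint above. The request fields come from the example; the helper names and error handling are illustrative:

```typescript
// Build the request body for the plan endpoint (fields from the example above).
function buildPlanRequest(satelliteId: string, presetId: string): string {
  return JSON.stringify({ satellite_id: satelliteId, preset_id: presetId });
}

// POST the request and return the parsed execution plan.
async function planExecution(satelliteId: string, presetId: string) {
  const res = await fetch(
    "https://rotastellar-cae.subhadip-mitra.workers.dev/v1/plan",
    {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: buildPlanRequest(satelliteId, presetId),
    },
  );
  if (!res.ok) throw new Error(`plan request failed: ${res.status}`);
  return res.json();
}
```

`planExecution("25544", "split-learning")` reproduces the ISS example from the previous section.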
This is the first piece of the Orbital Runtime stack that’s running against real satellites with real physics. Not a simulator — a production API that computes physically accurate execution plans.
We’re building the missing software layer for orbital compute. CAE is step one.