Inside CAE: How a Feasible Plan Gets Built in Under Two Seconds
A walk through the four phases that turn a satellite ID and a workload into an execution plan that respects power, thermal, and every orbital window.
You make a request. You get back a plan.
Somewhere between those two HTTP packets, a satellite gets propagated through ninety minutes of its orbit, an eclipse gets predicted to the second, three ground station passes get found and scored, and a multi-step workload gets dropped onto a timeline in a way that respects every constraint orbital mechanics imposes. Under two seconds, end to end.
This post walks that path.
We described the formal version in a paper on arXiv earlier this year. What follows is the engineering view of the same system: the same four phases, the same orbital state, but seen from inside the worker that runs them in production. Nothing here is new. It is what already happens every time you call the API.
What goes in, what comes out
The smallest useful CAE request looks like this:
curl https://api.rotastellar.com/v1/plan \
  -H 'content-type: application/json' \
  -d '{
    "satellite": "OC-LEO-7",
    "preset": "onboard-ml-inference"
  }'
The response is a JSON document with a step-by-step execution plan, the time windows each step may run in, the power budget allocated to each, the data transfers scheduled across ground passes, and a feasibility flag at the top. If feasibility is false, the response lists every reason. If it is true, you can hand the plan to the agent and watch it execute.
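Trimmed to its skeleton, a successful response might look like this. The field names here are an illustrative sketch, not the authoritative schema; the docs have the real one.

```json
{
  "feasibility": true,
  "horizon": { "start": "2025-06-01T12:00:00Z", "end": "2025-06-01T13:30:00Z" },
  "steps": [
    {
      "id": "infer-tile-0",
      "location": "onboard",
      "window": { "start": "2025-06-01T12:04:10Z", "end": "2025-06-01T12:19:40Z" },
      "power_w": 180
    }
  ],
  "transfers": [
    { "after": "infer-tile-0", "pass": "GS-1", "volume_mb": 10 }
  ],
  "reasons": []
}
```

On an infeasible plan, `feasibility` flips to false and `reasons` carries the list of failures.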
Four phases run between the request and the response.
Phase 1. Orbital environment
The first thing CAE does is figure out what the next ninety minutes look like for the satellite.
It pulls the TLE from the KV cache (refreshed every thirty minutes from CelesTrak), then runs SGP4 across the planning horizon at one-second resolution. That gives a position-and-velocity table. From the position table, three things drop out cheaply.
Eclipse windows. Pure geometry: when Earth occludes the line from the satellite to the sun, the satellite sits in the umbra and has no solar input. Those minutes are battery-only and the planner treats them as such for every downstream decision.
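The shadow test reduces to a few dot products. A minimal sketch, using a cylindrical umbra approximation rather than the true conical geometry the planner would need for second-level precision:

```python
import math

EARTH_RADIUS_KM = 6378.137

def in_umbra(sat_eci_km, sun_dir):
    """Cylindrical-shadow test: the satellite is in eclipse when it sits
    behind Earth (relative to the sun) and inside the shadow cylinder.
    sat_eci_km: satellite position vector (km); sun_dir: unit vector to sun."""
    along = sum(p * s for p, s in zip(sat_eci_km, sun_dir))
    if along >= 0:  # sun side of Earth: always lit
        return False
    # perpendicular distance from the shadow axis
    perp_sq = sum(p * p for p in sat_eci_km) - along * along
    return math.sqrt(perp_sq) < EARTH_RADIUS_KM

# A satellite 550 km above the night side of Earth is in shadow:
print(in_umbra((-(EARTH_RADIUS_KM + 550.0), 0.0, 0.0), (1.0, 0.0, 0.0)))  # True
```

Run this at one-second resolution over the SGP4 position table and the contiguous True runs are the eclipse windows.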
Ground station passes. For each registered station, we compute when the satellite rises above the configured elevation mask. Pass durations vary from about four minutes for low-elevation grazes up to roughly twelve minutes for overhead transits at LEO altitudes near 550 km. Each pass gets a quality score derived from the elevation profile and an expected RF link budget.
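Given a per-second elevation series for one station, pass finding is a segmentation scan. A simplified sketch (the production solver works from geometry and scores the full elevation profile, not just the peak):

```python
def find_passes(elevations_deg, mask_deg=10.0):
    """Scan a per-second elevation series and return (rise_idx, set_idx,
    max_elevation) for every interval above the elevation mask."""
    passes, rise, peak = [], None, 0.0
    for i, el in enumerate(elevations_deg):
        if el >= mask_deg:
            if rise is None:
                rise, peak = i, el
            peak = max(peak, el)
        elif rise is not None:
            passes.append((rise, i - 1, peak))
            rise = None
    if rise is not None:  # pass still open at end of horizon
        passes.append((rise, len(elevations_deg) - 1, peak))
    return passes

# Toy series: one pass rising to 45 degrees above a 10-degree mask
els = [0, 5, 12, 30, 45, 30, 12, 5, 0]
print(find_passes(els))  # [(2, 6, 45)]
```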
Thermal envelope. Sun-facing time, eclipse-facing time, and recent power draw history get rolled into a coarse thermal forecast. This is not a finite-element model. It is enough to refuse plans that would obviously cook a payload.
The output of this phase is a single time-indexed environment table. Everything else just reads from it.
Phase 2. Compute placement
Now the workload comes in.
It arrives as either a preset (one of five canned DAGs we ship: onboard ML inference, eclipse-aware training, periodic Earth observation, opportunistic compute, and a low-power telemetry batch) or as a custom_job, which is your own step DAG validated against the same schema. Either way, the planner sees a directed acyclic graph of steps, each tagged with resource requirements: compute class, memory, energy per execution, and expected output volume.
For each step the planner asks one question. Should this run on the satellite or on the ground?
The answer depends on the data reduction ratio of the step relative to the cost of moving its inputs and outputs across the boundary. Inference that turns 2 GB of raw imagery into 10 MB of bounding boxes wants to run onboard. Model training over a multi-week dataset belongs on the ground. The interesting cases are the ones in the middle, where the right answer flips depending on which orbit you are in and how busy the next pass is.
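The comparison can be sketched as a toy cost model: downlink the step's output if it runs onboard, or its raw input if it runs on the ground, and take the cheaper side. The numbers and the function itself are illustrative, not the planner's actual cost model:

```python
def place_step(input_mb, output_mb, link_mbit_s=50.0, overhead=1.12):
    """Toy placement rule: run onboard when the step shrinks its data enough
    that downlinking the output is cheaper than downlinking the input.
    Returns ('onboard' | 'ground', seconds of pass time the choice costs)."""
    def link_seconds(mb):
        return mb * 8 * overhead / link_mbit_s
    onboard_cost = link_seconds(output_mb)  # only results cross the boundary
    ground_cost = link_seconds(input_mb)    # raw inputs must come down first
    if onboard_cost <= ground_cost:
        return "onboard", onboard_cost
    return "ground", ground_cost

# 2 GB of imagery reduced to 10 MB of bounding boxes: clearly onboard.
loc, secs = place_step(input_mb=2048, output_mb=10)
print(loc)  # onboard
```

The middle cases the text mentions are the ones where the two costs are close, and which side wins shifts with the link rate and how full the next pass already is.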
Placement is also where capacity gets checked. A step that needs 340 W peak cannot run during an eclipse if the battery floor would be violated. A step that needs the GPU cannot run while another step is using the GPU. We check, we tag, we move on.
Phase 3. Data transfer planning
At this point every step has a location, but the data still has to get there.
For every edge in the DAG that crosses the space-ground boundary, the planner schedules a transfer. Each transfer needs a window long enough to move the data, a link with enough margin for the volume and the encoding overhead, and a slot in the dependency order that does not deadlock the next step.
The math is mostly arithmetic. A 200 MB intermediate over a link at 50 Mbit/s with a Reed-Solomon overhead of about 12 percent needs roughly thirty-six seconds of pass time (1,600 Mbit × 1.12 ÷ 50 Mbit/s ≈ 35.8 s). Add a margin and round up. If no single pass has the room, the transfer gets split across two passes, and the planner verifies that the receiving side can hold the partial.
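That arithmetic, written out (the 20 percent margin default is an assumption for illustration, not the planner's actual figure):

```python
import math

def transfer_seconds(volume_mb, rate_mbit_s, fec_overhead=0.12, margin=0.2):
    """Pass time needed to move volume_mb over a link, accounting for
    forward-error-correction overhead plus a safety margin, rounded up
    to whole seconds."""
    wire_mbit = volume_mb * 8 * (1 + fec_overhead)  # megabits on the wire
    return math.ceil(wire_mbit / rate_mbit_s * (1 + margin))

# The example from the text: 200 MB over 50 Mbit/s with ~12% overhead
print(transfer_seconds(200, 50, margin=0.0))  # 36 seconds before margin
```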
The non-obvious part of this phase is when CAE refuses. If the cumulative downlink demand of the plan exceeds the available pass time over the planning horizon, the plan is infeasible and the response says so, with the specific numbers. We would rather refuse upfront than promise a plan that quietly fails forty minutes in.
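The refusal check itself is a sum and a comparison. A minimal sketch of the idea, with the reason string as a stand-in for whatever the real response carries:

```python
def check_downlink_budget(transfers_mb, pass_durations_s, rate_mbit_s=50.0):
    """Refuse upfront when cumulative downlink demand exceeds the pass time
    available over the planning horizon. Returns (feasible, reason-or-None)
    with the specific numbers."""
    demand_s = sum(mb * 8 / rate_mbit_s for mb in transfers_mb)
    capacity_s = sum(pass_durations_s)
    if demand_s > capacity_s:
        return False, (f"need {demand_s:.0f}s of downlink, "
                       f"only {capacity_s:.0f}s of pass time in horizon")
    return True, None

# Two transfers against two passes of four and five minutes:
print(check_downlink_budget([200, 400], pass_durations_s=[240, 300]))
```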
Phase 4. Scheduling
By the time we reach the last phase, the problem is almost over-determined.
Topological order on the DAG gives a legal sequence. Within that sequence, each step gets placed in the earliest feasible window that satisfies its constraints: dependency completion, available power, thermal headroom, the eclipse policy declared by the workload (pause, continue, or refuse), and any operator-defined exclusion windows.
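The placement logic reduces to an earliest-fit walk. A stripped-down sketch that handles only dependency order and blocked windows (an eclipse under the refuse policy, an operator exclusion), ignoring power and thermal entirely:

```python
def schedule(steps, deps, duration_s, horizon_s, blocked):
    """Earliest-fit scheduling sketch: walk steps in topological order and
    place each at the first time that (a) follows its dependencies and
    (b) avoids every blocked [start, end) window.
    steps: topologically ordered ids; deps: id -> set of prerequisite ids."""
    placed = {}
    for step in steps:
        t = max((placed[d][1] for d in deps.get(step, ())), default=0)
        while True:
            end = t + duration_s[step]
            hit = next((b for b in blocked if t < b[1] and end > b[0]), None)
            if hit is None:
                break
            t = hit[1]  # jump past the blocked window and retry
        if end > horizon_s:
            raise ValueError(f"{step} does not fit in the horizon")
        placed[step] = (t, end)
    return placed

plan = schedule(
    steps=["capture", "infer", "downlink"],
    deps={"infer": {"capture"}, "downlink": {"infer"}},
    duration_s={"capture": 120, "infer": 900, "downlink": 60},
    horizon_s=5400,
    blocked=[(1800, 3600)],  # e.g. an exclusion window mid-orbit
)
print(plan)
```

The real scheduler layers battery floor, thermal headroom, and the eclipse policy on top of this skeleton, but the shape — topological order outside, earliest feasible slot inside — is the same.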
The output is the plan you receive: timestamped steps, allocated power per step, the transfers that must happen before each step, and a per-window status. If everything fits, the response carries feasibility: true and you are done. If anything does not fit, the planner records what failed and where, and surfaces it in the response so you can adjust the workload or the constraints and try again.
[Figure] A planned orbit for a five-step `onboard-ml-inference` preset against an LEO satellite at 550 km. The eclipse band is the third of the orbit where solar input drops to zero. The two passes near the start and end are the only windows where ground transfers are possible. Inference runs through the eclipse on battery; the downlink waits for the second pass.
Where the two seconds go
We track per-phase timing on every request, and the shape of the breakdown is consistent across most workloads.
Phase 1 dominates, because SGP4 and pass solving touch the most data. Phases 2 and 3 are roughly equal. Phase 4 is the fastest, because by the time we reach it the constraints have already eliminated most of the timeline. The rest of the budget is JSON serialization, KV reads, and network.
Warm workers come back well under one and a half seconds. Cold starts push closer to two. Custom jobs with twenty-plus steps cost more, but the curve stays sub-linear because the same orbital environment table gets reused across every placement and every transfer decision.
The honest answer for why it is fast: most of the work is reading from a precomputed orbital state, not computing it. The thirty-minute cron does the expensive part. The request path mostly does lookups and arithmetic.
Three ways to call it
- The API directly. Point cURL or any HTTP client at https://api.rotastellar.com/v1/plan. The schema is in the docs.
- The Python SDK. pip install rotastellar, then client.cae.plan(satellite="...", preset="..."). Returns the same plan object as a typed dataclass.
- The live tracker. Pick any satellite, switch to the Schedule tab, choose a preset, hit Plan Execution. The plan you see is the same plan the API would return.
Custom workloads work everywhere. Replace preset with custom_job and pass your own step DAG. The validator and the four phases do exactly what they do for presets.
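A minimal custom_job body might look like the sketch below. The step fields mirror the resource tags described in Phase 2, but the exact field names here are illustrative; the validator's schema in the docs is authoritative.

```json
{
  "satellite": "OC-LEO-7",
  "custom_job": {
    "eclipse_policy": "pause",
    "steps": [
      { "id": "capture", "compute_class": "cpu", "memory_mb": 512,
        "energy_wh": 5, "output_mb": 2048 },
      { "id": "infer", "compute_class": "gpu", "memory_mb": 4096,
        "energy_wh": 40, "output_mb": 10, "depends_on": ["capture"] }
    ]
  }
}
```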
The arXiv paper is the canonical theoretical writeup. This post is the engineering one. If you want to see the system from the satellite side, the Operator Protocol that the agent implements is open source at github.com/rotastellar/rotastellar-agent.