Orbital Runtime
Execute Workloads
Across Earth and Orbit
Kubernetes doesn't understand orbital mechanics. PyTorch doesn't adapt to solar eclipses. TensorFlow doesn't expect bit flips. We're building the runtime that does.
Status: Simulation and research available now. Production runtime aligned with first orbital DC deployments (2027).
Workload Distribution in Real Time
Watch how the runtime distributes workloads across orbital and terrestrial nodes based on energy, thermal, and connectivity state.
Three Primitives. New Foundations.
Orbit Scheduler
Workload orchestration that understands orbital mechanics, energy availability, and network topology. Kubernetes for Earth + orbit.
Adaptive Runtime
Inference and training that adapt to available energy, thermal headroom, and network conditions in real time.
Resilient Compute
Fault-tolerant ML execution for radiation environments. Bit flips are expected, not exceptional.
Orbit Scheduler
Where should this workload run? When? At what fidelity?
Orbit Scheduler makes placement decisions based on orbital position, ground station visibility, energy availability, and latency requirements. It's the missing orchestration layer for hybrid Earth-orbit compute.
Capabilities
- Orbit-aware placement (SGP4/SDP4 integrated)
- Energy-aware scheduling (solar flux, eclipse, battery)
- Latency-aware routing (ground stations, ISLs)
- Predictive handover (pre-migrate before windows close)
- Hybrid orchestration (span terrestrial + orbital)
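As a minimal sketch of how such a placement decision might be scored (`NodeState`, `place`, and all numbers are hypothetical, not the product's API): filter nodes by latency SLA and power headroom, then prefer sunlit orbital nodes, whose solar surplus is effectively free, over terrestrial fallbacks.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class NodeState:
    name: str
    is_orbital: bool
    in_eclipse: bool
    available_watts: float
    rtt_ms: float

def place(workload_watts: float, latency_sla_ms: float,
          nodes: list) -> Optional[NodeState]:
    """Filter by latency SLA and power headroom, then prefer sunlit
    orbital nodes over terrestrial fallbacks."""
    candidates = [n for n in nodes
                  if n.rtt_ms <= latency_sla_ms
                  and n.available_watts >= workload_watts]
    if not candidates:
        return None
    # Sort key: (sunlit orbital node?, energy surplus).
    return max(candidates,
               key=lambda n: (n.is_orbital and not n.in_eclipse,
                              n.available_watts - workload_watts))

nodes = [
    NodeState("orbital_node_1", True, False, 847, 34),
    NodeState("orbital_node_2", True, True, 340, 60),
    NodeState("earth_us_west", False, False, 2100, 80),
]
print(place(400, 200, nodes).name)  # → orbital_node_1
```

With a 400 W workload, `orbital_node_2` is filtered out (eclipse budget too small) and the sunlit orbital node beats the terrestrial fallback, mirroring the decision trace shown below.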
Status
| Milestone | Target |
| --- | --- |
| Simulator | Q2 2026 |
| Production Runtime | 2027 |
Input:
workload: llm-inference
model: llama-70b
latency_sla: 200ms
priority: normal
Orbit Scheduler Decision:
# Current orbital state
orbital_node_1: solar_peak, 847W available
orbital_node_2: eclipse, battery_only, 340W
earth_us_west: available, 2.1kW
# Decision
route_to: orbital_node_1
reason: energy_surplus, latency_ok (34ms RTT)
fallback: earth_us_west
# Energy savings: 43% vs terrestrial
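The predictive-handover capability listed above reduces, at its core, to a timing check: migrate before the remaining contact window can no longer absorb the migration itself. A sketch (function name and safety margin are hypothetical):

```python
def should_premigrate(remaining_window_s: float,
                      migration_time_s: float,
                      safety_margin_s: float = 30.0) -> bool:
    """Pre-migrate if the remaining contact window can no longer
    absorb a migration plus a safety margin."""
    return remaining_window_s <= migration_time_s + safety_margin_s

should_premigrate(remaining_window_s=40, migration_time_s=25)   # → True
should_premigrate(remaining_window_s=300, migration_time_s=25)  # → False
```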
Adaptive Runtime
Inference that bends, not breaks, when power drops.
Standard ML runtimes fail when energy is constrained or thermal limits are reached. Adaptive Runtime dynamically adjusts precision, layer activation, and context length to deliver results within available resources.
Adaptation Strategies
- Precision scaling (FP16 → INT8 → INT4)
- Dynamic layer skipping
- Context window reduction
- Batch size adjustment (thermal-aware)
- Graceful degradation (approximate results)
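One way to read the precision ladder above as code (the `PRECISION_WATTS` table and its numbers are purely illustrative, loosely echoing the 340 W eclipse budget in the example): walk FP16 → INT8 → INT4 and take the highest precision that fits the energy budget.

```python
# Hypothetical steady-state power draw per precision (illustrative only).
PRECISION_WATTS = {"fp16": 600, "int8": 340, "int4": 180}

def pick_precision(energy_budget_w: float):
    """Walk the degradation ladder FP16 -> INT8 -> INT4 and return the
    highest precision that fits the energy budget."""
    for precision in ("fp16", "int8", "int4"):
        if PRECISION_WATTS[precision] <= energy_budget_w:
            return precision
    return None  # not even INT4 fits: defer, or degrade further

pick_precision(700)  # → "fp16"
pick_precision(340)  # → "int8" (the eclipse budget)
pick_precision(100)  # → None
```

A real runtime would combine this with the other strategies (layer skipping, context reduction) in one search, but the budget-driven cascade is the core idea.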
Status
| Milestone | Target |
| --- | --- |
| Research Benchmarks | Q2 2026 |
| Production Runtime | 2027 |
# Standard runtime (fails)
result = model.generate(prompt)
# ERROR: Insufficient power during eclipse
# Adaptive Runtime
result = adaptive_runtime.generate(
prompt=prompt,
energy_budget=340, # Watts available
latency_sla=200, # ms
quality=QoS.BEST_EFFORT
)
# Automatically:
# - Reduced precision: FP16 → INT8
# - Skipped layers: 40-55 (non-critical)
# - Context: 8K → 4K tokens
# - Result: delivered in 187ms at 312W
Resilient Compute
When bit flips are normal, not exceptional.
In LEO, expect 10-100 single-event upsets per device per day. Standard ML frameworks have no concept of this. Resilient Compute detects corruption, bounds error propagation, and re-executes only what's needed.
Fault Tolerance Mechanisms
- Activation checksums at layer boundaries
- Selective re-execution (not full inference)
- Redundant execution for critical layers
- Uncertainty quantification on outputs
- Graceful corruption handling
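The first two mechanisms above can be sketched together: checksum activations at checkpoint boundaries, and when a stored activation fails verification, re-execute only the span since the last good checkpoint. Everything here (`CheckpointStore`, `verify_and_recover`, CRC32 as the checksum, byte-string activations) is a hypothetical toy; a real implementation would checksum tensors on the accelerator.

```python
import zlib

def crc(buf: bytes) -> int:
    # CRC32 as a cheap stand-in for an activation checksum.
    return zlib.crc32(buf)

class CheckpointStore:
    """Holds activations alongside the checksum taken at write time."""
    def __init__(self):
        self._data = {}

    def write(self, idx: int, buf: bytes) -> None:
        self._data[idx] = (bytes(buf), crc(buf))

    def read(self, idx: int):
        buf, expected = self._data[idx]
        return buf, crc(buf) == expected  # (activation, checksum_ok)

    def corrupt(self, idx: int, byte_pos: int) -> None:
        # Fault-injection hook: flip one bit, keep the stale checksum.
        buf, expected = self._data[idx]
        flipped = bytearray(buf)
        flipped[byte_pos] ^= 0x01
        self._data[idx] = (bytes(flipped), expected)

def verify_and_recover(store, layers, idx: int, interval: int):
    """If checkpoint `idx` fails its checksum, re-execute only the
    `interval` layers since the previous checkpoint, not the full pass."""
    buf, ok = store.read(idx)
    if ok:
        return buf, False
    prev_buf, _ = store.read(idx - interval)
    for layer in layers[idx - interval:idx]:
        prev_buf = layer(prev_buf)
    store.write(idx, prev_buf)
    return prev_buf, True
```

The selective re-execution is what keeps recovery overhead proportional to the checksum interval rather than the depth of the whole network.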
Status
| Milestone | Target |
| --- | --- |
| Fault Injection Framework | Q2 2026 |
| Research Benchmarks | Q3 2026 |
| Production Runtime | 2027 |
# Resilient inference with fault tolerance
result = resilient_compute.generate(
prompt=prompt,
fault_tolerance=FaultMode.DETECT_AND_RECOVER,
critical_layers=[0, 1, 2, -3, -2, -1],
checksum_interval=4 # Every 4 layers
)
# During execution:
# Layer 23: checksum mismatch detected
# Action: re-execute layers 20-27 only
# Overhead: 3.2% (vs 100% full re-run)
# Result: verified correct
result.confidence # 0.97 (high confidence)
result.faults_detected # 1
result.faults_recovered # 1
Backed by Research
Every runtime primitive is grounded in published research and open benchmarks.
Get Early Access
Join the companies preparing for orbital compute. Start with the simulators today; deploy the production runtime when the hardware is ready.