Orbital Runtime

Execute Workloads
Across Earth and Orbit

Kubernetes doesn't understand orbital mechanics. PyTorch doesn't adapt to solar eclipses. TensorFlow doesn't expect bit flips. We're building the runtime that does.

Status: Simulation and research available now. Production runtime aligned with first orbital DC deployments (2027).

Workload Distribution in Real Time

Watch how the runtime distributes workloads across orbital and terrestrial nodes based on energy, thermal, and connectivity state.

Live Scheduler View
4 Active Nodes · 847 Jobs/min · 43% Energy Saved

Node           Type    Power  Temp   Latency
orbital-1      LEO     847W   38°C   34ms
orbital-2      LEO     340W   -12°C  28ms
orbital-3      LEO     0W     -45°C  N/A
earth-us-west  Ground  2.1kW  22°C   12ms

Latest Scheduling Decision:
Migrating batch-inference to orbital-1 (energy surplus, 43% savings vs ground)
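The telemetry in the view above maps naturally onto a small per-node state model. A minimal sketch — the `NodeState` fields and the `reachable` rule are illustrative, not the runtime's actual schema:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class NodeState:
    """Hypothetical snapshot of one node in the scheduler view."""
    name: str
    kind: str                    # "LEO" or "Ground"
    power_w: float               # power currently available, watts
    temp_c: float                # board temperature, Celsius
    latency_ms: Optional[float]  # RTT to the client; None when unreachable

    @property
    def reachable(self) -> bool:
        # A node with no latency reading (e.g. out of ground-station view)
        # or no power (e.g. deep eclipse) cannot accept new work.
        return self.latency_ms is not None and self.power_w > 0


nodes = [
    NodeState("orbital-1", "LEO", 847, 38, 34),
    NodeState("orbital-2", "LEO", 340, -12, 28),
    NodeState("orbital-3", "LEO", 0, -45, None),
    NodeState("earth-us-west", "Ground", 2100, 22, 12),
]
schedulable = [n.name for n in nodes if n.reachable]
# orbital-3 drops out: no power, no latency reading
```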
01

Orbit Scheduler

Where should this workload run? When? At what fidelity?

Orbit Scheduler makes placement decisions based on orbital position, ground station visibility, energy availability, and latency requirements. It's the missing orchestration layer for hybrid Earth-orbit compute.

Capabilities

  • Orbit-aware placement (SGP4/SDP4 integrated)
  • Energy-aware scheduling (solar flux, eclipse, battery)
  • Latency-aware routing (ground stations, ISLs)
  • Predictive handover (pre-migrate before windows close)
  • Hybrid orchestration (span terrestrial + orbital)
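A minimal sketch of how these signals might combine into a placement decision. Function and field names are hypothetical, and a real scheduler would also fold in orbital propagation and handover prediction; the sketch only shows the core preference: filter by latency SLA and available power, favor orbital nodes (their solar power is surplus), and keep the best ground node as fallback.

```python
def place(latency_sla_ms, nodes):
    """Toy placement sketch: reachable + powered + within SLA,
    orbital-first by available power, best ground node as fallback."""
    candidates = [
        n for n in nodes
        if n["latency_ms"] is not None
        and n["latency_ms"] <= latency_sla_ms
        and n["power_w"] > 0
    ]
    orbital = sorted((n for n in candidates if n["kind"] == "LEO"),
                     key=lambda n: n["power_w"], reverse=True)
    ground = sorted((n for n in candidates if n["kind"] == "Ground"),
                    key=lambda n: n["power_w"], reverse=True)
    ranked = orbital + ground
    if not ranked:
        return None, None
    primary = ranked[0]
    # Only keep a ground fallback when the primary is in orbit.
    fallback = ground[0] if ground and primary["kind"] == "LEO" else None
    return primary, fallback


nodes = [
    {"name": "orbital-1", "kind": "LEO", "power_w": 847, "latency_ms": 34},
    {"name": "orbital-2", "kind": "LEO", "power_w": 340, "latency_ms": 28},
    {"name": "orbital-3", "kind": "LEO", "power_w": 0, "latency_ms": None},
    {"name": "earth-us-west", "kind": "Ground", "power_w": 2100, "latency_ms": 12},
]
primary, fallback = place(200, nodes)  # the 200ms SLA from the example below
```

With a 200ms SLA this routes to orbital-1 with earth-us-west as fallback, matching the scheduling decision shown in the example; a 20ms SLA would exclude every orbital node and fall through to ground.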

Status

Simulator Q2 2026
Production Runtime 2027

Example: Scheduling Decision
Input:
  workload: llm-inference
  model: llama-70b
  latency_sla: 200ms
  priority: normal

Orbit Scheduler Decision:

  # Current orbital state
  orbital_node_1: solar_peak, 847W available
  orbital_node_2: eclipse, battery_only, 340W
  earth_us_west: available, 2.1kW

  # Decision
  route_to: orbital_node_1
  reason: energy_surplus, latency_ok (34ms RTT)
  fallback: earth_us_west

  # Energy savings: 43% vs terrestrial
02

Adaptive Runtime

Inference that bends, not breaks, when power drops.

Standard ML runtimes fail when energy is constrained or thermal limits are reached. Adaptive Runtime dynamically adjusts precision, layer activation, and context length to deliver results within available resources.

Adaptation Strategies

  • Precision scaling (FP16 → INT8 → INT4)
  • Dynamic layer skipping
  • Context window reduction
  • Batch size adjustment (thermal-aware)
  • Graceful degradation (approximate results)
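These strategies can be pictured as a quality ladder the runtime walks down until a configuration fits the power budget. A sketch with illustrative (not measured) power figures:

```python
# Illustrative adaptation ladder, highest quality first.
# The power figures are assumptions for this sketch, not benchmarks.
LADDER = [
    {"precision": "FP16", "context": 8192, "power_w": 800},
    {"precision": "INT8", "context": 8192, "power_w": 450},
    {"precision": "INT8", "context": 4096, "power_w": 330},
    {"precision": "INT4", "context": 2048, "power_w": 180},
]


def plan_for_budget(energy_budget_w):
    """Walk the ladder from highest quality down until a configuration
    fits the available power; None means even the floor doesn't fit."""
    for step in LADDER:
        if step["power_w"] <= energy_budget_w:
            return step
    return None


plan = plan_for_budget(340)  # eclipse: battery only, as in the example below
```

With a 340W eclipse budget this lands on INT8 at a 4K context — the same precision and context cuts shown in the adaptation trace below.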

Status

Research Benchmarks Q2 2026
Production Runtime 2027

Example: Energy-Adaptive Inference
# Standard runtime (fails)
result = model.generate(prompt)
# ERROR: Insufficient power during eclipse

# Adaptive Runtime
result = adaptive_runtime.generate(
    prompt=prompt,
    energy_budget=340,  # Watts available
    latency_sla=200,    # ms
    quality=QoS.BEST_EFFORT
)

# Automatically:
# - Reduced precision: FP16 → INT8
# - Skipped layers: 40-55 (non-critical)
# - Context: 8K → 4K tokens
# - Result: delivered in 187ms at 312W
03

Resilient Compute

When bit flips are normal, not exceptional.

In LEO, expect 10-100 single-event upsets per device per day. Standard ML frameworks have no concept of this. Resilient Compute detects corruption, bounds error propagation, and re-executes only what's needed.

Fault Tolerance Mechanisms

  • Activation checksums at layer boundaries
  • Selective re-execution (not full inference)
  • Redundant execution for critical layers
  • Uncertainty quantification on outputs
  • Graceful corruption handling
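The first two mechanisms can be sketched end to end: verify a checksum at each block boundary and, on mismatch, re-execute only that block. In this toy version the check is a dual-execution compare (a production runtime would use cheaper activation checksums, e.g. ABFT-style sums, rather than running every block twice); the fault injector and all names are hypothetical.

```python
import hashlib
import struct

CHECKSUM_INTERVAL = 4  # layers per verified block, as in the example below


def checksum(vec):
    """Order-stable checksum over a list of activation values."""
    return hashlib.sha256(struct.pack(f"{len(vec)}d", *vec)).hexdigest()


class FlakyDevice:
    """Injects one transient SEU-style corruption on the Nth block run."""
    def __init__(self, fault_on_run):
        self.runs = 0
        self.fault_on_run = fault_on_run

    def run_block(self, x, layers):
        self.runs += 1
        out = list(x)
        for f in layers:
            out = [f(v) for v in out]
        if self.runs == self.fault_on_run:
            out[0] += 1e6  # transient bit-flip stand-in, happens once
        return out


def resilient_forward(x, layers, device):
    faults = 0
    for i in range(0, len(layers), CHECKSUM_INTERVAL):
        block = layers[i:i + CHECKSUM_INTERVAL]
        while True:
            a = device.run_block(x, block)
            b = device.run_block(x, block)  # independent run for the compare
            if checksum(a) == checksum(b):
                x = a
                break
            faults += 1  # mismatch: re-execute only this block, not the pass
    return x, faults


layers = [lambda v: v + 1.0] * 8      # toy 8-layer "model"
device = FlakyDevice(fault_on_run=3)  # corrupt the 3rd block execution
out, faults = resilient_forward([0.0, 0.0], layers, device)
# one fault detected and recovered by re-running a single 4-layer block
```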

Status

Fault Injection Framework Q2 2026
Research Benchmarks Q3 2026
Production Runtime 2027

Example: Fault Detection & Recovery
# Resilient inference with fault tolerance
result = resilient_compute.generate(
    prompt=prompt,
    fault_tolerance=FaultMode.DETECT_AND_RECOVER,
    critical_layers=[0, 1, 2, -3, -2, -1],
    checksum_interval=4  # Every 4 layers
)

# During execution:
# Layer 23: checksum mismatch detected
# Action: re-execute layers 20-27 only
# Overhead: 3.2% (vs 100% full re-run)
# Result: verified correct

result.confidence  # 0.97 (high confidence)
result.faults_detected  # 1
result.faults_recovered  # 1

Get Early Access

Join the companies preparing for orbital compute. Start with simulators today, deploy production runtime when hardware is ready.