December 03, 2025 · RotaStellar Team
Conjunction Analysis at Scale: 10,000 Satellites and Growing
The computational challenges nobody talks about-and what we learned building a system to handle them
When we started building our conjunction analysis system, we thought the hard part would be orbital mechanics. Propagate trajectories, compute closest approaches, estimate collision probability. Well-understood problems with established algorithms.
We were wrong. The orbital mechanics are the easy part.
The hard part is doing it at scale-continuously, for 10,000+ active satellites and 35,000+ trackable debris objects, with update latencies measured in seconds instead of hours. The computational complexity isn’t linear. And the naive approaches that work for a handful of satellites fall apart completely when you’re screening a constellation of thousands.
Here’s what we learned.
The O(n²) wall
The textbook approach to conjunction screening is straightforward: for each object in your catalog, propagate its trajectory forward. For each pair of objects, compute the closest approach. If the miss distance is below a threshold, flag it for detailed analysis.
The problem is combinatorics. With n objects, you have n(n-1)/2 pairs to evaluate. At 10,000 objects, that’s about 50 million pairs. At 45,000 objects (the current trackable catalog), it’s over a billion pairs. Per screening window.
You can’t brute-force your way through a billion pair evaluations every few minutes. The computation doesn’t scale.
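To make the scaling concrete, here is roughly what the brute-force version looks like. This is a sketch, not our code: `propagate_state` and the `norad_id` attribute stand in for whatever propagator and catalog schema you actually use.

```python
import numpy as np
from itertools import combinations

MISS_THRESHOLD_KM = 5.0  # flag pairs that pass closer than this

def naive_screen(objects, propagate_state, epochs):
    """Brute-force pairwise screening: every pair, every epoch.

    With n objects this is n*(n-1)/2 pair checks per epoch -- about
    50 million at n=10,000 and over a billion at n=45,000.
    """
    flagged = []
    for t in epochs:
        # Propagating each object once per epoch is the cheap part.
        positions = {obj.norad_id: propagate_state(obj, t) for obj in objects}
        # Checking every pair is the part that doesn't scale.
        for a, b in combinations(objects, 2):
            miss_km = np.linalg.norm(positions[a.norad_id] - positions[b.norad_id])
            if miss_km < MISS_THRESHOLD_KM:
                flagged.append((a.norad_id, b.norad_id, t, miss_km))
    return flagged
```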
The standard solution is spatial filtering-using orbital element differences to quickly eliminate pairs that can’t possibly have close approaches. If two objects are in orbits with apogee/perigee bands that never overlap, you don’t need to compute their detailed trajectories. This is well-known and everyone does it.
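A minimal version of that filter might look like the sketch below, assuming each catalog entry carries perigee and apogee altitudes in kilometers. Real filters apply several orbital-element tests, not just this one.

```python
def apogee_perigee_filter(obj_a, obj_b, pad_km=50.0):
    """Return False if two objects' radial bands can never overlap.

    If A's apogee (plus a safety pad for eccentricity and drag uncertainty)
    sits below B's perigee, or vice versa, no close approach is possible and
    the pair is dropped before any trajectory is propagated.
    """
    if obj_a.apogee_km + pad_km < obj_b.perigee_km:
        return False
    if obj_b.apogee_km + pad_km < obj_a.perigee_km:
        return False
    return True  # bands overlap: keep the pair for detailed screening
```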
But here’s what surprised us: spatial filtering gets you maybe 2-3 orders of magnitude reduction. You’re still left with millions of pairs that could interact and need detailed evaluation. For Starlink alone-over 6,000 satellites in similar orbital shells-nearly every satellite can potentially approach every other Starlink satellite. The filtering doesn’t help within a dense constellation.
We needed a different approach.
Time-domain partitioning
The insight that changed our architecture came from thinking about time differently.
Most conjunction screening asks: “Over the next 7 days, which pairs will have close approaches?” This requires propagating all objects for the full screening window and checking all pairs at high time resolution.
We inverted the question: “At this specific future epoch, which objects will be in this specific region of space?” That question is much cheaper to answer.
By partitioning the screening window into discrete time bins (we use 10-second intervals) and partitioning space into hierarchical volume elements, we convert the conjunction search into a series of spatial index queries. Objects get tagged with which space-time cells they’ll occupy. Potential conjunctions only need detailed evaluation when objects share the same cell.
The computational complexity shifts from O(n²) per screening window to something closer to O(n × c) where c is the average number of co-occupants per cell. In sparse orbital regimes, c is tiny. Even in dense constellations, c stays manageable because the time partitioning limits how many objects overlap at any specific moment.
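A stripped-down sketch of the idea follows. It assumes the same hypothetical `propagate_state` helper as earlier, uses a single flat grid rather than hierarchical volume elements, and omits the neighboring-cell checks a real implementation needs.

```python
import numpy as np
from collections import defaultdict
from itertools import combinations

TIME_BIN_S = 10.0     # screening window cut into 10-second bins
CELL_SIZE_KM = 50.0   # edge length of a coarse spatial cell

def spacetime_screen(objects, propagate_state, t_start, t_end):
    """Bucket objects by (time bin, spatial cell), then only consider pairs
    that share a bucket -- roughly O(n * c) work instead of O(n^2)."""
    buckets = defaultdict(list)
    for t in np.arange(t_start, t_end, TIME_BIN_S):
        for obj in objects:
            r = propagate_state(obj, t)  # position vector in km
            cell = tuple(np.floor(r / CELL_SIZE_KM).astype(int))
            buckets[(int(t // TIME_BIN_S), cell)].append(obj)

    candidates = set()
    for occupants in buckets.values():
        # Only objects sharing a space-time cell become candidate pairs.
        # (A real implementation also checks neighboring cells so pairs
        # straddling a cell boundary aren't missed.)
        for a, b in combinations(occupants, 2):
            candidates.add(tuple(sorted((a.norad_id, b.norad_id))))
    return candidates  # pairs still needing detailed close-approach analysis
```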
This approach let us hit our latency targets. We can rescreen the entire catalog in under three minutes on modest hardware, with incremental updates processing in seconds.
The covariance propagation problem
Collision probability isn’t just about predicted miss distance. It’s about uncertainty. Two objects predicted to pass 500 meters apart might be fine-or might collide-depending on how confident we are in their orbits.
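One way to see this is to estimate the collision probability by Monte Carlo in the encounter plane. The sketch below is illustrative only: the covariances and hard-body radius are made-up numbers, and production systems typically use analytical 2D integrals rather than sampling.

```python
import numpy as np

def collision_probability_mc(miss_vector_m, combined_cov_m2, hard_body_radius_m,
                             n_samples=500_000, rng=None):
    """Estimate Pc by sampling the relative position error in the encounter plane.

    miss_vector_m: nominal 2D miss vector (meters) in the encounter plane.
    combined_cov_m2: 2x2 combined positional covariance of both objects.
    Pc is the fraction of samples falling inside the combined hard-body radius.
    """
    rng = rng or np.random.default_rng(0)
    samples = rng.multivariate_normal(miss_vector_m, combined_cov_m2, size=n_samples)
    return float(np.mean(np.linalg.norm(samples, axis=1) < hard_body_radius_m))

# The same 500 m nominal miss, under different levels of orbit uncertainty:
tight = np.diag([50.0**2, 50.0**2])    # 50 m sigmas: well-tracked objects
loose = np.diag([800.0**2, 800.0**2])  # 800 m sigmas: sparse tracking
print(collision_probability_mc([500.0, 0.0], tight, 10.0))  # effectively zero
print(collision_probability_mc([500.0, 0.0], loose, 10.0))  # small but not ignorable
```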
The standard approach propagates covariance matrices alongside state vectors, using linear error propagation. This works reasonably well for a few days, but we found it systematically underestimates uncertainty for longer prediction horizons and for objects with sparse tracking data.
The problem is that real orbit determination errors aren’t Gaussian. There are systematic biases from atmospheric density uncertainty, solar radiation pressure modeling, and sensor calibration. Linear covariance propagation doesn’t capture these.
We developed what we call “ensemble covariance augmentation”-running parallel propagations with perturbed atmospheric and solar models, then using the spread of outcomes to inflate the covariance envelope beyond what linear propagation suggests. It’s computationally more expensive than analytical propagation, but it produces probability estimates that better match historical conjunction outcomes.
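The sketch below conveys the general idea, not our implementation: the `propagate` function, the perturbation sigmas, and the choice to also sample the initial state are placeholders you would replace with your own force models and tuning.

```python
import numpy as np

def ensemble_augmented_covariance(state0, cov0, propagate, horizon_s,
                                  density_sigma=0.15, srp_sigma=0.10,
                                  n_members=50, rng=None):
    """Estimate an end-of-horizon covariance from an ensemble of propagations
    with perturbed force models, rather than from linear propagation alone.

    `propagate(state, horizon_s, density_scale, srp_scale)` is a placeholder
    for a numerical propagator that accepts multiplicative perturbations on
    atmospheric density and solar radiation pressure; the sigma values here
    are illustrative, not tuned.
    """
    rng = rng or np.random.default_rng(1)
    end_states = []
    for _ in range(n_members):
        density_scale = 1.0 + rng.normal(0.0, density_sigma)  # density model error
        srp_scale = 1.0 + rng.normal(0.0, srp_sigma)          # SRP model error
        x0 = rng.multivariate_normal(state0, cov0)            # initial-state error
        end_states.append(propagate(x0, horizon_s, density_scale, srp_scale))
    end_states = np.asarray(end_states)
    # The ensemble spread becomes the (inflated) covariance used for Pc.
    return np.cov(end_states, rowvar=False)
```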
When we backtested against actual close approaches from the past two years, the augmented covariances produced calibrated probability estimates-meaning our 1-in-1000 probability events actually occurred about 1 in 1000 times. The linear-only covariances were systematically overconfident by a factor of 3-5.
For operational conjunction warnings, calibration matters. Operators get hundreds of conjunction alerts per week for a large constellation. If your probabilities are overconfident, you’re either missing real risks or (more likely) you’ve tuned your thresholds to compensate, which means your risk metrics don’t mean what they say.
The data fusion mess
The single biggest practical challenge wasn’t algorithms. It was data.
Orbital data comes from multiple sources with different formats, different accuracy, different update frequencies, and different systematic biases. Space-Track TLEs. Commercial ephemeris providers. Operator-provided state vectors. Supplemental sensor data. Each source has its own conventions, its own reference frames, its own quirks.
Getting these to play together reliably is engineering drudgery, but it’s where most conjunction systems actually fail. An operator provides a high-accuracy ephemeris for their satellite, but it’s in a slightly different reference frame than the TLEs for the debris catalog. The conjunction screening runs, produces a miss distance that looks safe, but the reference frame inconsistency introduced a 200-meter bias that nobody noticed.
We spent more engineering time on data validation and reconciliation than on orbital mechanics. Every data source gets cross-checked against independent observations. Reference frame transformations are explicit and logged. Systematic biases are estimated and removed. It’s not glamorous work, but it’s where the actual reliability comes from.
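One cheap guard that pays for itself is making the reference frame part of the state type, so states in different frames cannot be differenced silently. The sketch below is a simplified illustration; the frame names and transform registry are placeholders, not a real frame library.

```python
import logging
from dataclasses import dataclass
import numpy as np

logger = logging.getLogger("frames")

@dataclass(frozen=True)
class StateVector:
    epoch: float       # seconds past a common reference epoch
    frame: str         # e.g. "GCRF", "TEME"
    r_km: np.ndarray   # position, km
    v_kms: np.ndarray  # velocity, km/s

# Registry of explicit frame transforms; each entry maps a StateVector to a
# StateVector. Real transforms (TEME <-> GCRF, etc.) would be registered here.
TRANSFORMS = {}

def to_frame(state: StateVector, target: str) -> StateVector:
    """Convert a state to `target`, refusing to guess and logging every hop."""
    if state.frame == target:
        return state
    key = (state.frame, target)
    if key not in TRANSFORMS:
        raise ValueError(f"no registered transform {state.frame} -> {target}")
    logger.info("frame transform %s -> %s at epoch %.1f", state.frame, target, state.epoch)
    return TRANSFORMS[key](state)

def relative_position_km(a: StateVector, b: StateVector) -> np.ndarray:
    """Difference two states only after forcing them into the same frame."""
    b_in_a = to_frame(b, a.frame)
    return a.r_km - b_in_a.r_km
```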
What the scale reveals
Operating at catalog scale reveals patterns you don’t see with smaller datasets.
We noticed that certain orbital shells have conjunction statistics that don’t match what you’d expect from uniform random distributions. There’s temporal clustering-more close approaches happen at certain times of day and on certain days of the week than randomness would predict.
When we dug in, we found the cause: coordinated maneuvering. Constellations perform station-keeping maneuvers on schedules. Those maneuvers temporarily modify collision probability in ways that don’t show up in TLE-based screening, which typically lags the actual maneuver by hours or days.
The implication is that traditional conjunction screening-based on publicly available TLEs-systematically misses maneuver-induced risk spikes. For operational awareness, you need either predictive maneuver models or real-time data feeds. We’re working on both.
What’s next for us
The current catalog of 45,000 trackable objects will grow to 100,000+ within five years as debris tracking improves and new constellations launch. Our architecture is designed to scale to that, but we’re already thinking about the next bottleneck.
The fundamental limit isn’t computation-it’s data latency. How quickly can we get updated observations, process them into orbits, and rescreen for conjunctions? Objects in orbit move at kilometers per second, so minutes matter.
We’re exploring partnerships for lower-latency observation data. And we’re building predictive models that can estimate where objects will be even before we have tracking updates-buying back some of the latency budget.
The conjunction analysis problem isn’t solved. It’s managed, temporarily, until the next doubling of catalog size forces another architectural rethink. That’s the nature of scaling.
Our conjunction analysis is available through the Orbital Intelligence Platform. See it in action.