Over two years ago, an MIT research group ran a simulation of the low-Earth orbit broadband constellations of OneWeb, SpaceX, and Telesat, and last January they repeated the simulation, updating it with revised constellation characteristics and adding Amazon’s Project Kuiper.
They ran the new simulation twice: once using the planned initial deployment of each constellation and a second time using the configuration shown below, which represents final deployments, assuming the change requests pending in January are approved. (SpaceX’s have since been approved.) I will discuss the second simulation here; you can consult the paper for the results of the initial-deployment simulation.
The following figure shows the total system throughput for each constellation as a function of the number of ground stations and whether or not the satellites have optical inter-satellite links (OISLs), enabling them to route traffic through the in-orbit grid. (The lines show averages, and the shaded regions show interquartile values).
Note that Telesat is committed to having OISLs in all of its satellites, and SpaceX will have them in the polar-orbit version 1.5 satellites launching this year and in all version 2 satellites starting next year. OneWeb initially planned to include OISLs but has decided against them for now, and Amazon has not committed to them but has formed an OISL hardware team.
The following figure shows the number of satellites in line of sight (LoS) at full deployment, along with population, as a function of latitude. All Amazon satellites are in inclined orbits, so while major population centers are served, polar regions are not, and the higher altitudes of the OneWeb and Telesat constellations increase the number of satellites in LoS.
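For a rough sense of the geometry behind that figure, here is a back-of-the-envelope sketch: the higher a shell, the larger the spherical cap of it a ground point can see above a given elevation mask, so more satellites tend to be in view. The satellite counts, altitudes, and 25-degree elevation mask below are illustrative assumptions rather than the paper’s exact parameters, and the sketch ignores the latitude-dependent density of inclined shells that drives much of the variation in the figure.

```python
import math

R_E = 6371.0  # mean Earth radius, km

def earth_central_angle(alt_km, min_elev_deg):
    """Half-angle of the spherical cap within which a satellite at alt_km
    sits above the given elevation mask (standard visibility geometry)."""
    eps = math.radians(min_elev_deg)
    return math.acos(R_E * math.cos(eps) / (R_E + alt_km)) - eps

def expected_in_view(n_sats, alt_km, min_elev_deg=25.0):
    """Rough expected number of satellites in line of sight, assuming the
    shell were spread uniformly over the sphere (real inclined shells are not)."""
    lam = earth_central_angle(alt_km, min_elev_deg)
    cap_fraction = (1.0 - math.cos(lam)) / 2.0  # visible fraction of the shell
    return n_sats * cap_fraction

# Illustrative constellation sizes and altitudes only:
for name, n_sats, alt_km in [("~550 km shell", 4408, 550),
                             ("~1200 km shell", 6372, 1200)]:
    print(f"{name}: ~{expected_in_view(n_sats, alt_km):.0f} satellites in view")
```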
If interested, you should read this and the earlier paper (links in the opening paragraph) for details on the methodology, assumptions, and results, but I will conclude with a couple of caveats.
This simulation ignores the 7,518 very low-Earth orbit satellites that have been approved for SpaceX, and the designs of all of the constellations are in flux. SpaceX will soon be launching version 1.5 satellites, followed by version 2 next year. Similarly, OneWeb will be launching improved satellites by the time the constellation is complete, and Telesat and Amazon are still in the design phase.
The simulation assumes that demand is proportional to population (based on a 0.1-degree resolution grid), so mobile use by ships, planes, and vehicles is not considered. It also assumes that each individual consumes an average of 300 kbps and that the total addressable market is 10% of the global population. As the authors admit, the 10% figure is optimistic. (Elon Musk expects 3-5%.) Since SpaceX will charge the same price in every nation, its per-capita subscription rates will vary with national income, and the companies’ target markets differ. For example, Telesat will not market to consumers.
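To make the arithmetic of that demand model concrete, here is a toy sketch. The 300 kbps and 10% figures are the paper’s stated assumptions; the grid-cell populations are invented for illustration.

```python
# Toy version of the demand assumption described above: demand in each
# 0.1-degree grid cell = population x addressable fraction x per-user rate.
PER_USER_KBPS = 300           # average consumption assumed per individual served
ADDRESSABLE_FRACTION = 0.10   # the paper's (admittedly optimistic) take rate

def cell_demand_gbps(population):
    """Offered load from one grid cell, in Gbps."""
    return population * ADDRESSABLE_FRACTION * PER_USER_KBPS / 1e6

cells = {"dense urban cell": 250_000, "rural cell": 1_200}  # made-up populations
for name, pop in cells.items():
    print(f"{name}: {cell_demand_gbps(pop):.3f} Gbps")
# With Musk's 3-5% expectation, demand scales down proportionally.
```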
While the specifics will change, and this and other simulations will have to be rerun over time, this simulation considers key variables, and general conclusions can be drawn. For example, in this simulation, maximum throughput is 13-42% higher when 20 Gbps OISLs are assumed. Currently, only SpaceX and Telesat are committed to OISLs. Still, since OISL technology is improving and OISLs also reduce latency, save on ground-station cost, and enable coverage at sea and in other isolated locations, I expect all operators to adopt them eventually. (We may also see OISLs between layers, for example, between Telesat’s LEO and GEO constellations.)
Thanks for posting this summary and update.
I’ve been interested in quantitative measures of Starlink latency, jitter, loss, duplication, and re-sequencing (along with the burst characteristics of these).
My purpose is to have numbers that I can dial into the network emulation gear that my company (InterWorking Labs - iwl.com) builds so that people and vendors can conveniently evaluate how well their gear will work over Starlink (and make any needed adjustments to their code before their own customers are affected).
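As a simplified illustration of what I mean by dialing numbers in, the stock Linux tc/netem facility can impose a similar impairment mix. Every number below is a placeholder rather than a measurement - the measurements are exactly what I am looking for.

```python
import subprocess

# A minimal sketch of applying assumed Starlink-like impairments on a Linux
# box in the test path using tc/netem (requires root). The interface name and
# all impairment values are placeholders, not measured Starlink behavior.
IFACE = "eth0"
netem_opts = [
    "delay", "40ms", "10ms",   # assumed mean one-way delay plus jitter
    "loss", "0.3%",            # assumed random packet loss
    "duplicate", "0.1%",       # occasional duplication
    "reorder", "1%",           # re-sequencing (netem reorders relative to the delay)
]

subprocess.run(
    ["tc", "qdisc", "replace", "dev", IFACE, "root", "netem", *netem_opts],
    check=True,
)
```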
I am, of course, interested in the routine behavior of Starlink.
But I am also interested, perhaps more interested, in the corner-case conditions that will occur randomly or infrequently on Starlink. These include things such as rain over a ground station, solar blanking due to reflection of the sun off the earth, or a transmitting satellite transiting the sun (or moon) from the point of view of a receiver. I’m sure that there are other things that can happen - such as bursts of solar radiation, hardware failures on the satellites, cooling and power issues, etc. (Not to mention complaints from astronomers who will be seeing a lot of satellite streaks on photographs.)
I am fond of the aphorism “In theory, theory and practice are the same, in practice they are not.”
I am concerned that Starlink, particularly when the inter-satellite links are in use, will exhibit the kinds of bumps that we see in terrestrial switched/routed networks.
Starlink seems to have some built-in opportunities for congestion, such as the asymmetry of up/down link radios. And those will be subject to the uncertainties of the atmosphere, most particularly heavy rain over the ground stations. (We see this today on systems such as DirecTV when a thunderstorm crosses over an uplink station.)
When this kind of thing happens, will Starlink reroute the traffic via a ground path to another uplink, thus increasing the possibility of congestion there?
With regard to the direct inter-satellite links, things are much more predictable - one can calculate when, from the perspective of each satellite, the other satellites will be transiting across the face of the sun (or moon, or begin to be eclipsed by the earth). And blanking reflections from the earth can also be calculated in advance. In other words, direct inter-satellite routing issues often can be calculated in advance so that we won’t suffer from the convergence periods that happen with more dynamic routing algorithms that are anticipating unpredicted path failures.
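As a sketch of how predictable such transits are, the core test is just an angular-separation check between the Sun and the transmitting satellite as seen from the receiver. The positions below are placeholders; a real tool would get them from an orbit propagator, which is precisely why these events can be scheduled around in advance.

```python
import numpy as np

SUN_ANGULAR_RADIUS_DEG = 0.27   # approximate, as seen from Earth orbit

def sun_transit(receiver_pos, transmitter_pos, sun_pos, margin_deg=1.0):
    """True if, from the receiver, the transmitting satellite appears within
    margin_deg of the Sun's disk (i.e., the link stares into the Sun).
    Positions are ECI-style vectors in km; in practice they would come from
    an orbit propagator and an ephemeris."""
    to_tx = np.asarray(transmitter_pos) - np.asarray(receiver_pos)
    to_sun = np.asarray(sun_pos) - np.asarray(receiver_pos)
    cos_sep = np.dot(to_tx, to_sun) / (np.linalg.norm(to_tx) * np.linalg.norm(to_sun))
    sep_deg = np.degrees(np.arccos(np.clip(cos_sep, -1.0, 1.0)))
    return sep_deg <= SUN_ANGULAR_RADIUS_DEG + margin_deg

# Placeholder geometry: receiver at ~550 km altitude, transmitter ahead of it,
# and the Sun nearly along the same line of sight.
rx = [6921.0, 0.0, 0.0]
tx = [6921.0, 1000.0, 0.0]
sun = [0.0, 1.5e8, 5.0e5]
print(sun_transit(rx, tx, sun))   # True: this link would be sun-blinded
```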
Nevertheless, how is Starlink going to manage the queues when the input exceeds the capacity of the outgoing paths?
Internet traffic is almost always bursty - in fact one of the reasons why the Internet’s packet switching model has prevailed over the old telco circuit switched model is that packet switching is essentially a statistical multiplexor that works satisfactorily most of the time.
Right now, Starlink is in its infancy; the presented traffic is still probably rather light, and we aren’t really stressing the system. I am curious how Starlink will evolve to simultaneously handle unpredictable problems (thunderstorms over ground stations, satellite failures) and predictable ones (solar blanking) without ending up with packet queue growth in satellites, possibly engendering problems such as bufferbloat or long latency coupled with high jitter.
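Here is a toy, single-queue illustration of the failure mode I am worried about: when bursty offered load exceeds an egress link’s drain rate and the buffer is deep and unmanaged, the overload shows up as standing queueing delay rather than loss. All of the numbers are invented.

```python
import random

# Toy FIFO model of one satellite egress link: bursty arrivals vs. a fixed
# drain rate. Average offered load here slightly exceeds capacity, so a deep,
# unmanaged buffer converts the overload into latency (bufferbloat).
random.seed(1)

LINK_RATE_PKTS = 100      # packets the link can send per millisecond
BUFFER_PKTS = 20_000      # deep, unmanaged FIFO
queue = 0
worst_delay_ms = 0.0

for ms in range(2_000):
    arrivals = 500 if random.random() < 0.2 else 60   # bursty offered load
    queue = min(queue + arrivals, BUFFER_PKTS)        # tail-drop beyond the buffer
    queue = max(queue - LINK_RATE_PKTS, 0)            # drain at link rate
    worst_delay_ms = max(worst_delay_ms, queue / LINK_RATE_PKTS)

print(f"worst queueing delay ~{worst_delay_ms:.0f} ms with a {BUFFER_PKTS}-packet buffer")
```

With active queue management or a much shallower buffer, the same overload would surface as loss or marking, which most transport stacks handle far better than a couple hundred milliseconds of standing delay.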
I eagerly await real measurements, real data.
I’d like measures of latency and jitter over a spectrum of packet sizes and inter-packet intervals. I’d like to know the stability of the latency and the frequency and shape of bursts of jitter.
I’d like to see measures of route changes especially those that could cause saturation of a satellite’s queue space or output capacity.
I’d like to see characterizations of the impact of rain at a ground station and of various forms of solar (and lunar) blanking.
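The sort of probe I have in mind is simple. A minimal sketch, assuming a UDP echo responder on the far side of the Starlink link (the address below is a placeholder), that records RTT, jitter, and loss across a spectrum of packet sizes:

```python
import socket
import statistics
import time

ECHO_HOST, ECHO_PORT = "192.0.2.10", 7   # placeholder echo responder
INTERVAL_S = 0.02                        # inter-packet interval

def probe(size_bytes, count=50):
    """Send `count` UDP probes of `size_bytes` and return
    (median RTT ms, jitter as RTT standard deviation ms, loss fraction)."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(2.0)
    rtts = []
    for _ in range(count):
        t0 = time.monotonic()
        sock.sendto(b"x" * size_bytes, (ECHO_HOST, ECHO_PORT))
        try:
            sock.recv(65535)
            rtts.append((time.monotonic() - t0) * 1000.0)
        except socket.timeout:
            pass                         # counted as loss
        time.sleep(INTERVAL_S)
    loss = 1 - len(rtts) / count
    jitter = statistics.pstdev(rtts) if len(rtts) > 1 else 0.0
    median = statistics.median(rtts) if rtts else float("nan")
    return median, jitter, loss

for size in (64, 512, 1400):
    median_rtt, jitter, loss = probe(size)
    print(f"{size:5d} B: median RTT {median_rtt:.1f} ms, "
          f"jitter {jitter:.1f} ms, loss {loss:.0%}")
```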
Starlink looks promising.
However, we already know that many of our network protocol stacks, when faced with imperfect network conditions, are not always well written or adequately tested. Starlink probably won’t add new forms of network imperfection, but it will possibly deliver a new mix that could affect some Internet code stacks in undesirable ways.
Karl,
Thanks for the detailed, thoughtful comment. You raise about a dozen questions that I had not thought of and that are not considered by a simple simulation like the one I described. I wonder to what extent SpaceX has considered or modeled the natural and stochastic contingencies you mention.
To some extent, they will be able to mitigate your concerns by launching more and more satellites and building more ground stations. One group also simulated the possibility of using idle end-user terminals as ground stations (https://www.circleid.com/posts/20191230_starlink_simulation_low_latency_without_intersatellite_laser_links/). Improved technology will also help.
I can’t help thinking that SpaceX (or any of the others) should hire you as a consultant even if they don’t have data for your simulator—have you approached them?
Larry
I just saw this excerpt from an interview of Dave Taht and thought of your comment.
https://youtu.be/AjZXx4N1tmY?t=2727