Network neutrality rules have barred the sort of quality-assurance features for consumer internet service that Comcast is now preparing to introduce nationwide, in stages. But such rules are now in abeyance in the U.S. market.
The “low-lag” service aims to improve the experience for “interactive applications like gaming, videoconferencing, and virtual reality.”
Of course, behind all the marketing hype we can expect to hear are some physical realities. Since the internet is actually a “network of networks,” neither bandwidth nor latency is generally under any single participant’s control. No matter what performance is claimed for any single physical infrastructure, the end-to-end path packets take is non-deterministic.
In other words, the exact path cannot always be specified rigorously, which means actual performance is difficult to guarantee. For Comcast, which uses a hybrid fiber coax access network, there are other practical considerations as well.
It is generally agreed that latency performance is best on a fiber-to-home connection, moderate on a hybrid fiber coax or copper digital subscriber line connection and worst on a geosynchronous satellite connection (lower latency is one touted advantage of internet access from low-earth-orbit satellite constellations).
The point is that lots of independent variables must be controlled to ensure low end-to-end latency performance.
Processing delays in routers or switches can affect latency, but so do the distance a packet has to travel and the actual choice of networks over which any particular packet is forwarded.
As a rule, observers expect the lowest latency (1–5 ms) on optical fiber networks. HFC and DSL latency is more often characterized as 10–30 ms. Geosynchronous satellite connections have high latency (500–700 ms).
But latency can arise for any number of reasons. Long distances are an issue. So is network congestion caused by heavy router and switch demand at peak hours of usage. Packet routes with more “hops” (segments) will increase latency as well.
Latency might also be increased by heavy concurrent use of many applications on a bandwidth-limited connection.
The physical condition of all network elements (switches, routers, cables, connectors) also makes a difference. Signal interference around Wi-Fi routers and signal barriers such as walls also make a difference.
Server-side delays on the far end of a consumer’s internet connection also play a role in latency performance.
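To make the arithmetic concrete, here is a minimal Python sketch of how those factors add up into an end-to-end round-trip figure. All the distances, hop counts and per-hop delays are invented for illustration, not measurements of any real network.

# Rough, illustrative round-trip latency estimate (all numbers are assumptions).

PROPAGATION_SPEED_KM_PER_MS = 200.0  # light in fiber travels roughly 200 km per millisecond

def estimated_rtt_ms(distance_km, hops, per_hop_processing_ms=0.1,
                     queuing_ms=2.0, server_ms=5.0):
    """Sum the main latency components for one round trip."""
    propagation = 2 * distance_km / PROPAGATION_SPEED_KM_PER_MS  # out and back
    processing = 2 * hops * per_hop_processing_ms                # router/switch handling
    return propagation + processing + queuing_ms + server_ms

# A hypothetical 1,500 km path with 12 hops: distance alone contributes about 15 ms.
print(round(estimated_rtt_ms(distance_km=1_500, hops=12), 1), "ms")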
Latency is an issue different from bandwidth and arguably is a more complex problem to solve for an internet access end user.
Latency is the delay in data transmission, measured in milliseconds (ms). It represents how long it takes for a data packet to travel from the source to the destination and back. High latency causes lag, which is especially noticeable in real-time applications like video calls, gaming, or financial trading.
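As a simple illustration of measuring that round trip, the sketch below times a TCP connection setup to a server, which approximates one network round trip. The hostname is a placeholder; real tools such as ping or traceroute measure this more directly.

import socket
import time

def tcp_connect_rtt_ms(host, port=443, timeout=3.0):
    """Approximate round-trip latency by timing a TCP handshake (one round trip)."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass  # connection established; we only care about the elapsed time
    return (time.perf_counter() - start) * 1000.0

# "example.com" is a placeholder host; substitute any reachable server.
print(f"{tcp_connect_rtt_ms('example.com'):.1f} ms")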
ISPs use several techniques to reduce latency, including optimized routing of packets, aided by direct peering arrangements with other transport providers. Tuning routing protocols such as BGP to keep paths short and stable also helps.
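BGP itself selects routes by policy rather than by measured delay, so the sketch below is only a simplified illustration of latency-aware path selection across a hypothetical “network of networks”: given per-link delays, it finds the lowest-latency sequence of networks to traverse. The network names and delay figures are invented for the example.

import heapq

# Hypothetical inter-network links with one-way delays in milliseconds.
LINKS = {
    "access_isp":  {"transit_a": 8, "ix_peering": 3},
    "transit_a":   {"backbone": 12},
    "ix_peering":  {"backbone": 6, "cdn_edge": 4},
    "backbone":    {"cdn_edge": 10},
    "cdn_edge":    {},
}

def lowest_latency_path(links, src, dst):
    """Dijkstra's algorithm over link delays: returns (total_ms, path)."""
    queue = [(0, src, [src])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, delay in links[node].items():
            heapq.heappush(queue, (cost + delay, nxt, path + [nxt]))
    return None

print(lowest_latency_path(LINKS, "access_isp", "cdn_edge"))
# Direct peering (3 + 4 = 7 ms) beats the transit route (8 + 12 + 10 = 30 ms).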
Since distance contributes to latency, content delivery networks are used to put content closer to actual end users. Edge computing and server colocation are forms of this strategy.
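In the same spirit, a content delivery network’s effect can be illustrated by serving content from whichever replica sits closest to the user. The sketch below simply assumes measured round-trip times to a few hypothetical edge locations and picks the nearest one.

# Hypothetical measured round-trip times (ms) from one user to candidate edge servers.
edge_rtts_ms = {
    "edge-local-metro": 6.0,
    "edge-regional": 22.0,
    "origin-datacenter": 95.0,
}

# Serve the content from whichever location answers fastest.
nearest = min(edge_rtts_ms, key=edge_rtts_ms.get)
print(f"serve from {nearest} ({edge_rtts_ms[nearest]} ms round trip)")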
Traffic shaping is another possible tactic, giving some classes of traffic priority over delivery of other, less latency-sensitive traffic. Prioritizing videoconferencing, voice, virtual reality or gaming bits is one example.
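A minimal sketch of that kind of prioritization, assuming just two traffic classes, is shown below: latency-sensitive packets (videoconferencing, gaming) are always dequeued ahead of bulk traffic, which is one simple form of what shaping and scheduling disciplines do.

import heapq
from itertools import count

LATENCY_SENSITIVE = 0   # videoconferencing, voice, VR, gaming
BULK = 1                # downloads, backups, software updates

class PriorityShaper:
    """Toy strict-priority scheduler: class 0 always transmits before class 1."""
    def __init__(self):
        self._queue = []
        self._order = count()  # preserves arrival order within a class

    def enqueue(self, packet, traffic_class):
        heapq.heappush(self._queue, (traffic_class, next(self._order), packet))

    def dequeue(self):
        return heapq.heappop(self._queue)[2] if self._queue else None

shaper = PriorityShaper()
shaper.enqueue("backup chunk", BULK)
shaper.enqueue("video frame", LATENCY_SENSITIVE)
shaper.enqueue("game update packet", LATENCY_SENSITIVE)
print(shaper.dequeue(), "|", shaper.dequeue(), "|", shaper.dequeue())
# video frame | game update packet | backup chunk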
Other methods to avoid excessive buffering or congestion also help. It is not clear which of these techniques Comcast will use, but a reasonable guess is “all of the above.”
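One such method is active queue management, which keeps buffers from growing so deep that queuing delay balloons (“bufferbloat”). The sketch below is a deliberately simplified illustration that drops arriving packets once estimated queuing delay exceeds a target, loosely in the spirit of algorithms like CoDel rather than a faithful implementation of any of them; the link rate and delay target are assumptions.

from collections import deque

class TinyAQM:
    """Simplified active queue management: bound queuing delay, not just queue length."""
    def __init__(self, link_rate_pkts_per_ms=1.0, target_delay_ms=5.0):
        self.queue = deque()
        self.link_rate = link_rate_pkts_per_ms
        self.target_delay_ms = target_delay_ms

    def enqueue(self, packet):
        # Estimated queuing delay if this packet joins the line.
        est_delay_ms = len(self.queue) / self.link_rate
        if est_delay_ms > self.target_delay_ms:
            return False  # drop (or mark) rather than let the buffer bloat
        self.queue.append(packet)
        return True

aqm = TinyAQM()
accepted = sum(aqm.enqueue(f"pkt-{i}") for i in range(20))
print(f"accepted {accepted} of 20 packets; queue depth {len(aqm.queue)}")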