Friday, January 31, 2025

Comcast "Low Lag" Consumer Internet Access Service Gets Commercialized

Network neutrality rules have barred the sort of quality-assurance features for consumer service that Comcast now is preparing to introduce nationwide, in stages. But such rules now are in abeyance in the U.S. market. 


The “low-lag” service aims to improve the experience for “interactive applications like gaming, videoconferencing, and virtual reality.” 


Of course, behind all the marketing hype we can expect to hear are some physical realities. Since the internet is actually a “network of networks,” neither bandwidth nor latency is generally under any single participant’s control. No matter what performance is claimed on any single physical infrastructure, the end-to-end path packets take is non-deterministic. 


In other words, the exact path cannot always be specified rigorously, which means actual end-to-end performance is difficult to guarantee. For Comcast, which relies on a hybrid fiber coax access network, there are other practical considerations as well.


It is generally agreed that latency performance is best on a fiber-to-home connection, moderate on a hybrid fiber coax or copper digital subscriber line connection and worst on a geosynchronous satellite connection (lower latency being one touted advantage of internet access from low-earth-orbit satellite constellations). 


The point is that lots of independent variables must be controlled to ensure low end-to-end latency performance. 

Latency Source | Typical Contribution (%) | Description
In-Home Network (Wi-Fi, Router, LAN) | 5–20% | Wi-Fi interference, old routers, and internal LAN delays can introduce latency. Ethernet generally has lower latency than Wi-Fi.
ISP Core & Access Network | 10–30% | Delays within the ISP's infrastructure, including fiber, cable, or DSL transmission, routing, and congestion effects.
Internet Backbone & Peering | 20–50% | Transit across multiple networks, routing inefficiencies, and the number of hops between ISPs contribute to this latency.
Far-End Server Processing | 10–40% | The speed at which the destination server processes and responds to requests, affected by server load, geographic distance, and CDN availability.
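
As a rough illustration of how those shares combine, here is a small Python sketch. The millisecond figures are assumptions chosen to fall inside the ranges in the table above, not measurements of any actual connection.

# Illustrative component delays (assumed values, not measurements)
contributions_ms = {
    "in-home network": 3,
    "ISP access/core": 8,
    "backbone & peering": 15,
    "far-end server": 10,
}

total = sum(contributions_ms.values())
for source, ms in contributions_ms.items():
    print(f"{source:>20}: {ms:3d} ms ({ms / total:.0%})")
print(f"{'end-to-end':>20}: {total:3d} ms")

The point of the arithmetic is simply that no single contributor dominates in every case, so no single fix guarantees a low end-to-end figure.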


Processing delays in routers or switches can affect latency, but so do the distance a packet has to travel and the choice of networks over which any particular packet is forwarded. 


As a rule, observers expect the lowest latency (1–5 ms) on optical fiber networks. HFC and DSL latency is more often characterized as 10 ms to 30 ms. Geosynchronous satellite connections have high latency (500–700 ms).
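
A quick back-of-the-envelope sketch in Python shows why the geosynchronous figure is so large: propagation delay alone accounts for most of it. The distances and speeds below are approximations, and processing and queuing delays are ignored.

# Propagation delay only; processing, queuing and serialization are ignored
SPEED_OF_LIGHT_KM_S = 299_792   # in a vacuum; roughly 200,000 km/s in glass fiber

def round_trip_ms(one_way_km: float, speed_km_s: float) -> float:
    return 2 * one_way_km / speed_km_s * 1000

# A geosynchronous satellite orbits about 35,786 km up, so one way to the
# destination means going up and back down: roughly 71,572 km.
geo_one_way_km = 2 * 35_786
print(f"GEO propagation alone: {round_trip_ms(geo_one_way_km, SPEED_OF_LIGHT_KM_S):.0f} ms")
# Prints roughly 477 ms, before any other source of delay is added.

print(f"100 km of fiber: {round_trip_ms(100, 200_000):.1f} ms")
# Prints 1.0 ms, which is why terrestrial fiber sits at the low end.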


But latency can arise for any number of reasons. Long distances are an issue. So is network congestion caused by heavy router and switch demand at peak hours of usage. Packet routes with more “hops” (segments) will add latency as well. 


Latency might also be increased by heavy concurrent use of many applications on a bandwidth-limited connection. 


The physical condition of all network elements (switches, routers, cables, connectors) also makes a difference, as do Wi-Fi signal interference and signal barriers such as walls. 


Server-side delays on the far end of a consumer’s internet connection also play a role in latency performance. 


Latency is a different issue from bandwidth and arguably a more complex problem to solve for an internet access end user.


Latency is the delay in data transmission, measured in milliseconds (ms). It represents how long it takes for a data packet to travel from the source to the destination and back. High latency causes lag, which is especially noticeable in real-time applications like video calls, gaming, or financial trading.
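
For a concrete sense of what is being measured, a minimal Python sketch might look like the following. The hostname is a placeholder, and the time to complete a TCP handshake is only a rough proxy for round-trip latency, not a formal measurement method.

import socket
import time

def measure_rtt_ms(host: str, port: int = 443, timeout: float = 3.0) -> float:
    """Approximate round-trip latency in milliseconds, using the time
    to complete a TCP handshake as a rough proxy."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass  # connection established; handshake complete
    return (time.perf_counter() - start) * 1000.0

if __name__ == "__main__":
    # "example.com" is a placeholder destination
    print(f"Approximate RTT: {measure_rtt_ms('example.com'):.1f} ms")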


ISPs use several techniques to reduce latency, including optimized routing of packets, aided by direct peering arrangements with other transport providers. Careful tuning of routing policies (via BGP, for example) also helps. 
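
The intuition behind peering is simply that a directly connected path crosses fewer networks. A toy sketch follows; the network names and paths are invented, and real BGP policy weighs many attributes beyond path length.

# Toy illustration: a direct peering path crosses fewer networks than a
# path through intermediate transit providers (names are made up).
candidate_paths = {
    "via transit providers": ["AccessISP", "Transit-A", "Transit-B", "ContentNetwork"],
    "via direct peering": ["AccessISP", "ContentNetwork"],
}

best = min(candidate_paths, key=lambda route: len(candidate_paths[route]))
print(f"Preferred route: {best} ({len(candidate_paths[best]) - 1} inter-network hops)")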


Since distance contributes to latency, content delivery networks are used to put content closer to actual end users. Edge computing and server colocation are forms of this strategy. 
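
A rough illustration of the distance effect, with the distances and fiber propagation speed assumed for the sake of the example:

# Why CDNs and edge computing help: moving content from a distant origin
# to a nearby node shrinks the propagation component of latency.
FIBER_SPEED_KM_S = 200_000  # light travels at roughly two-thirds of c in glass

def rtt_propagation_ms(distance_km: float) -> float:
    return 2 * distance_km / FIBER_SPEED_KM_S * 1000

print(f"Origin 4,000 km away: {rtt_propagation_ms(4_000):.0f} ms of propagation delay")
print(f"Edge node 50 km away: {rtt_propagation_ms(50):.1f} ms of propagation delay")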


Traffic shaping is another possible tactic, giving some classes of traffic priority over other, less latency-sensitive traffic. Prioritizing videoconferencing, voice, virtual reality or gaming bits is one example. 
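
A toy sketch of the strict-priority idea is below. The traffic classes and packets are illustrative only, and this is not a description of Comcast's actual implementation.

from collections import deque

# Two illustrative traffic classes: latency-sensitive and bulk
queues = {"realtime": deque(), "bulk": deque()}

def enqueue(packet, traffic_class):
    queues[traffic_class].append(packet)

def dequeue():
    # Serve the real-time queue first; bulk traffic waits whenever
    # latency-sensitive packets are present.
    for traffic_class in ("realtime", "bulk"):
        if queues[traffic_class]:
            return queues[traffic_class].popleft()
    return None

enqueue("video-frame-1", "bulk")
enqueue("game-input-1", "realtime")
print(dequeue())  # prints "game-input-1" even though it arrived second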


Other methods to avoid excessive buffering or congestion also help. It is not clear which of these techniques Comcast will use, but a reasonable guess is “all of the above.”
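
On the buffering point specifically, the underlying idea is to keep queues short so packets do not wait behind a deep backlog. A toy sketch follows; the threshold is illustrative and is not any specific algorithm Comcast has announced.

from collections import deque

MAX_QUEUE_PACKETS = 50  # illustrative threshold; beyond this, queuing delay outweighs throughput gains

buffer = deque()

def admit(packet):
    """Drop (or mark) early rather than let the buffer grow without bound."""
    if len(buffer) >= MAX_QUEUE_PACKETS:
        return False  # early drop keeps queuing delay bounded
    buffer.append(packet)
    return True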


