
Thursday, November 3, 2011

Verizon API Will "Turbo" Mobile Broadband

Verizon will publish an application programming interface that could allow mobile consumers to "turbocharge" the network bandwidth their smartphone apps use, presumably for a small additional fee.



"I think one of the things that you could do is guaranteed quality of service," said Hugh Fletcher, associate director for technology in Verizon's Product Development and Technology team. 
"One of the things that we are right now is very democratic in terms of allocating spectrum and bandwidth to users. And just because you request a high quality of service doesn't mean you're gonna get it. [The network] will try to give it to you, but if there's a lot of congestion, a lot of people using it, it won't kick people off," said Fletcher. Verizon API To Give Apps 'Turbo'

The network optimization API will likely expose attributes like jitter, latency, bandwidth, and priority to app developers, Fletcher said. 
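
Verizon had not published the API itself, so the following is purely a hypothetical sketch of what a "turbo" request exposing those four attributes might look like. The endpoint, token, and field names are all invented for illustration; only the attribute list comes from Fletcher's comments.

```python
import json
import urllib.request

# Hypothetical sketch only: the endpoint, token, and field names below
# are invented. The requested_qos fields mirror the attributes Fletcher
# mentioned: jitter, latency, bandwidth, and priority.
API_URL = "https://api.example-carrier.com/v1/qos/sessions"  # invented
API_TOKEN = "YOUR_DEVELOPER_TOKEN"                           # placeholder

request_body = {
    "app_id": "com.example.videochat",
    "requested_qos": {
        "max_latency_ms": 100,       # latency ceiling the app wants
        "max_jitter_ms": 20,         # tolerable delay variation
        "min_bandwidth_kbps": 2000,  # floor for sustained throughput
        "priority": "high",          # a hint, not a guarantee
    },
    "duration_seconds": 600,
}

req = urllib.request.Request(
    API_URL,
    data=json.dumps(request_body).encode("utf-8"),
    headers={"Authorization": f"Bearer {API_TOKEN}",
             "Content-Type": "application/json"},
)

print(json.dumps(request_body, indent=2))
# To actually submit: urllib.request.urlopen(req). But as Fletcher notes,
# under congestion the network may refuse or degrade the grant rather
# than kick other users off.
```

Whatever shape the real API takes, the key point from Fletcher's comments survives: priority is a request the network weighs against everyone else's traffic, not a switch the app controls.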


Despite expected complaints from some network neutrality advocates, there is a reason such an API might provide clear value to end users. Many of you use 3G or 4G networks, with different air interfaces, to run interactive cloud applications. If you do that often enough, on many networks, you will have discovered the experience problem caused by latency.


Where older GPRS or EDGE data networks featured round-trip latencies in the 600 msec to 700 msec range, LTE networks feature round-trip latencies in the 50 msec range.
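
A back-of-the-envelope calculation shows why that gap matters for interactive apps. The sketch below uses the round-trip times cited above; the count of sequential round trips is an illustrative assumption, but an operation needing 15 of them spends close to ten seconds waiting on an EDGE-class network versus under a second on LTE.

```python
# Back-of-the-envelope: total wait time for an interactive operation
# that requires several sequential network round trips. RTT figures
# are the ones cited above; the round-trip count is illustrative.

RTT_MS = {"GPRS/EDGE": 650, "LTE": 50}   # approximate round-trip times
SEQUENTIAL_ROUND_TRIPS = 15              # e.g. DNS + TLS + page assets

for network, rtt in RTT_MS.items():
    wait_seconds = rtt * SEQUENTIAL_ROUND_TRIPS / 1000
    print(f"{network:10s}: {wait_seconds:.1f} s spent waiting on latency alone")
```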


One of the important elements of a cloud-delivered application experience is latency performance, even though we most often think of "bandwidth" as being the key "experience" parameter. 


Some might say the key benefits will be for gaming apps, but other interactive apps, even those not intrinsically dependent on "real-time" protocols, can suffer from mobile latency.

Latency issues




Monday, August 23, 2010

"More Bandwidth" Does Not Necessarily Cure Latency Issues

Though it can help, increasing bandwidth on any network experiencing latency issues does not necessarily fix the problem.

Congestion and forwarding delay are the more important types of latency on a network, and they are not entirely independent. As a network element is subjected to heavy load, it may need additional queue time to handle and process the increased volume of traffic, which causes forwarding delay.

But there are other sources as well. Serialization delay is the most constant, having only a small influence on end-to-end latency. Propagation delay, typically stable in circuit-switched networks, can be irregular over routed networks, where path changes introduce jitter.

Network congestion can have a large impact on end-to-end latency, affecting both forwarding delay and pure congestion (queuing-related) delay.

Reducing traffic bottlenecks therefore is a key part of network management and design. Increasing capacity (available bandwidth) should, at least in theory, help reduce congestion when applied to network “pinch points”.

However, increasing throughput does not always lead to the expected decrease in latency, even if congestion is reduced. Results will vary depending on implementation, network architecture, traffic patterns, and a number of other factors.
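
The interplay is easy to see in a toy model. The sketch below (illustrative numbers, with an M/M/1 queue standing in for congestion) splits one-hop latency into serialization, propagation, and queuing delay: going from 100 Mbps to 1 Gbps empties the queue, but total latency barely moves, because the propagation floor is untouched.

```python
# Minimal sketch: one-hop latency decomposed into serialization,
# propagation, and queuing delay, using an M/M/1 queue as a stand-in
# for congestion. All numbers are illustrative.

PACKET_BITS = 1500 * 8          # one 1500-byte packet
PROPAGATION_MS = 10.0           # fixed by distance, not bandwidth
ARRIVAL_PPS = 7_000             # offered load, packets per second

def one_hop_latency_ms(bandwidth_bps: float) -> float:
    serialization_ms = PACKET_BITS / bandwidth_bps * 1000
    service_pps = bandwidth_bps / PACKET_BITS   # packets/sec the link can drain
    # M/M/1 mean time in system: 1 / (mu - lambda), valid while mu > lambda
    queuing_ms = 1000 / (service_pps - ARRIVAL_PPS)
    return serialization_ms + PROPAGATION_MS + queuing_ms

for mbps in (100, 200, 1000):
    print(f"{mbps:5d} Mbps link: {one_hop_latency_ms(mbps * 1e6):6.2f} ms (one hop)")
```

At 100 Mbps the link runs at roughly 84 percent load and queuing is visible; at 1 Gbps queuing is negligible, yet end-to-end latency improves by well under a millisecond. More bandwidth cured the pinch point, not the latency.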

white paper here

Wednesday, August 4, 2010

Clearwire Announces LTE Tests

The other shoe has not yet formally dropped, but Clearwire now says it will conduct Long Term Evolution tests across its network, including tests of both frequency division and time division versions of LTE, plus the ability of LTE air interface technologies to coexist with the WiMAX air interface already in use.

The tests do not definitively confirm a partial switch to LTE, but are a concrete bit of evidence that LTE will be part of Clearwire's future.

Clearwire intends to conduct FDD LTE (Frequency Division Duplex) tests using 40 MHz of its 2.5 GHz spectrum, paired in contiguous 20 MHz channels. Clearwire expects to confirm the capability to produce real-world download speeds that range from 20 Mbps to 70 Mbps. This is expected to be significantly faster than the 5 Mbps to 12 Mbps speeds currently envisioned by other LTE deployments in the U.S., which will rely on smaller pairs of 10 MHz channels or less.
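
The speed claims follow roughly from channel arithmetic: for a given spectral efficiency, throughput scales about linearly with channel bandwidth, so a 20 MHz carrier should deliver roughly twice what a 10 MHz carrier does. A rough sketch, where the efficiency range is an illustrative assumption rather than a Clearwire figure:

```python
# Rough channel arithmetic: throughput ~= channel width x spectral
# efficiency. The efficiency range (bits/s/Hz in poor vs. good radio
# conditions) is an illustrative assumption, not a Clearwire number.

SPECTRAL_EFFICIENCY = (1.0, 3.5)   # assumed b/s/Hz, cell edge to near-site

for channel_mhz in (10, 20):
    low, high = (channel_mhz * e for e in SPECTRAL_EFFICIENCY)
    print(f"{channel_mhz} MHz channel: roughly {low:.0f}-{high:.0f} Mbps")
```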

Clearwire will concurrently test TDD LTE (Time Division Duplex), in a 20 MHz configuration, which is twice the channel size currently used in its 4G WiMAX deployments.

Clearwire will also test WiMAX co-existence with both FDD LTE and TDD LTE to confirm the flexibility of its network and spectrum strength to simultaneously support a wide range of devices across its all-IP network.

My own anecdotal experience with Clearwire's network is that, as you would expect, 4G is faster than 3G. But I have to say my experience also points out how much end user application latency is to be found elsewhere in the delivery ecosystem, such as the far-end servers. I also would observe that the 4G network signal seems more fragile than the 3G signal. Even in areas with both 4G and 3G available, the 4G often loses enough signal strength that my smartphone defaults back to 3G.

I'm not complaining, just noting that, as with many earlier increases in access bandwidth, faster is better, up to a point. If nothing else, having more access bandwidth simply points out latency elsewhere in the ecosystem.

Tuesday, June 29, 2010

Is Bit Prioritization Necessary?

Network neutrality proponents, especially those supporting the "strong" forms such as an outright ban on any bit priorities, believe that next generation networks will have ample bandwidth to support all real-time services without the need for prioritizing mechanisms.

Users of enterprise networks might react in shock to such notions, as shared networks often encounter latency and bandwidth constraints that are overcome precisely by policy control. And despite increasing bandwidth on mobile networks, users and network service operators already know that congestion is a major problem.
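
On managed networks, that policy control is typically enforced at the edge, but applications can also ask for priority by marking their own packets. A minimal sketch, assuming a platform that exposes IP_TOS and a network that honors DSCP marks (most consumer ISPs strip or ignore them):

```python
import socket

# Expedited Forwarding (EF) is the DSCP class conventionally used for
# real-time traffic such as voice. DSCP 46 occupies the upper six bits
# of the IP TOS byte, hence the shift.
DSCP_EF = 46
TOS_EF = DSCP_EF << 2   # 0xB8

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_EF)

# Packets from this socket now carry the EF mark; whether routers along
# the path actually prioritize them is a policy decision, which is
# exactly what the net neutrality debate is about.
sock.sendto(b"probe", ("192.0.2.10", 5004))   # RFC 5737 example address
```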

The evidence also does not support the notion that applications are unaffected by congestion, or that running two or more applications at once creates no externalities that impair real-time application performance.

"I measured my jitter while using Netflix (Jitter occurs when an application monopolizes a router’s transmit queue and demands that hundreds of its own packets are serviced before any other application gets a single packet transmitted) and found an average jitter of 44 milliseconds and a worse case that exceeds 1000 ms," says Ou.

Wednesday, June 23, 2010

World Cup Affects U.K. Latency, Packet Loss


As you would expect, the World Cup has driven up video consumption. Timico, a U.K. Internet access provider, set a new record for video usage online, with usage up 309 percent over average. So what's happened on the technical side that affects end user experience?


Higher latency and packet loss seem to be the main effects. "Some users may only see a slight increase in latency or a small amount of packet loss whilst for others latency has quadrupled and packet loss is in the region of five percent," says ThinkBroadband.

In one test, average latency roughly quadrupled, from around 28 milliseconds to approximately 120 milliseconds at peak World Cup time, and packet loss increased as well.

The hit to user experience is video occasionally breaking up or freezing from packet loss, while websites load more slowly.
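
The slower page loads follow directly from how TCP reacts to loss and delay. Plugging the reported figures into the well-known Mathis et al. approximation for steady-state TCP throughput (throughput ≈ MSS / (RTT · √loss), constant factor dropped) shows a single connection's ceiling collapsing; the off-peak loss rate below is an assumed baseline, not a number from the post.

```python
import math

# Mathis et al. approximation for steady-state TCP throughput:
#   throughput <= MSS / (RTT * sqrt(loss))   (constant factor dropped)
MSS_BITS = 1460 * 8   # typical Ethernet-path MSS

def tcp_ceiling_mbps(rtt_ms: float, loss: float) -> float:
    return MSS_BITS / (rtt_ms / 1000 * math.sqrt(loss)) / 1e6

# Off-peak vs. World Cup peak, using the figures quoted above.
# The 0.1% off-peak loss rate is an assumed baseline, not from the post.
print(f"off-peak: {tcp_ceiling_mbps(28, 0.001):5.1f} Mbps per connection")
print(f"peak:     {tcp_ceiling_mbps(120, 0.05):5.2f} Mbps per connection")
```

With those inputs the per-connection ceiling drops from roughly 13 Mbps to under half a megabit, which is more than enough to explain sluggish page loads and stuttering video.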

http://www.thinkbroadband.com/news/4282-can-the-uk-broadband-network-cope.html
