
Wednesday, October 6, 2010

Internet Traffic up 62% in 2010

Global Internet traffic has grown 62 percent in 2010, after logging 74 percent growth in 2009.

The growth in traffic is coming from non-mature markets like Eastern Europe and India, where traffic growth between mid-2009 and mid-2010 was in excess of 100 percent, TeleGeography notes.

In the Middle East, traffic rose just under 100 percent. Traffic in mature markets also is growing rapidly: Western European international Internet traffic increased 66 percent, while U.S. and Canadian international Internet traffic climbed 54 percent.

In some ways, the growth rates are not news. What would be news is if traffic demand did not grow by about 60 percent.
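
For perspective, compounding the two reported annual rates shows how quickly "unremarkable" growth stacks up. A quick back-of-the-envelope sketch in Python, using only the figures above:

    # Compound the two reported annual growth rates
    growth_2009 = 0.74   # 74 percent growth in 2009
    growth_2010 = 0.62   # 62 percent growth in 2010

    multiplier = (1 + growth_2009) * (1 + growth_2010)
    print(f"Traffic is roughly {multiplier:.1f}x its level of two years ago")  # ~2.8x

Two years of routine growth nearly triples total traffic.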

Saturday, June 12, 2010

iPad Internet Usage Patterns Compared to Smartphone and PC

Normalizing Internet appliance behavior by setting Apple iPhone usage as the baseline shows pretty clearly that smartphone web behavior is distinct from PC usage.

So far, iPad usage (page views) is roughly twice what iPhone usage typically is, but less than what people tend to do on either Windows or Macintosh PCs.

Page views aren't the same thing as "bandwidth consumed," but you can see the pattern: desktop usage is heavier than smartphone usage.

One suspects today's PC dongle user has a usage pattern more similar to an iPad user's than to a desktop user's. Most of us probably think page views and bandwidth usage will intensify over time on every platform, but that the disparity between PC desktop and "phone" behavior will remain.

There likely are some people who view more web pages on their phones than on their desktops. Generally speaking, though, heavier use occurs on a PC, while smartphone usage is much lower, volume-wise.

Monday, April 19, 2010

YouTube Consumes 10% of Business Bandwidth, Study Finds

YouTube now consumes about 10 percent of business network bandwidth, while Facebook represents 4.5 percent of all consumed bandwidth, a new study by Network Box finds.

Windows updates represent about 3.3 percent of all bandwidth used, Yimg (Yahoo!'s image server) 2.7 percent and Google 2.5 percent.

When looking at traffic, rather than bandwidth consumption, Facebook is the top site visited on business networks. Network Box's analysis of 13 billion uniform resource locators (URLs) used by businesses in the first quarter of 2010 shows that 6.8 percent of all business Internet traffic goes to Facebook, an increase of one percentage point since the last quarter of 2009.

Google visits represent 3.4 percent of all traffic, Yimg 2.8 percent, Yahoo 2.4 percent and DoubleClick about 1.7 percent.

The company also found that, of 250 IT managers surveyed about their biggest security concerns over the coming year, the top concern was "employees using applications on social networks" while at work, with 43 percent of respondents saying this is a major concern.

"The figures show that IT managers are right to be concerned about the amount of social network use at work," says Simon Heron, Network Box internet security analyst says.

Such measurements always are a bit imprecise, not in terms of URLs visited or bandwidth consumed, but in terms of distinguishing business from personal use. Business users increasingly watch YouTube videos for work, while some Facebook use undoubtedly reflects business purposes as well.

Monday, February 22, 2010

Mobile Signaling Causes Congestion, Not Bandwidth

Executives highly familiar with mobile broadband network operations know that radio networks can, and do, become congested for reasons having to do with signaling, rather than bandwidth consumption. Executives at Spirent and Alcatel-Lucent Bell Labs, for example, have pointed out that mobile phone design can itself cause problems.

As it turns out, that is true of the iPhone as well, which tries to save power by disconnecting from the network whenever possible.

Now engineers at U.K. mobile provider O2 point out that the iPhone uses more power-saving features than previous smartphone designs. That's good for users, but bad for radio networks.

Most devices that use data do so in short bursts—a couple e-mails here, a tweet there, downloading a voicemail message, etc. Normally, devices that access the data network use an idling state that maintains the open data channel between the device and the network.

However, to squeeze even more battery life from the iPhone, Apple configured the radio to simply drop the data connection as soon as any requested data is received. When the iPhone needs more data, it has to set up a new data connection, O2 engineers say.

The result is more efficient use of the battery, but it can cause problems with the signaling channels used to set up connections between a device and a cell node. Simply put, it is signaling load, not bandwidth consumption, that congests the network.
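
To see why drop-and-reconnect behavior matters, consider a rough illustration. The message counts below are assumptions chosen for illustration, not actual 3GPP procedure figures:

    # Illustrative sketch: assumed message counts, not real 3GPP figures
    SETUP_MSGS = 30        # assumed signaling messages per radio connection setup
    BURSTS_PER_HOUR = 60   # assumed short data bursts (e-mail checks, tweets) per hour

    # Handset keeps an idle data channel open: one setup, then channel reuse
    persistent = SETUP_MSGS

    # Handset drops the connection after every burst: a full setup each time
    drop_and_reconnect = SETUP_MSGS * BURSTS_PER_HOUR

    print(f"Persistent idle channel: {persistent} signaling messages per hour")
    print(f"Drop-and-reconnect: {drop_and_reconnect} signaling messages per hour")

Whatever the exact per-setup figures, the multiplier is the point: each tiny burst pays the full connection-setup cost in signaling, even though the bearer traffic involved is trivial.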

It's important to note, however, that this technique is not limited to the iPhone. Android and webOS devices also use a similar technique to increase battery life. While the iPhone was the first and currently most prolific device of this type, such smartphones are quickly becoming common, and represent the majority of growth in mobile phone sales in the past year.

Networks designed to handle signaling traffic dynamically, shifting more spectrum to signaling channels when needed, can mitigate this problem. But even with more signaling capacity, network nodes may not be able to set up a data session, or may have problems getting a valid network address from an overloaded DHCP server.

Notably, because Europe embraced heavy text messaging and data use far earlier than the United States did, signaling networks there were configured early on for heavy signaling traffic.

Tuesday, February 16, 2010

10,000% Mobile Bandwidth Increase by 2015 is Just One Problem, Nokia Siemens Says

Mobile data from smart devices will increase 10,000 percent by 2015, says Rajeev Suri, Nokia Siemens Networks CEO. And that's only part of the problem. The other issue is signaling overhead, apart from bearer traffic, that can tie up radio ports even when not much actual bandwidth is being consumed.

That's one reason "adding capacity" does not solve all congestion issues in a mobile network. Congestion also can occur when signaling load is heavy. But bandwidth growth is an issue.

Nokia Siemens Networks predicts that by 2015, annual mobile data traffic will reach 23 exabytes, equivalent to 6.3 billion people each downloading a digital book every day.
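
That comparison is easy to sanity-check. A sketch in Python, assuming a roughly 10-megabyte file as the size of an illustrated digital book download:

    # Sanity-check the "digital book a day" comparison (book size assumed)
    EXABYTE = 10**18
    annual_traffic_bytes = 23 * EXABYTE   # predicted 2015 mobile data traffic
    people = 6.3e9

    per_person_per_day = annual_traffic_bytes / 365 / people
    print(f"{per_person_per_day / 1e6:.0f} MB per person per day")  # ~10 MB

About 10 megabytes per person per day, which is indeed in the range of a digital book.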

Pure capacity is just one issue, however; intermittent connectivity, shifting locations and signaling overhead are problems as well.

As an example, having predicted the current surge in smarter mobile devices, Nokia Siemens Networks is the only vendor to have built into its networks an industry standard, already common in smartphones, that reduces unprofitable, congestion-causing signaling by a factor of three while increasing smart device battery life, says Suri.

Monday, December 21, 2009

Video Represents 99% of Consumer Information Consumption

Reduced to bytes, U.S. consumers in 2008 imposed an information transfer "load" of about 34 gigabytes a day, say Roger E. Bohn, director, and James E. Short, research director, of the Global Information Industry Center at the University of California, San Diego. That works out to about seven DVDs worth of data a day.
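
The DVD comparison checks out, assuming a standard 4.7 GB single-layer disc:

    # Check the "seven DVDs a day" comparison (single-layer capacity assumed)
    DVD_GB = 4.7            # single-layer DVD capacity, gigabytes
    daily_load_gb = 34      # reported per-consumer daily load
    print(f"{daily_load_gb / DVD_GB:.1f} DVDs per day")  # ~7.2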

And that isn't even the most-significant potential implication. We are used to hearing about consumption of media or information in terms of "time," such as hours consumed each day. But Bohn and Short also look at information flows in terms of "bandwidth."

If one looks at consumption based on the "hours of use," video accounts for possibly half of total daily consumption.

If one looks at the flows in terms of compressed bytes, or actual bandwidth required to deliver the information, then video represents 99 percent of the flow volume.

That has huge implications for the design of any nation's communications and "broadcasting" networks. To the extent that virtually all information now is coded in digital form, a shift of consumption modes (from watching linear satellite, cable or telco TV to Internet delivery) can have huge effects.

Recall that video bits now represent 99 percent of bandwidth load. But also note that most of that load is delivered in the most-efficient way possible, by multicasting a single copy of any piece of information to every potential consumer all at once. It requires no more bandwidth to serve up an event watched by 500 million people than by one person.

That is why video and audio networks historically have been designed as "multicast" networks. They are the most efficient way of delivering high-bandwidth information.

If more video starts to move to Internet delivery, the bandwidth requirements explode. To deliver one identical piece of content to 500 million Internet users requires 500 million times as much bandwidth as the "old" multicast method, at least in the access links. If network architects are ruthlessly efficient and can cache such content at the edge of the network, wide area bandwidth consumption is reduced and the new load is seen primarily on the access networks.
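
The scale of that difference is worth making concrete. A sketch assuming a 5 Mbps stream per viewer (the stream rate is an assumption; the 500 million viewers come from the example above):

    # Multicast versus unicast delivery of one piece of content
    STREAM_MBPS = 5                       # assumed per-viewer stream rate
    viewers = 500_000_000

    multicast_load = STREAM_MBPS          # one copy serves every viewer
    unicast_load = STREAM_MBPS * viewers  # one copy per viewer

    print(f"Multicast: {multicast_load} Mbps")
    print(f"Unicast: {unicast_load / 1e6:,.0f} Tbps")  # 2,500 Tbps

Edge caching moves that unicast load off the wide area network, but every access link still carries its own copy.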

All of this suggests a rational reason for maintaining "multicast" video entertainment networks, rather than shifting all consumption to unicast Internet delivery, which is extremely inefficient and wasteful of network resources. To the extent that much "on demand" viewing of popular professional content can be satisfied by local storage (digital video recorders), this should be done.

On-demand viewing of YouTube content is harder to rationalize in that manner. For the same reason, local storage of computer games, where possible, makes sense. Interactive, "live" gaming does not allow such offloading, and will contribute hugely to Internet bandwidth demand, just as viewing of YouTube videos is doing.

"Information," representing flows of data delivered to people from 20 sources, is likely to be much higher the next time the researchers replicate the study, because television, which accounts for nearly half of total consumption, has now shifted from analog NTSC to high-definition, which imposes a greater information load.

Television represents about 41 percent of daily consumption hours, while computer and video games represent 55 percent of the byte flow. Add radio to television and those two sources account for about 61 percent of hours consumed.

But there is another important implication: the researchers counted "compressed" information, or "bandwidth," in addition to more-familiar metrics such as hours of consumption.

Looked at in this way, the researchers say, the data "led to a big surprise." In fact, only three activities (television, computer games and movies) account for 99 percent of the flow. All other sources, including books, mobile or fixed voice, newspapers, radio or music, contribute only one percent of total load.

The researchers also point out that they count bytes as part of the "information flow" only when users actually consume the information. Data stored on hard drives, or TV and radio signals not being watched or listened to, do not count in the research methodology.

The researchers also point out that if “personal conversation” is considered a source of information, then high-quality "tele-presence" applications that actually mimic talking to a person in the same room would require about 100 Mbps worth of communications load.

Three hours of personal conversation a day at this bandwidth would be 135 gigabytes of information, about four times today's average daily consumption.
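
The arithmetic behind that figure is straightforward:

    # 100 Mbps telepresence stream, three hours a day
    rate_bps = 100e6                   # 100 Mbps
    seconds = 3 * 3600                 # three hours
    gigabytes = rate_bps * seconds / 8 / 1e9
    print(f"{gigabytes:.0f} GB per day")                     # 135 GB
    print(f"{gigabytes / 34:.1f}x the 34 GB daily average")  # ~4x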

Saturday, October 31, 2009

Will Moore's Law "Save" Bandwidth Providers, ISPs?

In the personal computer business there is an underlying assumption that whatever problems one faces, Moore's Law will provide the answer: simply wait 18 months, and twice the processing power or memory will be available at the same price.

For a business where processing power and memory actually will solve most problems, that is partly to largely correct.

For any business where most or almost all of the cost has nothing to do with the prices or capabilities of semiconductors, Moore's Law helps, but does not solve the problem of continually growing bandwidth demand and continually decreasing revenue per bit earned for supplying higher bandwidth.

That is among the fundamental problems network transport and access providers face. And Moore's Law is not going to solve the problem of increasing bandwidth consumption, says Jim Theodoras, ADVA Optical director of technical marketing.

Simply put, most of the cost of increased network throughput is not caused by the prices of underlying silicon. In fact, network architectures, protocols and operating costs arguably are the key cost drivers these days, at least in the core of the network.

The answer to the problem of "more bandwidth" is partly "bigger pipes and routers." There is some truth to that notion, but not complete truth. As bandwidth continues to grow, there is some point at which the "protocols can't keep up, even if you have unlimited numbers of routers," says Theodoras.

The cost drivers lie in bigger problems such as network architecture, routing, backhaul, routing protocols and personnel costs, he says. One example is that there often is excess and redundant gear in a core network that simply is not being used efficiently. In many cases, core routers run at only 10 percent of capacity. Improving utilization to 80 percent or 100 percent potentially offers an order of magnitude better performance from the same equipment.

Likewise, automated provisioning tools can reduce provisioning time by 90 percent or more, he says. And since "time is money," operating cost for some automated operations also can be cut by an order of magnitude.

The point is that Moore's Law, by itself, will not provide the solutions networks require as they keep scaling bandwidth under conditions where revenue does not grow linearly with the new capacity.

Friday, October 23, 2009

How Long Will 40 Gbps, 100 Gbps Networks Last?

The problem with networks is that they do not last as long as they used to, which means they need to be upgraded more frequently, which in turn means the ability to raise capital for upgrades is a bigger issue than it once was.

Qwest CTO Pieter Poll, for example, notes that Qwest's bandwidth now grows 45 percent compounded annually, nearly doubling every two years. That in itself is not the big problem, though. The issue is that the consumers driving most of that new consumption do not expect to pay more for it.
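
The doubling-time claim is easy to verify with a quick Python sketch:

    import math

    # Doubling time implied by 45 percent compound annual growth
    cagr = 0.45
    doubling_years = math.log(2) / math.log(1 + cagr)
    print(f"Traffic doubles every {doubling_years:.1f} years")  # ~1.9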

"From my perspective, the industry really needs to focus on tracking down cost per bit at the same rate, otherwise you'll have an equation that's just not going to compute," says Poll. Whether on the capital investment or operating cost fronts, adjustments will have to be made, one concludes.

Still, raw bandwidth increases are not insignificant. "If you look at 2008 for us it was unprecedented in terms of the work we did in the backbone," says John Donovan, AT&T CTO. "The capacity we carried in 2008, five years out, will be a rounding error."

Donovan notes that AT&T's 2 Gbps backbone lasted seven years and the 10 Gbps backbone lasted five years, while the 40 Gbps backbone will last three years.

By historical example, one wonders whether 100 Gbps networks might last as little as two years before requiring upgrades.

Donovan suggests carriers will have to rethink how they design networks, how routing is done and how content bits get moved around. One suspects there might be more use of regional or local caches, to avoid having so many bits traverse the entire backbone network.

Monday, March 24, 2008

Bandwidth Demand: Increasing Faster than Moore's Law

The thing about technological change is that lots can change underfoot without people really noticing it. Then some point is reached where the accumulated weight of those changes causes a tipping point. And we might be watching such a tipping point in business broadband.

You'd be hard-pressed to find much widespread evidence of the trend if you look at what small businesses are buying, but among enterprises, "T1 and DS0 already are starting to go away," says Pieter Poll, Qwest chief technology officer. "More and more people are preferring metro Ethernet at the high end, so low-speed private line revenue and demand is decreasing."

At some point that will start to be a bigger, or more noticeable trend within the smaller and mid-sized business market as well, simply because the bandwidth intensity of modern business and consumer applications is increasing.

Average 2007 IP traffic was over 9,000 terabytes a day in the consumer segment, for the public Internet. The average in 2012 will be over 21,000 terabytes a day, Poll says.

Qwest itself "sees our data networks doubling traffic every 16 months," Poll notes. "That's a faster rate of increase than Moore's Law."
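
Converting both doubling periods to annual rates makes the comparison concrete, taking the commonly cited 18-month Moore's Law doubling period mentioned above:

    # Annualized growth implied by each doubling period
    traffic_growth = 2 ** (12 / 16) - 1   # 16-month doubling: ~68% per year
    moore_growth = 2 ** (12 / 18) - 1     # 18-month doubling: ~59% per year
    print(f"Traffic: {traffic_growth:.0%} per year")
    print(f"Moore's Law: {moore_growth:.0%} per year")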

"There are just more customers, more are wireless and content also is shifting to richer media," he says.

"Our residential broadband base grows traffic 39 percent annually, no matter what size pipe they buy," Poll notes.

All of which has got Qwest's planners looking for ways to grow bandwidth faster. "We are doomed over time if bandwidth demand grows faster than Moore's Law," says Poll.

So how does Qwest do that? IP directly over optical waves, where the router and the optical transmitter are one device. Meshing the edge devices also helps, as it reduces backbone network hops, and hence bandwidth usage.

Poll also thinks major backbone providers will start swapping fiber to gain greater topological diversity and improve protection from fiber cuts.

As for his views on where the next increment of backbone bandwidth will come from, Poll notes that some carriers want 40 Gbps equipment, though he personally thinks running 10 GigE waves is more affordable. Still, "that could change soon," says Poll.

The bigger issue for him is that 40 Gbps will be stranded investment when 100 Gbps equipment is available.

"My personal feeling is that 100 Gbps is the step we want," says Poll.
