Saturday, May 18, 2013

Two Orders of Magnitude More Access Speed Within 10 Years? History Says "Yes"

If history is any guide, gigabit Internet access will not be at all rare in a decade, though how much demand there will be for 1-Gbps service might well hinge on retail pricing levels.

In August 2000, only 4.4 percent of U.S. households had a home broadband connection, while 41.5 percent of households had dial-up access. A decade later, in 2010, dial-up had declined to 2.8 percent of households, while 68.2 percent of households subscribed to broadband service.

If you believe gigabit access is to today’s broadband as broadband was to dial-up access, and you believe the challenge of building gigabit networks roughly corresponds to the creation of broadband capability, a decade might be a reasonable estimate of how long it will take before 70 percent of U.S. homes can buy gigabit access service, and possibly 50 percent do so.

Consider that by June 2012 about 75 percent of U.S. households could buy a service of at least 50 Mbps, while half could buy service at 100 Mbps. So it took about a decade to put in place widely available access roughly two to three orders of magnitude faster than the dial-up baseline.
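As a rough sanity check, a bit of arithmetic shows the size of those jumps. The 56-kbps dial-up figure and the roughly 10-Mbps "typical purchased tier today" are assumptions chosen for illustration, not numbers from the surveys cited above.

```python
import math

# Assumed speeds, for illustration only (not figures from the cited surveys)
dialup_kbps = 56               # typical dial-up modem speed
broadband_2012_kbps = 50_000   # 50 Mbps, purchasable by ~75% of U.S. homes in 2012
typical_today_kbps = 10_000    # assumed typical purchased tier today
gigabit_kbps = 1_000_000       # 1 Gbps

def orders(ratio):
    """Express a speed ratio as orders of magnitude."""
    return math.log10(ratio)

jump_past = broadband_2012_kbps / dialup_kbps
jump_next = gigabit_kbps / typical_today_kbps

print(f"dial-up -> 50 Mbps: {jump_past:,.0f}x (~{orders(jump_past):.1f} orders of magnitude)")
print(f"10 Mbps -> 1 Gbps:  {jump_next:,.0f}x (~{orders(jump_next):.1f} orders of magnitude)")
```

On those assumptions, the dial-up-to-50-Mbps jump was nearly three orders of magnitude in roughly a decade, while the move to gigabit from today's typical tier is about two.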

The key distinction is between “availability” and “take rate.” Even though consumers are starting to buy faster access services, most seem to indicate, by their buying behavior, that 20 Mbps or 25 Mbps is “good enough,” when it is possible to buy 50 Mbps or 100 Mbps service.

In the U.K. market, for example, though service at 30 Mbps was available to at least 60 percent of homes, buy rates in mid-2012 were only about seven percent (to say nothing of demand for 100 Mbps).

The European Union country with the highest penetration of such services was Sweden, at about 15 percent, in mid-2012.

To be sure, retail prices are an issue. With relatively few exceptions, U.S. consumers tend to buy services up to 25 Mbps, and price for a gigabit service is probably the main reason.

That is the reason behind Google Fiber's disruptive pricing for gigabit access at $70 a month. That pricing umbrella implies far lower prices for 100 Mbps service than consumers can buy at the moment.

And that is every bit as important as the headline gigabit speed. If a gigabit connection costs $70 a month, will a 100-Mbps connection cost $35 a month?
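One way to see the umbrella is price per Mbps per month. The figures below are illustrative: the $35 tier is the hypothetical raised above, and the $50 price for a 20-Mbps tier is simply an assumed incumbent price point.

```python
# Illustrative price-per-Mbps comparison; only the $70 gigabit price is an actual offer.
tiers = {
    "Google Fiber 1 Gbps":        (70.0, 1000),  # ($/month, Mbps)
    "Hypothetical 100 Mbps tier": (35.0, 100),
    "Assumed 20 Mbps tier":       (50.0, 20),    # assumed incumbent price point
}

for name, (price, mbps) in tiers.items():
    print(f"{name}: ${price / mbps:.2f} per Mbps per month")
```

On that arithmetic, even a $35 price for 100 Mbps would still be five times the per-megabit price of the gigabit tier.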

Friday, May 17, 2013

Fixed Networks are for "Capacity," Mobile Networks are for "Coverage"


These days, in many markets, people using smart phones are on fixed networks for Internet access more than on the mobile network.

In North America, as much as 82 percent of Internet traffic from smart phones reportedly occurs indoors, where users can take advantage of Wi-Fi instead of the mobile network.


In Western Europe, as much as 92 percent of Internet usage from smart phones occurs indoors.

So to a large extent, that means the fixed network provides “capacity,” while the mobile network provides “coverage,” a statement that describes the two ways a small cell can provide value for an urban mobile network as well.

Wi-Fi offload happens mostly in the office and the home. Some small cells will include Wi-Fi access, but the bulk of Internet activity still occurs indoors, not outdoors, where small cells will reinforce the mobile macrocell network.

Some tier one carriers have moved to create their own networks of public Wi-Fi hotspots, and many can serve customers from owned fixed networks as well. That makes the fixed network and public Wi-Fi a direct part of the mobile network.

In other cases, carriers simply allow their customers' devices to use non-affiliated Wi-Fi networks passively, as when a mobile service provider's customer roams onto a fixed network owned by another service provider.

That is one more example of the loosely-coupled nature of the Internet ecosystem. A mobile provider can offload traffic to another carrier with which it has no formal business relationship. 

Thursday, May 16, 2013

Will TV White Spaces Be Important?


Whether TV white spaces spectrum is going to be a big deal might hinge on how much usable spectrum actually is available in given markets, plus whether manufacturing volumes push retail prices of network elements down to a level where the spectrum supports a reasonable business case.

At a high level, it isn’t going to help as much in urban areas, where interference issues are more constraining.

It might prove quite important in some rural areas, where there is a lot more available bandwidth because there are not enough people living in the region to create incentives for TV broadcasters. In areas where few people live, there might be lots of bandwidth, just few potential users or customers. Every location is different.

While at least 6 MHz is available throughout most of the United States, there are a few locations where much more spectrum is available. Though most of the band cannot be used in most locations, the white spaces band includes 150 MHz of spectrum in total.


Other sources of lightly-regulated or unlicensed spectrum might be made available in the future. And new technologies such as agile radio and ultra-wideband are available, but regulatory action is required to enable their use.

And though the general rule has been that spectrum is licensed to entities for specific purposes, unlicensed spectrum might be crucial.

"Unlicensed" Spectrum Doesn't Always Mean "You Can Use it" Without Paying


It sometimes is easy to forget that becoming an ISP is not as easy in some nations as in others. Consider the matter of “unlicensed spectrum,” for example.

“Unlicensed” spectrum exists. But in about 66 percent of nations, use of that spectrum is not really license-free. Based on responses from 75 countries, 33 percent of national regulators require a license to use the 2.4 GHz or 5 GHz “unlicensed” bands, a study found.

Another 33 percent of national regulators require obtaining an operating license, though not a spectrum license. About 33 percent do not require a license of any type. In a small fraction of cases (two percent) use is forbidden.

"Mix and Match" is one Advantage of Software Defined Networks


If you wanted to rip cost out of any device, app or network, what are some of the things you’d do? You’d remove parts. You’d probably reduce size.

ShoreTel, for example, sells docking stations for iPhones and iPads that allow the iOS devices to act as the CPU, with the docking station providing all the peripheral support.



And that's as good an example as any of how a network designer would try to wring cost out of a network.


You’d rely as much as possible on functionality that could be supplied by other common CPUs. 

You’d pick passive solutions, not active solutions, as often as possible. You’d simplify with an eye to improving manufacturing tasks.

You also would create a “mix and match” capability across specific makes and models of network gear. You’d want to be able to use a variety of network elements, made by many different suppliers, interchangeably.

You’d create a network using common, commodity-priced devices as much as possible.

In other words, you would make devices, networks and apps "as dumb as possible," and as flexible and interchangeable as possible.

If you think about software defined networks, that’s an application of the principle. Not everything about SDN is “as dumb as possible”; only the data plane elements are.

The corollaries are that such approaches create networks that also are “as flexible as possible” and “as affordable as possible.”

You would still want the control plane to be as “smart as possible,” and you could afford to do so, since the key to much of the cost reduction is the ability to share a physical resource across a large number of end users, subtending devices or nodes.

That is why the per-user or per-customer cost of an expensive headend is rather low, as a percentage of the total cost of a network. On the other hand, the total cost of smart CPUs (decoders) used by every end user or customer is so high because there is so little sharing possible: each customer needs one or more decoders.
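A back-of-the-envelope sketch, using invented dollar figures purely for illustration, makes the sharing arithmetic concrete:

```python
# Invented figures, purely to illustrate the effect of sharing a resource
headend_cost = 20_000_000   # one shared headend (hypothetical)
subscribers = 1_000_000     # homes served by that headend (hypothetical)
decoder_cost = 150          # price of one set-top decoder (hypothetical)
decoders_per_home = 2

shared_per_sub = headend_cost / subscribers            # cost divided a million ways
dedicated_per_sub = decoder_cost * decoders_per_home   # cost that cannot be shared

print(f"Headend cost per subscriber:  ${shared_per_sub:,.2f}")    # $20.00
print(f"Decoder cost per subscriber: ${dedicated_per_sub:,.2f}")  # $300.00
```

Even a very expensive shared element costs each customer little; even a cheap dedicated element adds up, because every customer needs one or more.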

That was what drove cable operator Cablevision Systems to adopt a “network digital video recorder” strategy. By centralizing the CPU functions, Cablevision vastly improved the ability to share the cost of processing infrastructure.

The broader principle is that one proven way to reduce cost, increase flexibility and enhance functionality is to separate and centralize control plane (CPU) functions from the data plane functions that are widely distributed throughout a network.

That’s the whole point of software defined networks.
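A toy sketch of that separation might look like the following. It is not modeled on OpenFlow or any particular controller; it simply shows a centralized “smart” control plane handing forwarding rules to “dumb” data plane elements.

```python
# Toy illustration of control/data plane separation; not any real SDN API.

class Controller:
    """Centralized 'smart' control plane: computes and hands out forwarding rules."""
    def __init__(self):
        self.rules = {}  # switch_id -> {destination: out_port}

    def set_route(self, switch_id, destination, out_port):
        self.rules.setdefault(switch_id, {})[destination] = out_port

    def rules_for(self, switch_id):
        return dict(self.rules.get(switch_id, {}))


class Switch:
    """'Dumb' data plane element: just applies whatever rules it was given."""
    def __init__(self, switch_id, controller):
        self.switch_id = switch_id
        self.flow_table = controller.rules_for(switch_id)

    def forward(self, destination):
        return self.flow_table.get(destination, "drop")


controller = Controller()
controller.set_route("sw1", "10.0.0.2", out_port=3)

sw1 = Switch("sw1", controller)
print(sw1.forward("10.0.0.2"))   # 3
print(sw1.forward("10.0.0.9"))   # drop
```

All of the route logic lives in one shared controller; the switches themselves are interchangeable commodity lookup engines.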


What Does Your Business Look Like if Access Bandwidth is Not a Constraint?

There is one thread that underlies thinking and business strategy at firms as disparate as Google, Netflix and Microsoft, namely Moore's Law. Even if communications does not scale in the same way as memory and processing, Moore’s Law underpins progress on the communications front, at least in terms of signal compression, the power of network elements and cost of those elements and systems built on those building blocks.  


As Intel CEO Paul Otellini tells the story, Moore’s Law also implied an inverse relationship between volume and price per unit. Over time, processing and memory got more powerful and cheaper at a steady, compounding rate.


The implication for Intel was that it would have to shift from producing small numbers of components selling for high prices to a market defined by very large numbers of very cheap components. “Towards ultra-cheap” is one way to describe the progression of retail prices.
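As a rough illustration of that progression, assume the cost per unit of computing halves about every two years; that halving period is an approximation, not a figure from Otellini.

```python
# Approximation: cost per unit of computing halves roughly every two years
initial_cost_index = 100.0
halving_period_years = 2

for years in range(0, 11, 2):
    cost = initial_cost_index * 0.5 ** (years / halving_period_years)
    print(f"year {years:2d}: cost index {cost:6.2f}")
# After a decade the index falls by roughly 32x: "towards ultra-cheap."
```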

You might argue that assumption also drove Microsoft’s decisions about its software business (“what does my business look like if computing hardware is very cheap?”), the confidence Netflix had that broadband would support high-quality streaming (“Will access bandwidth be where it must to support our streaming business?”) and the many decisions Google makes about the ability to support software-based businesses using advertising.

You might argue that the emergence of cloud computing is reshaping existing content and software businesses precisely because of the question “what would my business look like if access bandwidth were not a constraint?”

For Intel, the implications were a radical change in component pricing, reflected back into the way the whole business has to be organized.


Ubiquiti illustrates a related principle, namely the role of disruptive pricing in a market. Ubiquiti has operating expense in single digits where a traditional technology supplier has operating expense in the 30 percent to 60 percent range.


That allows Ubiquiti to sell at retail prices competitors cannot easily match.
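A simplified break-even calculation, using invented numbers rather than either company's actual costs, shows why that operating expense gap turns into pricing room:

```python
# Invented numbers: how opex, taken as a share of revenue, sets a price floor
def breakeven_price(unit_cost, opex_share):
    """Lowest price covering the unit's hardware cost plus opex as a share of revenue."""
    return unit_cost / (1 - opex_share)

unit_cost = 60.0  # hypothetical cost to build one radio

print(f"Low-opex supplier (8% of revenue):     ${breakeven_price(unit_cost, 0.08):.2f}")
print(f"Traditional supplier (45% of revenue): ${breakeven_price(unit_cost, 0.45):.2f}")
```

On those assumptions, the low-opex supplier can break even at roughly $65 while the traditional supplier needs about $109 for the same hardware.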


BT Changes Mind About Branded Mobile Service


BT appears to have changed its mind about the retail mobile market. Having won 4G spectrum (2x15 MHz of FDD spectrum and 20 MHz of TDD spectrum at 2.6 GHz), BT had suggested it would not build a national retail network but would use the 4G spectrum as a way to augment its fixed network operations.

Now BT says it will launch its own retail 4G network. The thinking is that BT will source wholesale mobile connectivity from one of the U.K. mobile service providers to provide full mobile access, while using its own spectrum largely for fixed or location access.

That raises some interesting new questions. BT is not the first service provider to imagine using a mix of wholesale “mobile” access and “Wi-Fi access whenever possible.” Republic Wireless, for example, is using precisely that approach, offloading Internet access to Wi-Fi whenever possible.

But the new issue is the degree to which Wi-Fi roaming could allow an ISP to create an “untethered” but not fully mobile service offering, as cable operators basically are doing with their public hotspot networks, creating a national Wi-Fi roaming capability.

In BT’s case, wholesale mobile spectrum would allow users to use the Internet when they are in transit, with the expectation that most Internet use will happen when people are at home, at work, or within range of a public Wi-Fi hotspot.

That is why some believe small cells incorporating Wi-Fi will be a game changer for mobile service providers, easing heavily congested data pipes while linking together billions of devices into a single network architecture, according to IHS iSuppli.

Small cells--low-power base stations each supporting approximately 100 to 200 simultaneous users--will augment mobile coverage and capacity in dense urban areas.

That is the mirror image of the BT approach, which augments fixed coverage with a mobile overlay.

So where mobile operators will use Wi-Fi to offload mobile traffic, BT essentially will use mobile to augment and “upload” fixed traffic.

But both of those approaches blend “mobile” and “fixed” Internet access. The unknown is whether there could arise a market for Wi-Fi-only devices that take advantage of the growing availability of Wi-Fi, much as Wi-Fi-only tablets get used.

Already, in most developed nations, smart phone users are in zones where Wi-Fi can be the primary Internet connection 80 percent to 95 percent of the time they use the Internet.
