Monday, May 20, 2013

How Big a Revenue Boost from LTE?


Whenever a next-generation mobile network replaces an older network, there typically is room for both product substitution that does not dramatically affect total revenues, and incremental revenue lift, initially from higher prices, and later from new services.


Juniper Research forecasts that Long Term Evolution (LTE) network subscribers will more than double, from an estimated 105 million in 2013 to nearly 220 million in 2014.

What that means in terms of incremental revenue is less clear, though many service providers are using the LTE rollout as an opportunity to raise data plan prices. In many markets, 4G data plans will simply cannibalize 3G plans, with some incremental revenue lift if operators are able to charge a 4G pricing premium.

That might be more the case in developing markets, where 3G cost premiums over 2G rates were quite significant.

But market conditions might limit the price premium possible in particular markets. In many markets, 4G tariffs have had to be lowered, or usage buckets increased while prices remained constant.

And some competitors have simply chosen not to charge a premium for LTE access.

In some cases, consumers think the 4G prices are too high.


That is not to say new applications are unimportant. It might turn out that revenue lift occurs for indirect reasons, such as users consuming more mobile data as appetites for mobile video entertainment consumption continue to grow.

It also is possible more consumers will start using tethering features for their tablets and PCs, which likewise will increase consumption. The point is that, even in the absence of new apps, mobile service providers should see incremental revenue from 4G.

But the “gross revenue” figures we will be seeing will have to be weighed against the cannibalization of 3G data revenues.
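A back-of-the-envelope sketch in Python, using purely hypothetical subscriber counts and ARPU figures, shows why the gross 4G number overstates the actual lift:

    # Back-of-envelope sketch of 4G revenue "lift" versus 3G cannibalization.
    # All figures are hypothetical, for illustration only.
    subscribers_migrating = 10_000_000   # 3G users who move to 4G (assumed)
    arpu_3g = 30.0                       # monthly 3G data ARPU, USD (assumed)
    arpu_4g = 35.0                       # monthly 4G data ARPU, USD (assumed premium)

    gross_4g_revenue = subscribers_migrating * arpu_4g
    lost_3g_revenue = subscribers_migrating * arpu_3g
    incremental_revenue = gross_4g_revenue - lost_3g_revenue

    print(f"Gross 4G revenue:        ${gross_4g_revenue:,.0f}/month")
    print(f"Cannibalized 3G revenue: ${lost_3g_revenue:,.0f}/month")
    print(f"Net incremental revenue: ${incremental_revenue:,.0f}/month")
    # Net lift is only the $5 premium per migrating user, a small fraction
    # of the headline gross figure.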

“To some extent, 4G may not impact mobile innovation the way 3G did,” observes Dan Hays, PwC US Wireless Advisory Leader. “We may be more likely to see second order effects from 4G rather than new things enabled by the technology itself.”

In other words, there might not be as much application innovation as some believe, nor might the revenue lift be as significant as some expect.

“I believe 4G will enable operators to deliver a more consistent experience, more ubiquitously, at a lower cost and allow them to make money and stay in business,” said Hays. That sounds a bit like the upside from fiber-to-home networks.

There is some revenue upside, particularly from video entertainment services. But much of the benefit comes from “future proofing” or lower operating and repair costs. Lower cost per bit is one advantage, for example.

Saturday, May 18, 2013

Two Orders of Magnitude More Access Speed Within 10 Years? History Says "Yes"

If history is any guide, gigabit Internet access will not be at all rare in a decade, though how much demand there is for 1-Gbps service might well hinge on retail pricing levels.

In August 2000, only 4.4 percent of U.S. households had a home broadband connection, while 41.5 percent of households had dial-up access. A decade later, in 2010, dial-up had declined to 2.8 percent of households, while 68.2 percent of households subscribed to broadband service.

If you believe gigabit access is to today’s broadband as broadband was to dial-up access, and you believe the challenge of building gigabit networks roughly corresponds to the creation of broadband capability, a decade might be a reasonable estimate of how long it will take before 70 percent of U.S. homes can buy gigabit access service, and possibly 50 percent do so.

Consider that by June 2012 about 75 percent of U.S. households could buy a service of at least 50 Mbps, while half could buy service at 100 Mbps. So it took about a decade to put into place access running roughly three orders of magnitude faster than the dial-up baseline of about 56 kbps.
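The arithmetic is simple enough to sketch, assuming a 56 kbps dial-up baseline and the tier speeds cited above:

    # Rough orders-of-magnitude comparison of access speeds (assumed typical rates).
    from math import log10

    dialup_kbps = 56            # dial-up baseline
    speeds_mbps = {"50 Mbps (2012 tier)": 50,
                   "100 Mbps (2012 tier)": 100,
                   "1 Gbps (gigabit)": 1000}

    for label, mbps in speeds_mbps.items():
        ratio = (mbps * 1000) / dialup_kbps
        print(f"{label}: {ratio:,.0f}x dial-up, ~{log10(ratio):.1f} orders of magnitude")
    # 50-100 Mbps is roughly three orders of magnitude above dial-up, while
    # 1 Gbps is roughly two orders above the 10-25 Mbps tiers most people buy today.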

The key distinction is between “availability” and “take rate.” Even though consumers are starting to buy faster access services, most seem to indicate, by their buying behavior, that 20 Mbps or 25 Mbps is “good enough,” when it is possible to buy 50 Mbps or 100 Mbps service.

In the U.K. market, for example, though service at 30 Mbps is available to at least 60 percent of homes, take rates were only about seven percent in mid-2012 (to say nothing of demand for 100 Mbps).

The European Union country with the highest penetration of such services was Sweden, at about 15 percent, in mid-2012.

To be sure, retail prices are an issue. With relatively few exceptions, U.S. consumers tend to buy services of up to 25 Mbps, and the price of a gigabit service is probably the main reason demand for it remains limited.

That is the reason behind Google Fiber's disruptive pricing for gigabit access at $70 a month. That pricing umbrella implies far lower prices for 100 Mbps service than consumers can buy at the moment.

And that is every bit as important as the headline gigabit speed. If a gigabit connection costs $70 a month, will a 100-Mbps connection cost $35 a month?
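A quick price-per-Mbps comparison, using the $70 gigabit price plus assumed figures for the other tiers, shows how much room that umbrella leaves:

    # Price-per-Mbps comparison (illustrative figures only).
    offers = {
        "Google Fiber 1 Gbps":        (70.0, 1000),   # $/month, Mbps
        "Hypothetical 100 Mbps tier": (35.0, 100),    # the $35 question above
        "Typical 25 Mbps tier":       (45.0, 25),     # assumed incumbent pricing
    }
    for name, (price, mbps) in offers.items():
        print(f"{name}: ${price / mbps:.2f} per Mbps")
    # Even at $35, the 100 Mbps tier would cost five times as much per Mbps as
    # the gigabit offer, which shows how much room the $70 umbrella leaves under it.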

Friday, May 17, 2013

Fixed Networks are for "Capacity," Mobile Networks are for "Coverage"


These days, in many markets, people using smart phones are on fixed networks for Internet access more than on the mobile network.

In North America, as much as 82 percent of Internet traffic from smart phones occurs indoors, where users can take advantage of Wi-Fi instead of the mobile network, one study suggests.

In Western Europe, as much as 92 percent of Internet usage from smart phones occurs indoors.

So to a large extent, that means the fixed network provides “capacity,” while the mobile network provides “coverage,” a statement that describes the two ways a small cell can provide value for an urban mobile network as well.

Wi-Fi offload happens mostly in the office and the home. Some small cells will include Wi-Fi access, but the bulk of Internet activity still occurs indoors, not outdoors, where small cells will reinforce the mobile macrocell network.

Some tier one carriers have moved to create their own networks of public Wi-Fi hotspots, and many can serve customers from owned fixed networks as well. That makes the fixed network and public Wi-Fi a direct part of the mobile network.

In other cases, carriers simply passively allow their devices to use non-affiliated Wi-Fi networks, as when a mobile service provider allows a user to roam onto a fixed network owned by another service provider. 

That is one more example of the loosely-coupled nature of the Internet ecosystem. A mobile provider can offload traffic to another carrier with which it has no formal business relationship. 








Thursday, May 16, 2013

Will TV White Spaces Be Important?


Whether TV white spaces spectrum is going to be a big deal or not might hinge on how much real spectrum is available in given markets, plus manufacturing volume to get retail prices of network elements down to a level where the spectrum has a reasonable business case.

At a high level, it isn’t going to help as much in urban areas, where interference issues are more constraining.

It might prove quite important in some rural areas, where there is a lot more unused bandwidth because there are not enough people living in the region to create incentives for TV broadcasters to occupy the channels. In areas where few people live, there might be lots of bandwidth, just few potential users or customers. Every location is different.

While at least 6 megahertz is available throughout most of the United States, there are a few locations where there is much more spectrum available. Though most of the spectrum cannot be used in most locations, the white spaces band includes 150 MHz of spectrum in total.  
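A rough sizing sketch, using assumed channel availability and an assumed spectral efficiency, shows why the business case swings so widely by location:

    # Rough sizing of the TV white spaces band (assumptions noted inline).
    total_band_mhz = 150      # total white spaces spectrum cited above
    channel_mhz = 6           # one U.S. TV channel
    channels = total_band_mhz // channel_mhz
    print(f"Theoretical maximum: {channels} channels of {channel_mhz} MHz each")

    # In a rural market, many channels may be vacant; assume 12 usable channels
    # and a modest 2 bits/sec/Hz spectral efficiency (both assumptions).
    usable_channels = 12
    bits_per_hz = 2
    throughput_mbps = usable_channels * channel_mhz * bits_per_hz * 1.0
    print(f"Illustrative shared capacity: ~{throughput_mbps:.0f} Mbps")
    # In urban markets, only a channel or two may be free, so capacity is far lower.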


Other sources of lightly-regulated or unlicensed spectrum might be made available in the future. And new technologies such as agile radios and ultra-wideband are available, but regulatory action is required to enable their use.

And though the general rule has been that spectrum is licensed to entities for specific purposes, unlicensed spectrum might be crucial.

"Unlicensed" Spectrum Doesn't Always Mean "You Can Use it" Without Paying


It sometimes is easy to forget that it is not as easy to become an ISP in some nations as in others. Consider the matter of “unlicensed spectrum,” for example.

“Unlicensed” spectrum exists. But in about 66 percent of nations, use of that spectrum is not really license-free. Based on responses from 75 countries, 33 percent of national regulators require a spectrum license to use the 2.4 GHz or 5 GHz “unlicensed” bands, a study found.

Another 33 percent of national regulators require obtaining an operating license, though not a spectrum license. About 33 percent do not require a license of any type. In a small fraction of cases (two percent) use is forbidden.

"Mix and Match" is one Advantage of Software Defined Networks


If you wanted to rip cost out of any device, app or network, what are some of the things you’d do? You’d remove parts. You’d probably reduce size.

Shoretel, for example, sells docking stations for iPhones and iPads that allow the iOS devices to act as the CPU, while the docking station provides all the peripheral support.



And that's as good an example as any of how a network designer would try and wring cost out of a network.


You’d rely as much as possible on functionality that could be supplied by other common CPUs. 

You’d pick passive solutions, not active solutions, as often as possible. You’d simplify with an eye to improving manufacturing tasks.

You also would create a “mix and match” capability across specific makes and models of network gear. You’d want to be able to use a variety of network elements, made by many different suppliers, interchangeably.

You’d create a network using common, commodity-priced devices as much as possible.

In other words, you would make devices, networks and apps "as dumb as possible," and as flexible and interchangeable as possible.

If you think about software defined networks, that’s an application of the principle. Not “everything” about SDN is “as dumb as possible;” only the data plane elements.

The corollaries are that such approaches create networks that also are “as flexible as possible” and “as affordable as possible.”

The control plane, by contrast, you would still want to be as “smart as possible,” and you could afford to make it so, since the key to much of the cost reduction is the ability to share a physical resource across a broad number of end users, subtending devices or nodes.

That is why the per-user or per-customer cost of an expensive headend is rather low, as a percentage of the total cost of a network. On the other hand, the total cost of smart CPUs (decoders) used by every end user or customer is so high because there is so little sharing possible: each customer needs one or more decoders.

That was what drove cable operator Cablevision Systems to adopt a “network digital video recorder” strategy. By centralizing the CPU functions, the ability to share the cost of processing infrastructure was vastly improved.
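A simple cost-sharing sketch, with hypothetical headend and decoder costs, makes the point:

    # Why sharing a centralized resource drives per-user cost down (hypothetical figures).
    headend_cost = 5_000_000        # one shared headend / network DVR complex (assumed)
    subscribers = 1_000_000
    decoder_cost = 150              # one set-top decoder per subscriber (assumed)

    shared_cost_per_sub = headend_cost / subscribers
    dedicated_cost_per_sub = decoder_cost   # no sharing possible

    print(f"Shared headend cost per subscriber:    ${shared_cost_per_sub:.2f}")
    print(f"Dedicated decoder cost per subscriber: ${dedicated_cost_per_sub:.2f}")
    # The expensive, centralized element is cheap per user; the cheap, per-user
    # element dominates total cost because it cannot be shared.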

The broader principle is that one proven way to reduce cost, increase flexibility and enhance functionality is to separate and centralize control plane (CPU) functions from the data plane functions that are widely distributed throughout a network.

That’s the whole point of software defined networks.
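A conceptual sketch, not modeled on any real controller API, shows the split: dumb data-plane switches that only match installed rules, and a centralized, smart control plane shared across all of them.

    # Conceptual sketch of control/data plane separation (not any real SDN API).

    class DumbSwitch:
        """Data plane: just matches packets against installed rules."""
        def __init__(self):
            self.flow_table = {}          # match field -> action

        def install_rule(self, match, action):
            self.flow_table[match] = action

        def forward(self, packet):
            # No local intelligence: unknown traffic is punted to the controller.
            return self.flow_table.get(packet["dst"], "send_to_controller")

    class Controller:
        """Control plane: centralized smarts shared across many switches."""
        def __init__(self, switches):
            self.switches = switches

        def push_policy(self, dst, action):
            for switch in self.switches:
                switch.install_rule(dst, action)

    switches = [DumbSwitch() for _ in range(3)]
    controller = Controller(switches)
    controller.push_policy("10.0.0.5", "out_port_2")
    print(switches[0].forward({"dst": "10.0.0.5"}))   # out_port_2
    print(switches[1].forward({"dst": "10.0.0.9"}))   # send_to_controller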


What Does Your Business Look Like if Access Bandwidth is Not a Constraint?

There is one thread that underlies thinking and business strategy at firms as disparate as Google, Netflix and Microsoft, namely Moore's Law. Even if communications does not scale in the same way as memory and processing, Moore’s Law underpins progress on the communications front, at least in terms of signal compression, the power and cost of network elements, and the systems built on those building blocks.


As Intel CEO Paul Otellini tells the story, Moore’s Law also implied an inverse relationship between volume and price per unit. Over time, processing and memory got steadily more powerful and cheaper.


The implication for Intel was that it would have to shift from producing small numbers of components selling for high prices to a market built on very large numbers of very cheap components. “Towards ultra-cheap” is one way to describe the progression of retail prices.
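A sketch of the compounding effect, assuming costs halve roughly every two years, shows why volumes and prices move in opposite directions:

    # Compounding effect behind "towards ultra-cheap" (assumed Moore's Law cadence).
    cost_per_unit = 100.0     # indexed starting cost of a given amount of compute
    for year in range(0, 11, 2):
        print(f"Year {year:2d}: cost index {cost_per_unit:7.2f}")
        cost_per_unit /= 2    # assume cost halves roughly every two years
    # Over a decade the same capability costs roughly 1/32 as much, which is why
    # the business shifts from few expensive components to huge volumes of cheap ones.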

You might argue that assumption also drove Microsoft’s decisions about its software business (“what does my business look like if computing hardware is very cheap?”), the confidence Netflix had that broadband would support high-quality streaming (“Will access bandwidth be where it must to support our streaming business?”) and the many decisions Google makes about the ability to support software-based businesses using advertising.

You might argue that the emergence of cloud computing is reshaping existing content and software businesses precisely because of the question “what would my business look like if access bandwidth were not a constraint?”

For Intel, the implications were a radical change in component pricing, reflected back into the way the whole business has to be organized.


Ubiquiti illustrates a related principle, namely the role of disruptive pricing in a market. Ubiquiti runs with operating expense in the single digits, as a percentage of revenue, where a traditional technology supplier has operating expense in the 30 percent to 60 percent range.


That allows Ubiquiti to sell at retail prices competitors cannot easily match.
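A simple break-even sketch, using an assumed unit cost and the operating-expense ratios above, shows why:

    # How a lean cost structure translates into pricing room (illustrative figures).
    unit_cost = 100.0                 # assumed hardware cost to build one device
    opex_ubiquiti = 0.08              # opex as share of revenue ("single digits")
    opex_traditional = 0.40           # midpoint of the 30-60 percent range

    def breakeven_price(cost, opex_share):
        # Price at which revenue covers the unit cost plus operating expense.
        return cost / (1 - opex_share)

    print(f"Ubiquiti breakeven:    ${breakeven_price(unit_cost, opex_ubiquiti):.2f}")
    print(f"Traditional breakeven: ${breakeven_price(unit_cost, opex_traditional):.2f}")
    # Roughly $109 versus $167: the lean supplier can profitably sell at prices
    # a traditional competitor cannot match without losing money.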

