Monday, October 15, 2012

The Future is Here: Users Have More than One Voice Provider

The prediction that consumers and business users of voice services will stop choosing a single supplier, and instead use several, already seems to be a reality. In the past, executives including Ken Paker, TDS VP, argued that, in the future, most end users would have multiple voice providers.

In that analysis, maybe one provider supplies what we now think of as "fixed line voice." Perhaps another supplies mobile voice, while others provide PC voice, or voice within the context of Internet applications. You might argue that already is the way most people actually use voice services and applications.

U.S. mobile penetration, for example, now stands at 101 percent, meaning there are more phones in service than there are people in the United States, according to the latest tally from CTIA-The Wireless Association.

At the same time, hundreds of millions of people routinely use over-the-top services such as Skype or Google Voice in particular situations.

Likewise, one recent study estimated that 45 percent of smartphone users use over-the-top messaging apps, largely in addition to their use of carrier-provided text messaging.


FTTH Business Case Worse than it Used to Be?

Is the investment case for fiber to the home networks getting more challenging? Yes, Rupert Wood, Analysys Mason principal analyst, has argued.

A shift of revenue, attention and innovation to wireless networks is part of the reason.

But the core business case for triple-play services also is becoming more challenging.

All of that suggests service providers will have to look outside the traditional end-user services area for sustainable growth.

Many believe that will have to come in the form of services provided to business partners who can use network-provided information to support their own commerce and marketing efforts.

Those partners might be application developers, content sites, ad networks, ad aggregators or other entities that can partner with service providers to add value to their existing business operations.   

Current location, type of device, billing capabilities, payment systems, application programming interfaces and communication services, storage services, profile and presence information might be valuable in that regard.   

Fiber to the home long has been touted by many as the "best," most "future proof" medium for fixed access networks, at least of the telco variety.

But not by all. Investment analysts, virtually all cable executives and many telco executives also have argued that "fiber to the home" costs too much.

Over the last decade or so, though, something new has happened. Innovation, access, usage and growth have shifted to wireless networks. None of that is helpful for the FTTH business case.

That is not to say broadband access is anything but the foundation service of the future for fixed-network service providers.

Fixed networks in all likelihood always will provide orders of magnitude more usable bandwidth than wireless networks.

The issue, though, is the cost of building new fiber networks, balanced against the expected financial returns. “FTTH is often said to be ‘future-proof’, but the future appears to have veered off in a different direction,” says Wood.

Regulatory uncertainty, the state of capital markets and executive decisions play a part in shaping the pace of fiber deployment. But saturation of end user demand now is becoming an issue as well.   

The basic financial problems include competition from other contestants, which lowers the maximum penetration an operator can expect. FTTH must be deployed past every location, but services will be sold to only some percentage of those locations. There is a stranded-investment problem, in other words.
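
A simple way to see the stranded-investment effect is to spread the construction cost of homes passed over only the homes that actually buy service. A minimal sketch, with all dollar figures assumed purely for illustration:

```python
# Illustrative FTTH economics: how the take rate drives cost per subscriber.
# All dollar figures are assumptions for the sake of the example,
# not industry benchmarks.

def cost_per_subscriber(cost_passed, cost_connected, take_rate):
    """Network cost carried by each paying subscriber.

    cost_passed:    construction cost per home passed (spent whether
                    or not the home ever buys service)
    cost_connected: additional drop/equipment cost per connected home
    take_rate:      fraction of homes passed that subscribe
    """
    return cost_passed / take_rate + cost_connected

for take_rate in (0.5, 0.35, 0.2):
    cost = cost_per_subscriber(700, 600, take_rate)
    print(f"take rate {take_rate:.0%}: ${cost:,.0f} per subscriber")
```

Every point of penetration lost to a competitor pushes the cost of the whole outside plant onto a smaller base of paying customers.
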
The other issue is that the triple-play services bundle is itself unstable. FTTH networks are not required to provide legacy voice services; in fact, the existing networks work fine for that purpose.

One can argue that broadband is needed to provide the next generation of voice (VoIP or IP telephony), but demand for fixed-line voice has been dropping for a decade. So far, there is scant evidence that VoIP services offered in place of legacy voice have raised average revenue per user.

Most observers would note the trend goes the other way: in the direction of lower prices.

And though entertainment video services offer a clear chance for telcos to gain market share at the expense of cable operators, there is at least some evidence that overall growth is stalling, limiting gains to market share wins.

Broadband access also is nearing saturation, though operators are offering higher-priced new tiers of service that could affect ARPU at some point.

So the issue is that the business case for FTTH has to be carried by a declining service (voice), a possibly mature service (video) and a nearly mature service (broadband access).

And then there is wireless substitution. Fixed-line voice already is being cannibalized by mobile voice. Some observers now expect the same thing to start happening in broadband access, and many note new forms of video could displace some amount of entertainment video spending as well.

The fundamental contradiction is that continued investment in fixed-line networks, which is necessary over time, occurs in a context of essentially zero growth.

Atlantic-ACM, for example, has predicted that U.S. wireline network revenue, overall, will be flat at best between now and 2015. Compound annual growth rates, in fact, are forecast to be slightly negative, at about 0.3 percent. Total industry revenue was about $345 billion in 2009; by 2015, Atlantic-ACM predicts, revenue will be about $337 billion.
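
As a quick check of that arithmetic, here is the implied compound annual growth rate, assuming the compounding runs over the six years from 2009 to 2015:

```python
# Implied compound annual growth rate (CAGR) of the Atlantic-ACM forecast,
# assuming a 2009 base year and a 2015 end point (six years).
start, end, years = 345e9, 337e9, 6

cagr = (end / start) ** (1 / years) - 1
print(f"implied CAGR: {cagr:.2%}")
# about -0.39% per year; the cited "about 0.3 percent" figure
# presumably compounds from a slightly later base year
```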

When Does Functional Separation or Structural Separation Make Sense?

Analysts at the IBM Institute for Business Value argue that some form of functional or structural separation offers the highest revenue growth potential for telcos in the developed world. The highest returns, in that analysis, come from structurally separated operations, where network services are spun off into a wholly separate company and all retail providers buy services from the wholesale entity.

Those conclusions are not widely shared in the North American market, it is fair to say. But it remains to be seen whether the analysis might someday make more sense.

The worst returns come from a strategy of sticking with the closed, voice-centric model, with better returns if operators are able to partner with device, application and infrastructure providers to create new services.

But operators do best where they separate their network service operations from retail operations, either to allow retail units to concentrate on vertical markets and customer segments, or by a more-robust effort to attract third parties to use the network resources.

In other words, some forms of functional separation--running the network business separately from the retail business--might make more sense, and lead to greater revenue success, than keeping the functions combined.

In some other cases, service providers might decide essentially to get out of the networks business, to focus on retail operations. That has seemed to make sense in Southeast Asian markets where the costs of next-generation investment are deemed to offer less financial reward than deploying that capital in other ways, especially in mobile services or out-of-region expansion.

In "Point Deployment" of UC the Wiser Choice?

There are lots of reasons why industry buzz words change, just as there are lots of reasons why much-touted products fail to achieve immediate success. Consider "unified communications" and "collaboration."

Sometimes the problems are much tougher than a "simple" technology substitution might imply. And that should raise immediate questions--and suggest obvious answers--for service providers selling unified communications, IP telephony and collaboration services to end users.

So here's your test: ask yourself whether you could still get your job done if you did not have access to an IP phone, unified messaging, find-me/follow-me, visual voice mail, desktop conferencing or even the ability to launch communications from inside your organization's key business processes and software.

I suspect our answers would be the same. It might take longer to get things done, but they would get done. But ask whether you could get your job done without any Internet access at all, without a mobile phone, without a phone of any type. 

Somewhere in here, many of us would begin to say "no, we can't get our jobs done." So that's the issue with unified communications and collaboration: how much is "nice to have" and how much is "need to have"? 

And since "wants" become "needs" over time, how do you position a "needed" service rather than a "wanted" feature. In fact, the difference between wants and needs probably is why lots of organizations have deployed some elements, but not a complete unified communications or collaboration capability. 

That might suggest a key reason why "point" deployment of UC and collaboration has made more sense than the grand vision. That, and cost. 

The problem with grand information technology visions is how expensive they are, how difficult they are to implement and how often they fail. IT architects likely are justifiably cautious about grand visions for this reason. 

For observers who would argue that unified communications has been something of a tough sell, that might be one reason for slower than expected adoption. "To realize the full benefits of UCC requires a major change management effort spanning not only technology but people and processes," says Jim Mortleman, Silicon.com writer. 

"In addition, a confusing array of technologies, approaches and vendors, as well as a current lack of standards, leads many to conclude it's better to wait until the market settles down before placing their bets," he has said. 

"Most CIOs I know are considering  projects rather than doing them," says Nick Kirkland, CIO Connect  

Still, many would say that human behavior plays some significant role in shaping or slowing deployments as well. 

To the extent that UC changes the way people can communicate, it may also require change of the way people do communicate. It isn't simply the technological means, but the broader potential shift within an organization of relations between people. 

I suspect it remains true that some organizations are more trusting than others. Some have greater confidence in flatter, more open organizational styles. Some organizations talk about "empowering people," but don't act that way. 

Others do. I suspect any grand vision for collaboration and UC will provide the greatest leverage only within organizations able to exploit the new capabilities because they already have the human skills to match.

"It's not a matter of tools, it's a matter of working processes, the way individuals communicate, the way they're managed, how the organization is structured, the mechanics of workflow and so on," says Rob Bamforth, principal analyst at Quocirca.

"All of these things have to be understood first," he says. 

If those are key prerequisites, it should be obvious why collaboration and UC visions have been adopted more slowly than one might have projected. 

Ultimately, most companies aren't going to reengineer their processes just to accommodate some new technology, one might argue. 

And if that is the case, incrementalism and point solutions will be easier to adopt than a whole new UC or collaboration architecture. That is especially true for smaller organizations, which in fact have no grand vision and are going to be eminently practical about tools they can use now without requiring lots of training or disrupting the way they do things. 

"Faster," "easier," "better" or "cheaper" are likely to be the winning approaches to establishing value. If travel expenses are the problem, then videoconferencing could be the solution. 

No grand change is required, just a simple adoption of a new tool to replace another. Even there, though, some users report there are human issues beyond use of desktop conferencing technology, for example. 

The mechanics of scheduling meetings and social etiquette around meetings can become issues. Many of the same issues likely occur when introducing other UC and collaboration tools as well. 

So the upshot is that if one is committed to a grand move to enterprise-wide UC and collaboration, one had better also have absolute top-level commitment to push through all of the human changes that will be required, and a business culture that already is open, participatory, flexible and amenable to change. Failing that, the vision will fail outright, or simply deliver incremental benefits at a cost that far outweighs the gains.

How Many Carriers are Sustainable, for U.S. Mobile Business?

What is a sustainable market share structure for the U.S. mobile business? Nobody believes the mobile business will be as competitive as retail apparel, for example. But few probably believe a duopoly or monopoly is a good idea, either. The issue is what market structure might yield the greatest amount of competition. Some might call that a contestable market.

And though economic analysis normally assumes that it is the interests of buyers that matter, the interests of providers also matter. If a provider cannot stay in business, it cannot provide competition. So the structure of markets, in particular the market’s ability to sustain multiple players over time, does matter.

In fact, some might argue that if a greater degree of market share could be garnered by T-Mobile USA and Sprint, that would be a pro-competitive move.

And, to be sure, most observers no longer believe, as once was the prevailing view, that telecommunications is a natural monopoly.

But neither do most observers believe an unlimited number of facilities-based providers can exist, either. So the question is what number of leading providers are sustainable, over the long term, providing both reasonable competition and reasonable financial sustainability. 

As a rough rule of thumb, assuming market share relates in some relatively linear way to profits, some of us would argue that market share rank affects profit margin, and to a large extent the magnitude of profits, by about a 50-percent rule. In other words, whatever the profits of the number-one provider, the number-two provider (by market share) has about half the level of profits as the number-one provider.

The number-three provider would have roughly half the profits of the number-two provider; while the number-four provider would have roughly half the profits of the number-three provider. It's a rough rule of thumb, but tends to be a workable hypothesis in most markets. 
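
As a worked example of the heuristic, suppose the market leader earned $10 billion in profit (a hypothetical figure, used only to illustrate the rule):

```python
# Sketch of the "50-percent rule" of thumb: each provider earns roughly
# half the profit of the provider one rank above it.
# The leader's profit is hypothetical, purely for illustration.
leader_profit = 10.0  # billions of dollars, assumed

for rank in range(1, 5):
    profit = leader_profit * 0.5 ** (rank - 1)
    print(f"No. {rank} provider: about ${profit:.2f}B in profit")
```

That yields roughly $10 billion, $5 billion, $2.5 billion and $1.25 billion for the top four providers, so the economics deteriorate quickly below the top two ranks.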

In a business with high fixed costs, the number of providers that can exist will be smaller than the number of competitors in markets with low fixed costs. And there is no doubt that mobile networks have high fixed costs: not as high as fixed networks, but substantial nevertheless.

That does raise the issue of how many competitors can remain in business, long term. The existence of wholesale makes a difference, as many competitors might carve out niches based on use of wholesale mechanisms, which have the practical advantage of offering lower effective fixed costs. 

So a reasonable person might argue for a sustainable market structure featuring a small number of facilities-based retailers, plus some number of wholesale-only providers, and then many small retailers using those wholesale facilities. In any case, one would expect a rather limited number of leading providers.

Economic models are all about the assumptions, and that applies to analyses of what should happen as additional spectrum is made available to U.S. wireless providers. Specifically, policymakers looking after the "public welfare" must make choices that could affect the amount of consumer benefit.

The problem, as with virtually everything in the global mobile business or the global fixed network business, is the business terrain between monopoly on one hand and multiplicity on the other. Most policymakers globally have concluded that monopoly is, in fact, a poor way to encourage innovation, efficiency and lower prices.

On the other hand, a simple spreadsheet exercise will be enough to convince anyone that the mobile or fixed network communications business, when conducted in a facilities-based way, simply cannot support lots of contestants.

Whatever you might suppose total demand is, when multiple providers start to divide up that demand, markets can become ruinous, meaning no contestant gets enough market share and revenue to sustain itself.

The Phoenix Center for Advanced Legal & Economic Public Policy Studies long has argued that the sustainable number of network-based contestants in either the wireless or fixed network business will be limited to just a few firms, for this reason.

Phoenix Center Chief Economist George Ford now argues that consumers actually would be better off if any future wireless spectrum auctions allow all wireless providers to bid, rather than trying to ensure that spectrum assets are allocated more broadly.

This might seem counter-intuitive. If competition is better than a monopoly, shouldn't broader spectrum awards create more competition, and therefore lead to more innovation and lower retail prices?

That's the argument the Phoenix Center takes on in a study. There are two key assumptions.

"First, we assume that price falls as the number of competitors increases (e.g., the Hirschman Herfindahl Index or “HHI” falls)," says Ford. "More formally, we assume Cournot Competition in Quantities."

In other words, the Phoenix Center uses the same framework as the Federal Communications Commission and the Department of Justice when it comes to assessing market concentration and the impact of competition on retail prices.
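
A minimal sketch of that first assumption, using the textbook Cournot model with N identical firms, linear demand and constant marginal cost (all parameter values below are assumed for illustration):

```python
# Cournot competition in quantities: N identical firms, linear demand
# P = a - b*Q, constant marginal cost c. Parameters are illustrative
# assumptions, not calibrated to any real market.
a, b, c = 100.0, 1.0, 20.0  # demand intercept, slope, marginal cost

def cournot(n):
    q = (a - c) / (b * (n + 1))  # equilibrium output per firm
    price = a - b * n * q        # market-clearing price
    hhi = 10_000 / n             # identical firms each hold a 1/n share
    return price, hhi

for n in (1, 2, 4, 8):
    price, hhi = cournot(n)
    print(f"N={n}: price={price:6.2f}  HHI={hhi:6.0f}")
```

As N grows, price falls toward marginal cost and the HHI falls, which is exactly the relationship the concentration screens assume.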

A second key assumption is important, though. The Phoenix Center does not assume that the capacity a firm gets from spectrum is linearly related to the amount of spectrum it holds.

That is, if we double the amount of spectrum, then the capacity provided to a firm from that additional spectrum more than doubles. That might be a head turner, at first. After all, are we not dealing here with laws of physics?

My apologies to Dr. Ford if I misapply the assumption, but here's how I'd explain it.

Yes, laws of physics do apply. But wireless networks routinely "re-use" spectrum. A single physical allotment can be used repeatedly across a network, with a primary determinant being the coverage size of each cell. Lots of smaller cells can use a single amount of frequency more efficiently than a few big cells.

But cutting the cell radius by 50 percent quadruples the number of required cells. And since each cell represents more capital investment, you see the issue. Spectrum does not relate linearly to effective end-user bandwidth; the actual bandwidth a network can provide depends on the amount of spectrum re-use.
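
A back-of-the-envelope sketch of that geometry, with the coverage area, per-cell capacity and per-cell cost all assumed for illustration:

```python
import math

# Cell-splitting arithmetic: the number of cells needed to cover a fixed
# area scales with the inverse square of the cell radius. Coverage area,
# per-cell capacity and per-cell cost are assumed values, for illustration.
AREA_KM2 = 1_000.0       # total coverage area, assumed
MBPS_PER_CELL = 100.0    # capacity each cell wrings from the spectrum, assumed
COST_PER_CELL = 250_000  # dollars per cell site, assumed

def network(radius_km):
    cells = math.ceil(AREA_KM2 / (math.pi * radius_km ** 2))
    return cells, cells * MBPS_PER_CELL, cells * COST_PER_CELL

for r in (2.0, 1.0, 0.5):
    cells, mbps, cost = network(r)
    print(f"radius {r:3.1f} km: {cells:5d} cells, "
          f"{mbps:9,.0f} Mbps total, ${cost:12,.0f}")
```

Halving the radius roughly quadruples the cell count, and with it both the capacity delivered from the same spectrum and the capital bill.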

"Richer" providers can better afford to create the denser smaller cell networks, so can provide more bandwidth from a fixed amount of spectrum.

"Wireless Competition Under Spectrum Exhaust" provides the detailed model, but the point is that a smaller number of new spectrum recipients creates more effective end-user bandwidth than a larger number of new recipients. That seems counter to reason, and the analysis is important for suggesting the "common sense" understanding is wrong.

The important public policy implication is that rules to "spread the spectrum awards" among more providers have a negative impact on end-user pricing. In fact, a more concentrated distribution should lead to increases in supply that more effectively lower prices.

It is not what most might assume is the case. The policy implication is that it is not helpful to restrict the ability of any contestants, especially the dominant contestants, to acquire more spectrum in new auctions.

One might note that bidding rules in some countries, such as Germany, do in fact limit the amount of spectrum the dominant providers can acquire. Though the Phoenix arguments are about upcoming policy for U.S. spectrum auctions, the same analysis should apply in all markets.

The issue at hand for antitrust regulators at the Department of Justice, when evaluating the consumer impact of an AT&T acquisition of T-Mobile USA, is whether the deal would exceed a rule-of-thumb threshold for market concentration. Some would argue that, no matter what happens with this particular deal, concentration in the mobile business will continue. See the DoJ guidelines for more on the algorithm.
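
The rule of thumb in question is the Herfindahl-Hirschman Index screen in the DoJ/FTC Horizontal Merger Guidelines: a market above 2,500 points is treated as highly concentrated, and a merger that raises the HHI by more than 200 points in such a market is presumed to enhance market power. A sketch with hypothetical market shares (the figures below are invented for illustration, not actual carrier shares):

```python
# Herfindahl-Hirschman Index (HHI) merger screen, per the DoJ/FTC
# Horizontal Merger Guidelines. Market shares are hypothetical,
# purely for illustration.

def hhi(shares_pct):
    """HHI in points: sum of squared market shares (in percent)."""
    return sum(s ** 2 for s in shares_pct)

pre  = [32, 30, 16, 11, 11]   # hypothetical five-player market, percent shares
post = [32 + 11, 30, 16, 11]  # the same market after the No. 1 buys a No. 4

delta = hhi(post) - hhi(pre)
print(f"pre-merger HHI:  {hhi(pre):5.0f}")   # 2,422
print(f"post-merger HHI: {hhi(post):5.0f}")  # 3,126
print(f"change: {delta:+.0f}")               # +704, well past the 200-point screen
```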

U.S. Mobile Penetration Hits 101%

U.S. mobile penetration has hit 101 percent, meaning there are more mobile devices in use than there are people in the United States, according to the latest tally from CTIA-The Wireless Association. Among the headline findings:

There are some 321.7 million U.S. mobile subscriber connections, representing 101 percent penetration.

There were 306 million subscriptions in June 2011, meaning connections grew about five percent year over year.

Smartphone counts grew 37 percent year over year, with 130.8 million smartphones or wireless-capable PDAs now active, up from 95.8 million in June 2011.

There are 300.4 million active data-capable devices, up from 278.3 million in June 2011, and 21.6 million mobile-enabled tablets, laptops and modems, up 42 percent from 15.2 million in June 2011.
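
A quick check of those year-over-year percentages against the raw counts (using only the figures reported above):

```python
# Verify the reported year-over-year growth rates from the raw counts.
# Figures are in millions, June 2011 -> June 2012, as reported by CTIA.
pairs = {
    "total connections":      (306.0, 321.7),
    "smartphones/PDAs":       (95.8, 130.8),
    "tablets/laptops/modems": (15.2, 21.6),
}

for label, (year_ago, now) in pairs.items():
    growth = now / year_ago - 1
    print(f"{label}: {growth:.0%} year-over-year growth")
# prints roughly 5%, 37% and 42%, matching the reported figures
```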

Mojiva Launches Tablet-Only Ad Network

Mojiva, the mobile advertising network, is launching Mojiva Tab, an ad network aimed at inventory that will only be delivered on tablets. The expectation is that Mojiva Tab will gain in the market by focusing exclusively on the tablet format, with dedicated resources and staff, rather than supporting both smart phone and tablet campaigns from one set of resources. 

Some will see that as a significant and warranted change. Marketing specialists often note that the form factor and authoring requirements of tablets are substantially different from desktop PC and smart phone platforms. 
