Monday, October 15, 2012

You Can't Turn Off the PSTN Without Answering Some Basic Questions

The Federal Communications Commission has a Technology Advisory Committee (TAC) that is supposed to recommend ways the FCC can help prepare for the day when the PSTN is "turned off." Like that of many other committees, the TAC's day-to-day work doesn't get much attention.

Among the issues in setting a date for terminating the PSTN is ensuring there is virtually universal broadband service, since the IP alternatives will require broadband. There are lots of other issues, to be sure.

But many of the issues will involve the framework for handling "carrier of last resort" obligations and how common carrier regulation is applied. In a market expected to feature multiple ubiquitous networks, should historic common carrier regulations be extended to other providers, or should less restrictive frameworks be used?

Beyond that, there are other issues, such as the financial backdrop against which regulations are applied. Universal service funding mechanisms, and high cost support, in turn tend to hinge on the amount of social surplus generated by the industry as a whole.

Once upon a time, high gross revenues and high profit margins made it possible to fund universal service from business customer services. Over the last few decades, that has changed, and funding has come to rely on per-line consumer fees, from both fixed network and mobile network services.

And there is not as much surplus as there once was. In fact, over time, all the mechanisms will likely have to rely on taxes on mobile customer services, rather than shrinking fixed network revenues.

In fact, one might plausibly argue that, in the future, taxes on broadband and mobile services will be the dominant funding sources, not fixed network voice services. On the surface, those might not be seen as issues.

You would be hard pressed to find a single quarter in any recent year when the likes of AT&T and Verizon Communications did not show steady revenue growth and relatively stable earnings, with the ability to pay dividends. That isn't to say all providers are in the same condition. From time to time, many providers have faced some distress.

But Craig Moffett, Bernstein Research analyst, has been a notable “bear” on business prospects for the large mobile service providers. He now calculates that AT&T and Verizon Wireless are not even earning a return above their cost of capital.

In other words, AT&T and Verizon now are already losing money, investing in networks and services that do not earn back the cost of the borrowed money driving the investments. But most of the problem comes from the wireline businesses, he argues.

AT&T and Verizon executives would disagree, of course. In part, Verizon argues, returns have been depressed recently because of heavy investment, both in the FiOS program and in wireless upgrades, and in part because of the revenue impact of the sluggish economy. Over the long term, those issues will recede, Verizon argues.



One might argue that recent developments in the global telecom business suggest, at the very least, growing strains: non-viability of some business models in some markets, and serious strain in others.
One might infer from the "wholesale only" broadband access models used in Singapore, Australia and New Zealand that facilities-based "very high speed access" is not a business most providers can afford to be in.
Instead, networks providing that access are a functional monopoly, too expensive for more than one provider to attempt.
In Europe, the European Commission seems seriously concerned that European facilities-based broadband providers might not be able to afford the next round of upgrades, and seems to be considering policies that would boost the financial return from new and massive investments.
Does anybody seriously think global growth will be led by anything other than mobile services?
At some level, whether formally stated or not, the profitability of fixed networks will be an issue in future discussions of how to shut off the old PSTN.

The current discussion within the European Community about the investment impact of “net neutrality” rules is not a new debate. In the wake of the passage of the Telecommunications Act of 1996, dominant U.S. fixed-line providers argued, successfully, that mandatory wholesale rules, providing deeply discounted rates for wholesale customers, would severely discourage investment in optical facilities. And, in fact, Verizon's FiOS effort did not get into high gear until after the Federal Communications Commission relaxed such rules.

These days, the EC discussion revolves to a great extent around the impact “network neutrality” rules could have on incentives for broadband investment. Specifically, operators argue that restriction of services to “best effort only,” without the ability to create differentiated service plans involving quality of service measures, will be a significant disincentive to the high rates of investment EC officials would prefer to see.
Some will say the carriers are bluffing about requiring some path to revenue when investing in 100-Mbps or 1-Gbps access facilities. Some of us would disagree. The alternative is to invest in mobile facilities and applications instead.
In fact, some recent global estimates of market share suggest telcos are losing the consumer market share battle to cable companies. Looking just at triple-play accounts, it appears cable operators have roughly 66 percent market share. In other words, telcos arguably are losing the market share battle in the consumer market.
The point is pretty simple. If it appears telcos are losing ground in the consumer market, but dominating and growing the mobile market, and if revenue potential in fixed line network services appears to be waning, at some point it will be a wise executive indeed who decides mobility is really where resources and effort ought to be placed. 
Placing obstacles to a profitable return on massive new investments does not seem calculated to encourage operators to invest substantially more in fixed access networks. 

10-Gbps Wireless Access in 25 Years?

Observers disagree about how much bandwidth people will need, and how much will be available for sale, in various markets in the future. But some observers think the typical U.S. end user will have a breathtaking amount of access bandwidth.

Mobile devices will have the power of a supercomputer, argues Donald Newell, AMD Server CTO. To be more precise, a then-current smart phone will have more processing power than today’s servers, Newell argues. That isn’t even the most-surprising prediction. More startling, in all likelihood, is the notion that a typical wireless consumer will have access to 10 Gbps.

Cisco thinks multi-terabit connections are a reasonable expectation for U.S. consumers, in a couple of decades. Cisco's Dave Evans, chief futurist, thinks at-home consumers will have multi-terabit Internet connections. "I could have an 8-terabit per second connection to my home," he says. "That's more connectivity to my home than most countries have."

"As a result, the core networks will operate at petabit-per-second speeds, about 10 to the 15th power, about three orders of magnitude bigger than terabit networking," Evans says.

"Phones will have more than a terabyte of local memory," adds Mark Lewis, chief strategy officer at EMC, who predicts that all of our digital information will be backed up over the cloud. "If I lose my phone, I can pick up a new one, enter my code word, and it will re-identify me and push all of my information out to my new device."

For wireless networks, typical speeds will be as high as 10 gigabits per second, as fast as the fastest optical core networks today, some would argue.

Bandwidth increases on that order of magnitude, at least in the wireless arena, will require more than spectrum allocation. It will require continued significant advances in signal coding and compression, with some likely changes in network architecture as well. Additional spectrum will help, but it is hard to see typical mobile users getting 10 Gbps without robust new developments in coding.

If not, using today’s technology, cell sites would be so small they would be virtually indistinguishable from a fixed connection, in which case “mobility” would not be possible.
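
One way to see the scale of the coding challenge is the Shannon limit, which caps the capacity of any radio channel at C = B log2(1 + SNR). A minimal sketch in Python, with the 100 MHz allocation and 20 dB signal-to-noise ratio chosen purely for illustration:

```python
import math

def shannon_capacity_bps(bandwidth_hz: float, snr_db: float) -> float:
    """Shannon limit C = B * log2(1 + SNR): the ceiling no coding scheme can beat."""
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + snr_linear)

# Illustrative: a generous 100 MHz mobile allocation at a strong 20 dB SNR.
capacity = shannon_capacity_bps(100e6, 20)
print(f"Capacity: {capacity / 1e9:.2f} Gbps")  # about 0.67 Gbps, far short of 10 Gbps

# Spectral efficiency needed to deliver 10 Gbps in that same 100 MHz:
print(f"Required efficiency: {10e9 / 100e6:.0f} bits/s/Hz")  # 100 b/s/Hz
```

Delivering 10 Gbps in a 100 MHz channel implies roughly 100 bits per second per hertz, far beyond any plausible signal-to-noise ratio, which is why dense spectrum re-use and continued coding advances would have to carry most of the load.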

But 25-year horizons are not meaningful, and predictions for what the world will be like that far out almost always are incorrect. One might find more success betting against today’s 25-year predictions instead.

That is not to say Moore’s Law is repealed, or that users will stop demanding more bandwidth. It’s just that linear projections almost always are wrong, over the long term.

On a relatively immediate basis, though, some projections that can seem outlandish are directionally valid enough to support rational business planning. Netflix, for example, has supported its business by mailing DVDs to customers. It began doing so because there was at one point no way to support delivery over the Internet, even though its very name suggests that possibility.

Netflix CEO Reed Hastings claims that back when even cable modems and digital subscriber line were not available, “we took out our spreadsheets and we figured we’d get 14 Mbps to the home by 2012, which turns out is about what we will get.”

“If you drag it out to 2021, we will all have a gigabit to the home,” Hastings argues.

Still, Netflix took the rational route and did not build its revenue model on bandwidth that wasn’t available; it built on what was feasible at the time. Lots of application service providers based their businesses on inadequate bandwidth and server infrastructure in the early 2000s, and most failed because of those assumptions.

Now, lots of providers are about to make a business out of cloud computing, which is the same concept, but in an infrastructure environment that has changed dramatically.

Timing might not be everything, but it is close. For that reason, no rational executive can build a business today based on expectations of 10 Gbps consumer mobile connections. But the direction is clear enough.

Most Businesses Adopting IP Telephony Still Choose Phone Systems

A recent Forrester Research survey of 567 enterprise and smaller business users that already have adopted IP telephony shows that most buyers so far have chosen premises-based solutions.

Just four percent of respondents say they have adopted a "hosted" IP telephony service. Another four percent reported they had adopted a "telephony as a service" solution. About five percent said their IP telephony solution was outsourced. Taking all three as a group, just 13 percent of IP telephony solutions were hosted, cloud-based or outsourced.

That might make a great deal of sense. The economics of IP telephony tend to suggest that small users can benefit from hosted or cloud-based solutions, while enterprises often can justify owning their own solutions.

The study lends credence to the cable operator strategy of targeting businesses with 20 or fewer employees, as those are the venues where the economics of buying a service are best, compared to buying a premises-based solution.

About 71 percent said their IP telephony solutions were self-maintained, while 16 percent said they owned their solution, but that it also was managed by a third party.

Respondents were more likely to use hosted web conferencing solutions. Some 18 percent reported they use a hosted solution, 25 percent said they used a "conferencing as a service," nine percent said their solution was on premises, but managed, and seven percent used an outsourced solution of some sort. About 40 percent of respondents said they maintained their own solution.

Forrester Research defines an enterprise as any entity with 1,000 or more employees, while a small or medium business is defined as an entity with 20 to 999 employees.


Still, others argue that digital voice will be the fastest-growing U.S. industry in the next five years, and that mostly means voice services provided by cable companies, not Skype or Google Voice, according to a new report from IBISWorld. Cable companies are expected to capture 65 percent of all the revenue in the digital voice space through 2016.

About 80 percent of the revenue will come from consumers and very-small businesses, while enterprises account for about 20 percent of the revenue, the report suggests. But mobility services are a wild card. Although cable-delivered VoIP dominates the landscape today, tomorrow's growth might come from non-cable providers and mobile VoIP, IBISWorld predicts.

"Line Extensions" Will Drive At Least 1/2 of Telco New Revenues

About half of revenue growth over the next three years will come from new lines of business, telco executives believe. But most of that opportunity still consists of line extensions built on current capabilities, according to STL Partners.

Existing core services might provide revenue upside of up to about nine percent, STL Partners reports. Vertical industry services have the potential to provide as much as 10 percent of revenue growth.

Infrastructure services (wholesale services, essentially) might provide eight percent of growth. Allowing third parties to embed communications features into their apps might drive 10 percent of growth. Providing other services to third party app providers could represent as much as 12 percent of revenue growth over the next three years.

Telcos providing their own "over the top" apps might provide five percent of revenue growth.
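
A quick tally shows how those estimates square with the "about half" headline figure; a minimal sketch, with category labels paraphrased from the STL Partners figures above:

```python
# Upper-bound contributions to revenue growth cited above (percent of total growth).
growth_estimates = {
    "existing core services": 9,
    "vertical industry services": 10,
    "infrastructure (wholesale) services": 8,
    "embedded communications for third parties": 10,
    "other services to third-party app providers": 12,
    "own over-the-top apps": 5,
}

# "New lines of business" excludes the existing core services category.
new_lines = {k: v for k, v in growth_estimates.items() if k != "existing core services"}
print(f"New lines of business: up to {sum(new_lines.values())} percent")  # up to 45 percent
print(f"All categories: up to {sum(growth_estimates.values())} percent")
```

The new-business categories sum to as much as 45 percent of growth, which is where the "about half" figure comes from.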

The takeaway is not so much that half of potential new revenue will come from new lines of business, but more that each opportunity builds logically from what service providers already provide.



Just 4 Basic Business Models for Tier-One Service Providers

Just four fundamental business models lie ahead for global communications service providers, say researchers at Booz & Company. Half the models essentially are wholesale in orientation; one requires global operations and one is the traditional model, but with operators moving further up the value chain.

The "Network Guarantor" model has network infrastructure providers operating in a wholesale mode, providing other retail providers network services.

The "Business Enabler" is a mixed model, including both retail broadband services as well as wholesale broadband, managed services, transaction and billing support, and platforms such as hosting and cloud computing.

The "Experience Creator" is closest to the current retail model used by most service providers globally. Experience creators will look to move up the telecom value chain and provide end-users, both consumers and business customers, with the ubiquitous connectivity they demand, with targeted applications, fresh content, and a distinctive experience, and with the ability to create and distribute their own content.

The "Global Multimarketer" model is a retail model, but requires global operations and scale beyond a single nation.  Already, more than 75 percent of telecom subscribers in regions such as Europe and the Middle East are owned by global operators,” Booz & Co. says.

There are challenges for service providers in all four scenarios. The wholesale-only "Network Guarantor" model offers a trade-off. There are lower sales, marketing and customer service costs, since the products are wholesale, sold only to retailers, not to actual end users. On the other hand, gross revenues likely are smaller, and profit margins also will tend to be lower than is typical for retail operations.

To the extent that executives are worried about being reduced to the role of "low margin dumb pipe providers," this model guarantees it. In this scenario, service providers supply connectivity and other network services to all other retail entities, who have the actual relationship with end users. Also, companies that want to "own the customer relationship" will find this model unattractive, since it abdicates that role.

The "Business Enabler" model has a mix of advanages and disadvantages. It r. etains the retail role, implying both higher sales, marketing and customer support costs, but also higher gross revenue potential and higher margins for those retail services. But this model also includes sale of infrastructure services to third parties, on a wholesale basis.
That means there is inherently a possibility of channel conflict, as the service provider essentially competes with its wholesale customers in the retail market.

The Experience Creator model is the closest to today's model, where a service provider sells retail services to end users. It offers the least dramatic changes to the current model, but arguably also offers the smallest chance of dramatic changes in overhead and operating costs.

Also, to the extent that there is risk in moving into new roles within the application space, there is the danger of failure. This model virtually requires a more-active role in content and applications delivery, a terrain not historically favorable for telcos.

The Global Multimarketer role is most logical for larger, well-capitalized firms with some ability to leverage a strong brand in additional markets. This strategy probably is not viable for small national firms with weak brand name assets and small market capitalization. This strategy also is likely to prove attractive only for firms with the ability to provide mobile services.

In many ways the Global Multimarketer and Experience Creator models are mutually exclusive. Operators too small to operate globally are likely confined to their internal national markets. Likewise, the Experience Creator and Network Guarantor models are mutually exclusive. Only the Business Enabler strategy might be used by large and small service providers operating locally or globally.

LTE Will be Used as a Substitute for Fixed Service, In Many Cases

Though for the most part mobile voice services have been viewed as complementary to fixed voice, at least some providers have taken a direct "replacement" tack with their marketing. And that is likely to become more prevalent over time, as Long Term Evolution networks become more prevalent.

The classic historic example of a service marketed as an alternative to fixed line service is Cricket Wireless, which marketed itself as a local calling substitute for landline voice service, for more than a decade.

Since many observers have noted that a 4G wireless service might be a substitute for a fixed broadband connection, one wonders when, and if, one or more providers will try to carve out a niche for 4G as a "wireline replacement" service.

Clearwire has been the best example of that, up to this point. But Verizon Wireless clearly believes its Long Term Evolution network might be used in that way. And new provider FreedomPop plans to introduce a fixed access alternative, to complement its new mobile broadband service.

Wireless likely will not be so workable a replacement for multi-person households or households that watch lots of online video. But 4G wireless might be a perfectly workable solution for single-person households, or households of unrelated persons, typically younger, who use about an average amount of data each month.

Some estimates peg "average" household consumption per month at about 12 gigabytes. But that is misleading. The "mean" or "average" includes consumption by very-heavy users who are a minority of all users. The "median" gives a better sense for "typical" usage.

According to Sandvine, for example, the median North American user consumed about four gigabytes per month on a fixed connection. If that remains roughly the case, then wireless is going to be a viable substitute product for many.
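
The mean-versus-median gap is easy to see with a toy distribution; a minimal sketch using hypothetical household usage numbers (not Sandvine data), in which two heavy video users pull the mean up to the oft-cited 12 gigabytes while the median stays near four:

```python
import statistics

# Hypothetical monthly usage (GB) for ten households: mostly light users,
# plus two heavy video streamers who skew the distribution.
usage_gb = [2, 3, 3, 4, 4, 4, 5, 6, 40, 49]

print(f"Mean:   {statistics.mean(usage_gb):.0f} GB")    # 12 GB, the "average"
print(f"Median: {statistics.median(usage_gb):.0f} GB")  # 4 GB, the "typical" user
```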

To be sure, consumption tends to grow over time. But current typical consumption is not all that intensive.

The Future is Here: Users Have More than One Voice Provider

The prediction that a consumer or business user of voice services will use multiple suppliers, rather than choosing a single provider, seems already to be a reality. In the past, executives including Ken Paker, TDS VP, have argued that, in the future, most end users will have multiple voice providers.

In that analysis, maybe one provider supplies what we now think of as "fixed line voice." Perhaps another supplies mobile voice, while others provide PC voice, or voice within the context of Internet applications. You might argue that already is the way most people actually use voice services and applications.

U.S. mobile penetration, for example, now is at 101 percent, meaning there are more mobile devices in use than there are people in the United States, according to the latest tally from CTIA-The Wireless Association.

At the same time, hundreds of millions of people routinely use over the top services such as Skype or Google Voice, in particular situations.

And a study recently estimated that 45 percent of smart phone users avail themselves of over-the-top messaging apps, largely in addition to their use of carrier-provided text messaging.


FTTH Business Case Worse than it Used to Be?

Is the investment case for fiber-to-the-home networks getting more challenging? Yes, Rupert Wood, Analysys Mason principal analyst, has argued.

A shift of revenue, attention and innovation to wireless networks is part of the reason.

But the core business case for triple-play services is becoming more challenging as well.

All of that suggests service providers will have to look outside the traditional end-user services area for sustainable growth.

Many believe that will have to come in the form of services provided to business partners who can use network-provided information to support their own commerce and marketing efforts.

Those partners might be application developers, content sites, ad networks, ad aggregators or other entities that can partner with service providers to add value to their existing business operations.   

Current location, type of device, billing capabilities, payment systems, application programming interfaces and communication services, storage services, profile and presence information might be valuable in that regard.   

Fiber to the home long has been touted by many as the "best," most "future proof" medium for fixed access networks, at least of the telco variety.

But not by all. Investment analysts, virtually all cable executives and many telco executives also have argued that "fiber to the home" costs too much.

Over the last decade or so, though, something new has happened. Innovation, access, usage and growth have shifted to wireless networks. None of that is helpful for the FTTH business case.

That is not to say broadband access is anything but the foundation service of the future for fixed-network service providers.

Fixed networks in all likelihood always will provide orders of magnitude more usable bandwidth than wireless networks.

The issue, though, is the cost of building new fiber networks, balanced against the expected financial returns. “FTTH is often said to be ‘future-proof’, but the future appears to have veered off in a different direction,” says Wood.

Regulatory uncertainty, the state of capital markets and executive decisions play a part in shaping the pace of fiber deployment. But saturation of end user demand now is becoming an issue as well.   

The basic financial problems include competition from other contestants, which lowers the maximum penetration an operator can expect. FTTH has to be deployed, per location. But services will be sold to only some percentage of those locations. There is a stranded investment problem, in other words.   
The other issue is that the triple-play services bundle is itself unstable. An FTTH network is not required to provide legacy voice services; the existing networks work fine for that purpose.

One can argue that broadband is needed to provide the next generation of voice (VoIP or IP telephony), but demand for fixed-line voice has been dropping for a decade. So far, there is scant evidence that VoIP services offered in place of legacy voice have raised average revenue per user.

Most observers would note the trend goes the other way: in the direction of lower prices.   And though entertainment video services offer a clear chance for telcos to gain market share at the expense of cable operators, there is at least some evidence that overall growth is stalling, limiting gains to market share wins.   

Broadband access also is nearing saturation, though operators are offering higher-priced new tiers of service that could affect ARPU at some point. So the issue is that the business case for FTTH has to be carried by a declining service (voice), a possibly-mature service (video) and a nearly-mature service (broadband access).   

And then there is wireless substitution. Fixed-line voice already is being cannibalized by mobile voice. Some observers now expect the same thing to start happening in broadband access, and many note new forms of video could displace some amount of entertainment video spending as well.   The fundamental contradiction is that continued investment in fixed-line networks, which is necessary over time, occurs in a context of essentially zero growth.   

Atlantic-ACM, for example, has predicted that U.S. wireline network revenue, overall, between now and 2015, will be flat at best. Compound annual growth rates, in fact, are forecast to be slightly negative, at about 0.3 percent. Total industry revenue was about $345 billion in 2009; by 2015, revenue will be about $337 billion, Atlantic-ACM predicts.
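
The arithmetic behind "slightly negative" is a simple compound annual growth rate calculation; a minimal sketch using the two Atlantic-ACM endpoints cited above:

```python
def cagr(start_value: float, end_value: float, years: int) -> float:
    """Compound annual growth rate between two values `years` apart."""
    return (end_value / start_value) ** (1 / years) - 1

rate = cagr(345e9, 337e9, 2015 - 2009)
print(f"CAGR 2009-2015: {rate:.2%}")  # roughly -0.4% a year: flat at best,
# in the ballpark of the slightly negative figure cited above.
```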

When Does Functional Separation or Structural Separation Make Sense?

Analysts at the IBM Institute for Business Value argue that some form of functional or structural separation offers the highest revenue growth potential for telcos in the developed world, with the highest returns from structurally separated operations where network services are spun off into a wholly separate company, and all retail providers buy services from the wholesale entity.

Those conclusions are not widely shared in the North American market, it is fair to say. But it remains to be seen whether the analysis might someday make more sense.

The worst returns come from a strategy of sticking with the closed, voice-centric model, with better returns if operators are able to partner with device, application and infrastructure providers to create new services.

But operators do best where they separate their network service operations from retail operations, either to allow retail units to concentrate on vertical markets and customer segments, or by a more-robust effort to attract third parties to use the network resources.

In other words, some forms of functional separation--running the network business separately from the retail business--might make more sense, and lead to greater revenue success, than keeping the functions combined.

In some other cases, service providers might decide to essentially get out of the networks business, to focus on retail operations. That has seemed to make sense in Southeast Asia markets where the costs of next generation investment are deemed to offer financial reward less than if capital were deployed in other ways, especially in mobile services or out of region expansion.

In "Point Deployment" of UC the Wiser Choice?

There are lots of reasons why industry buzz words change, just as there are lots of reasons why much-touted products fail to achieve immediate success. Consider "unified communications" and "collaboration."

Sometimes the problems are much-tougher than a "simple" technology substitution might imply. And that should raise immediate questions--and suggest obvious answers--for service providers selling unified communications, IP telephony and collaboration services to end users.  

So here's your test: ask yourself whether you could still get your job done if you did not have access to an IP phone, unified messaging, find-me/follow-me, visual voice mail, desktop conferencing or even the ability to launch communications from inside your organization's key business processes and software.

I suspect our answers would be the same. It might take longer to get things done, but they would get done. But ask whether you could get your job done without any Internet access at all, without a mobile phone, without a phone of any type. 

Somewhere in here, many of us would begin to say "no, we can't get our jobs done." So that's the issue with unified communications and collaboration: how much is "nice to have" and how much is "need to have"? 

And since "wants" become "needs" over time, how do you position a "needed" service rather than a "wanted" feature. In fact, the difference between wants and needs probably is why lots of organizations have deployed some elements, but not a complete unified communications or collaboration capability. 

That might suggest a key reason why "point" deployment of UC and collaboration has made more sense than the grand vision. That, and cost. 

The problem with grand information technology visions is how expensive they are, how difficult they are to implement and how often they fail. IT architects likely are justifiably cautious about grand visions for this reason. 

For observers who would argue that unified communications has been something of a tough sell, that might be one reason for slower than expected adoption. "To realize the full benefits of UCC requires a major change management effort spanning not only technology but people and processes," says Jim Mortleman, Silicon.com writer. 

"In addition, a confusing array of technologies, approaches and vendors, as well as a current lack of standards, leads many to conclude it's better to wait until the market settles down before placing their bets," he has said. 

"Most CIOs I know are considering  projects rather than doing them," says Nick Kirkland, CIO Connect  

Still, many would say that human behavior plays some significant role in shaping or slowing deployments as well. 

To the extent that UC changes the way people can communicate, it may also require change of the way people do communicate. It isn't simply the technological means, but the broader potential shift within an organization of relations between people. 

I suspect it remains true that some organizations are more trusting than others. Some have greater confidence in flatter, more open organizational styles. Some organizations talk about "empowering people," but don't act that way. 

Others do. I suspect any grand vision for collaboration and UC will provide the greatest leverage only within organizations whose people can actually take advantage of the new capabilities.

"It's not a matter of tools, it's a matter of working processes, the way individuals communicate, the way they're managed, how the organization is structured, the mechanics of workflow and so on," says Rob Bamforth, principal analyst at Quocirca.

"All of these things have to be understood first," he says. 

If those are key prerequisites, it should be obvious why collaboration and UC visions have been adopted more slowly than one might have projected. 

Ultimately, most companies aren't going to reengineer their processes just to accommodate some new technology, one might argue. 

And if that is the case, incrementalism and point solutions will be easier to adopt than a whole new UC or collaboration architecture. That is especially true for smaller organizations, which in fact have no grand vision and are going to be eminently practical about tools they can use now without requiring lots of training or disrupting the way they do things. 

"Faster," "easier," "better" or "cheaper" are likely to be the winning approaches to establishing value. If travel expenses are the problem, then videoconferencing could be the solution. 

No grand change is required, just a simple adoption of a new tool to replace another. Even there, though, some users report there are human issues beyond use of desktop conferencing technology, for example. 

The mechanics of scheduling meetings and social etiquette around meetings can become issues. Many of the same issues likely occur when introducing other UC and collaboration tools as well. 

So the upshot is that if one is committed to a grand move to enterprise-wide UC and collaboration, one had better also have absolute top-level commitment to push through all of the human changes that will be required, and also already possess a business culture that is open, participatory, flexible and amenable to change. Failing that, the vision will fail outright, or simply provide some incremental benefits whose cost far outweighs the gains.

How Many Carriers are Sustainable, for U.S. Mobile Business?

What is a sustainable market share structure for the U.S. mobile business? Nobody believes the mobile business will be as competitive as retail apparel, for example. But few probably believe a duopoly or monopoly is a good idea, either. The issue is what market structure might yield the greatest amount of competition. Some might call that a contestable market.

And though economic analysis normally assumes that it is the interests of buyers that matter, the interests of providers also matter. If a provider cannot stay in business, it cannot provide competition. So the structure of markets, in particular the market’s ability to sustain multiple players over time, does matter.

In fact, some might argue that if a greater degree of market share could be garnered by T-Mobile USA and Sprint, that would be a pro-competitive move.

And, to be sure, most observers no longer believe, as once was the prevailing view, that telecommunications is a natural monopoly.


But neither do most observers believe an unlimited number of facilities-based providers can exist. So the question is how many leading providers are sustainable, over the long term, providing both reasonable competition and reasonable financial sustainability.

As a rough rule of thumb, assuming market share relates in some relatively linear way to profits, some of us would argue that market share rank affects profit margin, and to a large extent the magnitude of profits, by about a 50-percent rule. In other words, whatever the profits of the number-one provider, the number-two provider (by market share) has about half the level of profits as the number-one provider.

The number-three provider would have roughly half the profits of the number-two provider; while the number-four provider would have roughly half the profits of the number-three provider. It's a rough rule of thumb, but tends to be a workable hypothesis in most markets. 
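
Expressed as a formula, the rule says the rank-n provider earns roughly profit(1) x 0.5^(n-1). A minimal sketch of the arithmetic, with the top provider's profit normalized to 100 (an arbitrary illustration, not actual carrier data):

```python
def profit_by_rank(top_profit: float, rank: int) -> float:
    """Rough '50-percent rule': each rank earns about half the profit of the rank above."""
    return top_profit * 0.5 ** (rank - 1)

for rank in range(1, 5):
    print(f"Rank {rank}: {profit_by_rank(100, rank):.1f}")  # 100.0, 50.0, 25.0, 12.5
```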

In a business with high fixed costs, the number of providers that can exist will be smaller than in markets with low fixed costs. And there is no doubt that mobile networks have high fixed costs; not as high as fixed networks, but substantial nevertheless.

That does raise the issue of how many competitors can remain in business, long term. The existence of wholesale makes a difference, as many competitors might carve out niches based on use of wholesale mechanisms, which have the practical advantage of offering lower effective fixed costs. 

So a reasonable person might argue for a sustainable market structure featuring a small number of facilities-based retailers, plus some number of wholesale-only providers, and then many small retailers using those wholesale facilities. In any case, one would expect a rather limited number of leading providers.

Economic models are all about the assumptions, and that applies to analyses of what should happen as additional spectrum is made available to U.S. wireless providers. Specifically, policymakers looking after the "public welfare" must make choices that could affect the amount of consumer benefit.

The problem, as with virtually everything in the global mobile business or the global fixed network business, is the business terrain between monopoly on one hand and multiplicity on the other. Most policymakers globally have concluded that monopoly is, in fact, a poor way to encourage innovation, efficiency and lower prices.

On the other hand, a simple spreadsheet exercise will be enough to convince anyone that the mobile or fixed network communications business, when conducted in a facilities based way, simply cannot support lots of contestants.

Whatever you might suppose total demand is, when multiple providers start to divide up that demand, markets can become ruinous, meaning no contestant gets enough market share and revenue to sustain itself.

The Phoenix Center for Advanced Legal & Economic Public Policy Studies long has argued that the sustainable number of network-based contestants in either the wireless or fixed network business will be limited to just a few firms, for this reason.

Phoenix Center Chief Economist George Ford now argues that consumers actually would be better off if any future wireless spectrum auctions allow all wireless providers to bid, rather than trying to ensure that spectrum assets are allocated more broadly.

This might seem counter-intuitive. If competition is better than a monopoly, shouldn't broader spectrum awards create more competition, and therefore lead to more innovation and lower retail prices?

That's the argument the Phoenix Center takes on in a study. There are two key assumptions.

"First, we assume that price falls as the number of competitors increases (e.g., the Hirschman Herfindahl Index or “HHI” falls)," says Ford. "More formally, we assume Cournot Competition in Quantities."

In other words, the Phoenix Center uses the same framework as the Federal Communications Commission and the Department of Justice when it comes to assessing market concentration and the impact of competition on retail prices.
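
The first assumption has a textbook closed form: with linear demand P = a - bQ and N symmetric firms at constant marginal cost c, the Cournot equilibrium price is P* = (a + Nc) / (N + 1), which falls toward cost as N grows. A minimal sketch with made-up numbers (a = 100 and c = 20 are illustrative assumptions, not Phoenix Center inputs):

```python
def cournot_price(n_firms: int, demand_intercept: float, marginal_cost: float) -> float:
    """Equilibrium price for N symmetric Cournot firms with linear demand P = a - bQ:
    P* = (a + N*c) / (N + 1), independent of the demand slope b."""
    return (demand_intercept + n_firms * marginal_cost) / (n_firms + 1)

for n in (1, 2, 4, 8):
    print(f"N={n}: equilibrium price = {cournot_price(n, 100, 20):.1f}")
# Prints 60.0, 46.7, 36.0, 28.9: price falls toward marginal cost (20)
# as the market fragments, i.e., as concentration (and the HHI) falls.
```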

A second key assumption is important, though. The Phoenix Center does not assume that the amount of capacity derived from spectrum is linearly related to the amount of spectrum a firm has.

That is, if we double the amount of spectrum, then the capacity provided to a firm from that additional spectrum more than doubles. That might be a head turner, at first. After all, are we not dealing here with laws of physics?

My apologies to Dr. Ford if I misapply the assumption, but here's how I'd explain it.
Yes, laws of physics do apply. But wireless networks routinely "re-use" spectrum. A single physical allotment can be used repeatedly across a network, with a primary determinant being the coverage size of each cell. Lots of smaller cells can use a single amount of frequency more efficiently than a few big cells.

But cutting the cell radius by 50 percent quadruples the number of required cells. And since each cell represents more capital investment, you see the issue. Spectrum does not linearly relate to effective end user bandwidth. The amount of actual bandwidth a network can provide is related to the amount of spectrum re-use.

"Richer" providers can better afford to create the denser smaller cell networks, so can provide more bandwidth from a fixed amount of spectrum.

"Wireless Competition Under Spectrum Exhaust" provides the detailed model, but the point is that a smaller number of new spectrum recipients creates more effective end user bandwidth than a larger number of new recipients. That seems counter to reason, and the analysis is important for suggesting the "common sense" understanding is wrong.

The important public policy implication is that rules to "spread the spectrum awards to more providers" have a negative impact on end user pricing. In fact, a more concentrated distribution should lead to increases in supply that more effectively lead to lower prices.

It is not what most might assume is the case. The policy implication is that it is not helpful to restrict the ability of any contestants, especially the dominant contestants, to acquire more spectrum in new auctions.

One might note that bidding rules in some countries, such as Germany, do in fact limit the amount of spectrum the dominant providers can acquire. Though the Phoenix arguments are about upcoming policy for U.S. spectrum auctions, the same analysis should apply in all markets.



The issue at hand for antitrust regulators at the Department of Justice, when evaluating the consumer impact of an AT&T acquisition of T-Mobile USA, is whether the deal would exceed a rule of thumb about market concentration. Some would argue that, no matter what happens with this particular deal, concentration in the mobile business will continue. See the DoJ guidelines for more on the algorithm.
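
For reference, the screen in the 2010 DoJ/FTC Horizontal Merger Guidelines works off the Herfindahl-Hirschman Index, the sum of squared market shares: a post-merger HHI above 2,500, combined with an increase of more than 200 points, triggers a presumption that the deal is likely to enhance market power. A minimal sketch with illustrative shares (not actual 2012 figures):

```python
def hhi(shares_percent):
    """Herfindahl-Hirschman Index: sum of squared market shares (in percent)."""
    return sum(s ** 2 for s in shares_percent)

# Illustrative national shares for four large carriers (smaller carriers omitted;
# they would raise both HHI levels equally without changing the increase).
pre_merger = [32, 30, 16, 12]
post_merger = [32 + 12, 30, 16]  # the No. 1 carrier absorbs the No. 4

increase = hhi(post_merger) - hhi(pre_merger)
print(f"Pre: {hhi(pre_merger):.0f}, Post: {hhi(post_merger):.0f}, +{increase:.0f}")
# Post-merger HHI above 2,500 and an increase above 200 points would trigger
# the Guidelines' presumption of enhanced market power.
```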

Is Private Equity "Good" for the Housing Market?

Even many who support allowing market forces to work might question whether private equity involvement in the U.S. housing market “has bee...