Monday, February 8, 2016

Better to be "Lucky" Than "Good" When Stimulating Competition

“Better to be lucky than good (or smart),” an adage suggests. Some might say that also applies to the main thrust of communications policy in the U.S. and, perhaps, other markets, which is to encourage competition.

One might argue that the Telecommunications Act of 1996, the most sweeping revision of the U.S. communications framework since 1934, failed, and yet competition has grown. The Internet is the reason.

Where the Act envisioned unbundling and voice competition, we now are in an era where Internet-based apps and services, not voice, are at the forefront.

In other words, “competition in voice services” essentially was the problem to be solved, just as the rise of the Internet was about to make all that strategically unimportant.


U.S. policy further has emphasized facilities-based competition, initially on the assumption that unbundling and wholesale access would create the basis for sustainable investment in new platforms.

The apparent model was the experience with long distance service, where competition began with unbundling and then led to facilities investment.

Perhaps nobody could have foreseen the rise of the Internet and the collapse of the independent long distance market, which once was the cash cow for the whole industry, but now exists largely as a “feature,” not a product.

So far, what has happened is that cable TV networks were upgraded and repurposed to function as full-service telecom networks, and that much demand has shifted to mobile networks.

That is the "luck" part of policy. To the extent competitive policies have worked, it has been because of unforeseen developments, not the direct result of policy intervention.

“Local competition in the U.S., it turns out, was not the result of new entrants constructing new plant, but from the repurposing of the embedded cable television plant and the migration of many households to the exclusive use of mobile wireless services,” argue George Ford, Phoenix Center for Advanced Legal & Economic Public Policy Studies chief economist, and Larry Spiwak, Phoenix Center president.

Unbundling has arguably not led to much new facilities investment outside cable TV and mobility, with one possibly important new exception: Google Fiber, which has spurred a rethinking of what is sustainable for many smaller Internet service providers.

Still, it is noteworthy that local access no longer is considered a “natural monopoly” in the U.S. market.

But the larger issue is what to do next.

How Does Network Density Affect the Old Debate About Wi-Fi as a Rival Access Platform?

Time and Moore’s Law can change the strategic context within which decisions about business strategy are made.

Many note how the cost to start an app company has declined by an order of magnitude or more since the turn of the century, a direct reflection of Moore’s Law improvements as well as the commercialization of cloud computing platforms. Since 1992, for example, the cost of computing or storage has declined by five orders of magnitude.

That means the information technology cost to create a startup is easily an order of magnitude or even two orders of magnitude less than it was in 2000. Granted, IT costs are only part of the cost of creating a company and building its products.
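
As a back-of-the-envelope check (a sketch only; the 1992-to-2016 date range is an assumption drawn from the text above), the implied annual rate of decline is easy to compute:

# A rough sketch: if the cost of computing or storage fell by five
# orders of magnitude between 1992 and 2016 (dates assumed from the
# text), what annual rate of decline does that imply?
years = 2016 - 1992                      # 24 years
total_decline = 10 ** -5                 # five orders of magnitude
annual_factor = total_decline ** (1 / years)
print(f"Costs fall to about {annual_factor:.0%} of the prior year's level")
# Prints roughly 62%, an annual decline of about 38 percent.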

The point is that, for a variety of reasons, app development has been democratized by cloud computing, Moore’s Law and widespread consumer use of cloud-based apps.

In a similar way, thinking about how to use mobile networks and Wi-Fi networks to support mobile services has changed, and will change more as new spectrum is released to support Internet usage, and as access methods broaden.

Broadly speaking, we have been debating for decades whether Wi-Fi can be an effective substitute for mobile network access, and under what conditions.

But Moore’s Law, new spectrum and new access methods continue to change the context. Put simply, all the trends point to denser wireless access networks, whether mobile or untethered (Wi-Fi).

Denser networks change the value of each platform. And both mobile and Wi-Fi networks are becoming denser.

So even if denser Wi-Fi means a better ability to satisfy a wider range of connectivity needs, mobile networks also are becoming denser, potentially reducing the value of using Wi-Fi, from an end user perspective.

Also, as it becomes easier for mobile network bandwidth to be bonded with Wi-Fi, the value of both networks will tend to rise, in tandem. In other words, better Wi-Fi means better mobile access. But it might not be the case that better mobile networks mean one-for-one increases in the value of Wi-Fi.

Higher frequencies for mobile access (millimeter wave, for example) mean mobile networks will be denser.

That very density, plus the bandwidth gains (higher frequencies carry more bits because far more spectrum, measured in Hertz, is available in those bands), means Wi-Fi is helpful, but less useful as a complete alternative access platform.
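
A rough way to see why more Hertz means more bits: Shannon channel capacity scales linearly with bandwidth. A minimal sketch, with channel widths and signal-to-noise ratio chosen purely for illustration (20 MHz is typical of a low-band LTE channel; 400 MHz channels are plausible at millimeter wave frequencies):

import math

def shannon_capacity_bps(bandwidth_hz, snr_db):
    # Shannon capacity: C = B * log2(1 + SNR), with SNR in linear terms.
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + snr_linear)

for bw_mhz in (20, 400):
    capacity_mbps = shannon_capacity_bps(bw_mhz * 1e6, snr_db=20) / 1e6
    print(f"{bw_mhz} MHz channel: about {capacity_mbps:,.0f} Mbps")
# Capacity scales linearly with Hertz: 20x the bandwidth, 20x the bits.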

It remains unclear how network densification affects the value of Wi-Fi as a rival access platform, compared to mobile. Arguably, mobile operators will be able to monetize Wi-Fi more easily than Wi-Fi providers can monetize mobile.



Bandwidth costs have declined by two orders of magnitude since 1999, as well, and some bandwidth costs have dropped even more.

As surprising as it seems, U.S. Internet service provider Comcast, now the biggest supplier in that market, has doubled the capacity of its network every 18 months.

That is as fast as Moore's Law might suggest is possible, though most might not believe it possible in the capital-intensive and labor-intensive access network business.

In other words, Comcast has increased capacity precisely at the rate one would expect if access bandwidth operated according to Moore’s Law.
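
Doubling every 18 months compounds dramatically. A quick sketch of the arithmetic (the horizons are arbitrary illustrations, not claims about Comcast's actual history):

# Capacity that doubles every 18 months (1.5 years) grows by a
# multiplier of 2 ** (t / 1.5) after t years.
for years in (5, 10, 15):
    multiplier = 2 ** (years / 1.5)
    print(f"After {years} years: about {multiplier:,.0f}x starting capacity")
# After 15 years the multiplier is about 1,000x: three orders of magnitude.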

And that also means suppliers of apps can reach more people than ever, more affordably than ever, to the benefit of consumers as well.

TRAI Rules "No Free Basics" in India

In a move that comes as no surprise, the Telecom Regulatory Authority of India has ruled that programs such as Facebook’s “Free Basics” are covered by rules related to non-discriminatory tariffs.

For that reason, TRAI has banned such programs in India. Simply put, Internet service providers cannot offer free access to packages of applications curated by the ISP.

In doing so, TRAI has framed this aspect of the network neutrality debate as a matter of common carrier tariffs, rather than as a matter of content freedom.

“No service provider shall enter into any arrangement, agreement or contract, by whatever name called, with any person, natural or legal, that has the effect of discriminatory tariffs for data services being offered or charged to the consumer on the basis of content,” the TRAI decision says.

As has been the case elsewhere, though, a distinction is made between managed services and “Internet” apps. “This regulation shall not apply to tariffs for data services over closed electronic communications networks,” the decision states.

As has been the case in other countries, Indian regulators frame their concerns about “no charge” access to some applications as a tariff fairness issue.

In essence, the argument is that programs such as Free Basics create favored packages of content assets. In other settings, as TRAI essentially notes, that would not be an issue. TV broadcasters, radio broadcasters and others have editorial discretion when it comes to the content they wish to broadcast or publish.

In this case, TRAI essentially deems the “level playing field” more important than other values, including the obvious benefit of allowing more people of low income to use mobile Internet apps.

Lady Gaga Owns National Anthem



Totally, totally nailed it. 

Sunday, February 7, 2016

50

Thanks for the ride, Peyton

Friday, February 5, 2016

"3 or 4" for Mobile, "2 or 3" for Fixed Networks Now Are the Key Numbers

In most mobile markets, the key numbers are "three or four," representing the number of sustainable operations. In some fixed markets, the relevant numbers might now be "two or three," likewise representing the number of viable facilities-based competitors.

Technologists, service provider executives and analysts have endlessly debated the “cost” of deploying high-capacity networks of several types for a few decades. Over those decades, the cost parameters have changed.


Fiber to home and digital subscriber line hardware have gotten more affordable. Cable TV DOCSIS platforms have vastly improved. Now there are other potential options.


Internet access by balloon, unmanned aerial vehicles, fifth generation mobile networks, fixed wireless (TV white spaces, for example), Wi-Fi hotspots, municipal networks and new constellations of low earth orbit satellites are some of the reasons to argue that Internet access business models are liable to be redefined over the next decade.

On the other hand, other key business model inputs have not changed very much. The network cost (outside plant) still appears to hover around $1,500 per location, including construction and hardware.


Active elements still cost about $280 per location.


Activated drops and customer premises equipment costs remain substantial. CPE alone might represent costs of $455 or so per active customer location. Installing an actual drop for a customer can cost $300.


Taken together, the cost of an FTTH build in a medium-sized U.S. city, for example, might be about $1,780 per location. The cost to connect a paying customer adds another $755.


So it still can take $2,535 in network-related costs to serve a customer. Marketing costs have to be added on top, as well. New attackers likely can figure out ways to spend less than the $850 to $2,000 tier-one competitors typically spend (service discounts plus out-of-pocket costs) to acquire a new account.






But business models are even more sensitive to take rates, in turn driven in large part by the number of locations served by a competitor that cannot be dislodged.


In other words, in markets where two other competitors--both accomplished--continue to hold about 60 to 66 percent customer share, per-customer costs are substantially higher than per-location costs.


If per-location fixed cost is $2,535, and take rates are 33 percent, then network cost per paying customer is $7,605. After drop costs and CPE, per-customer cost (without marketing) is $8,360.


If per-month revenue is $100, it takes more than seven years to recover network costs. Few, if any, private firms would undertake such an endeavor, given those obstacles.


The business model works better in markets where just one serious competitor operates (a cable operator, for example), allowing the attacker to contemplate take rates more along the lines of 50 percent. That reduces per-customer network cost to $5,070, with total connection cost of $5,825.


That variance is possible because, in some markets, incumbents might be very vulnerable to a challenge. They might not be able to afford to reinvest in their own networks.


In other cases they might prefer taking a big share loss to reinvesting at the level required to blunt the attack (some companies focus on major urban markets and are willing to lose rural or low-density markets).


Higher market share (50 percent rather than 33 percent) reduces the break-even point on network investment by about two years. That still is a tough hurdle, though, with nearly five years required just to recover installed network costs.


Boosting average revenue per account therefore is a key strategy for quickening the payback. If per-account revenue is $150, instead of $100, break-even times shrink. At 33 percent take rates, and $150 monthly revenue per account, break even on network costs comes in less than five years.


At 50-percent take rates, and $150 monthly revenue per account, break even on the network investment can come in a bit more than three years. That is a workable business model for many firms.
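
The scenarios above can be reproduced with a few lines of arithmetic. A minimal sketch using the figures already cited ($2,535 per-location network cost, $755 in drop and CPE costs per customer), ignoring marketing, operating expense and churn:

def payback_months(take_rate, monthly_revenue,
                   per_location_cost=2535, connect_cost=755):
    # Per-location network cost is recovered from paying customers only.
    network_per_customer = per_location_cost / take_rate
    return (network_per_customer + connect_cost) / monthly_revenue

for take_rate in (1 / 3, 0.50):
    for revenue in (100, 150):
        months = payback_months(take_rate, revenue)
        print(f"take rate {take_rate:.0%}, ${revenue}/month: "
              f"{months:.0f} months ({months / 12:.1f} years)")
# A 33% take rate at $100 a month is about seven years; 50% at $150 is
# a bit more than three years.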


One new approach, in that regard, is to strand fewer assets, building only in some neighborhoods, for example. That also lowers overall capital investment, since a smaller network is built.


A 2016 DISCUS Project white paper on high speed access network business models reviewed models where more than one physical network operated, asking the key question: how many such networks are sustainable?


The conclusion, as you would expect, is that few such networks are sustainable in any given region or market: “two or three.” The important number is “three.” Under some conditions, three facilities-based competitors might be sustainable, where today the number is “two.”


Where the contestants are private firms, sustainable operations are possible in dense urban areas, for example, or where one existing incumbent cannot, or will not, upgrade to match the attack.

The point is that, as hard as the business model might yet be, the business case for new gigabit networks--even in markets where telcos and cable TV already operate--might well be improving to the point where, in some markets, three facilities-based competitors can sustain themselves.

Thursday, February 4, 2016

Gigabit Access Business Models are Ripe for Innovation

Business models for gigabit Internet access networks are in a very fertile phase right now.

Old assumptions about what is possible are being redefined as new networks, first from Google Fiber, and now from many independent ISPs and a growing number of private efforts supported by local government assets, are launched or considered.

Before the process plays out substantially, we also are likely to discover that incumbents and attackers alike have redefined business models in substantial ways. Surviving incumbents will have taken out more costs than they believed possible.

Attackers will have established that low operating costs make the difference in the business model. And, along the way, new revenue streams are likely to emerge as crucial inputs.

It might finally be possible to monetize mobility services using Wi-Fi hotspots supported by such networks, for example.

A 2013 DISCUS Project white paper on high speed access network business models evaluated possible business models where the key variables were where and what to bundle, and what to provide on a wholesale basis.

Another more-recent paper reviewed models where more than one physical network operated, asking the key question: how many such networks are sustainable?

The conclusion, as you would expect, is that few such networks are sustainable in any given region or market: “two or three.”

Where the contestants are private firms, sustainable operations are possible in dense urban areas.

The study reached no conclusions about public access networks built and owned by local government units.

In recent days, there have been some new innovations. Among the more interesting developments is the Google Fiber “fiberhoods” approach, where no construction occurs until a minimum number of customers sign up.

The fact that regulators allow such approaches is important, allowing service providers to invest only where there is enough demand to justify new gigabit connections.
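
The logic of demand aggregation is simple to sketch: a pre-registration threshold caps the worst-case network cost per paying customer before construction begins. The neighborhood size and threshold below are assumptions for illustration, and the per-location cost reuses the figure discussed in an earlier post:

# Hypothetical "fiberhood" build decision: construct only if enough
# households pre-register, bounding per-customer network cost up front.
HOMES_PASSED = 400               # assumed neighborhood size
PER_LOCATION_COST = 1780         # per-location figure cited earlier
SIGNUP_THRESHOLD = 0.40          # assumed minimum pre-registration rate

def build_decision(signups):
    take_rate = signups / HOMES_PASSED
    if take_rate < SIGNUP_THRESHOLD:
        return f"{take_rate:.0%} signed up: do not build"
    per_customer = PER_LOCATION_COST * HOMES_PASSED / signups
    return f"{take_rate:.0%} signed up: build, ${per_customer:,.0f} per customer"

for signups in (100, 160, 240):
    print(build_decision(signups))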

The new cable TV DOCSIS 3.1 platform also is important, as it allows delivery of gigabit services across the entire customer base, without a retrofit of the physical plant.  

For new housing developments, bundling network cost with home purchase prices in greenfield developments might be possible in some cases.

But most potential fiber-to-home connections are going to be retrofits, where that approach is not possible.  

So it is noteworthy that a growing number of smaller Internet service providers and local governments are actively exploring, or launching, gigabit access networks, generally privately operated and funded, but with governments contributing some assets.

Just how important such efforts ultimately will be cannot yet be predicted. Still, it is almost shocking that networks as extensive as those built by Google Fiber, or as small as the smaller-town networks being built by small ISPs, are suggesting the business case for gigabit networks can work, even in markets where telcos and cable TV already operate.

VoLTE Call Drop Problem is Worse Than 2G or 3G Voice

In its annual State of the RAN report, Amdocs says VoLTE can lead to call drop rates 400 percent to 500 percent higher than those of 2G and 3G voice.

With aggressive tuning of the network, operators can bring VoLTE call drops to within 20 percent of 2G and 3G calling within six months, Amdocs argues.

In other words, as so often is the case, there is good news and bad news with the deployment of the new capability.

VoLTE offers lower operating expense, as it uses a more efficient coding scheme. And when operators can reallocate 2G or 3G bandwidth for other purposes, VoLTE can help operators keep capital investment under better control, principally by delaying the point where additional spectrum acquisitions or network redesign must be undertaken.

Part of the problem with VoLTE call drops may be indoor coverage. Some 75 percent of network traffic in cities is indoors.

That is a potential problem for networks using any frequencies, but much more troublesome for higher frequencies (2 GHz, for example, compared to 600 MHz, 700 MHz or 800 MHz).
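
Free-space path loss illustrates the frequency penalty: loss rises 20 dB for every tenfold increase in frequency, before any building-penetration loss (which also worsens with frequency) is added. A minimal sketch, with distance and frequencies chosen for illustration:

import math

def free_space_path_loss_db(distance_km, freq_mhz):
    # FSPL(dB) = 20*log10(d_km) + 20*log10(f_MHz) + 32.44
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

for freq_mhz in (700, 2000):
    loss = free_space_path_loss_db(1.0, freq_mhz)
    print(f"{freq_mhz} MHz at 1 km: {loss:.1f} dB")
# 2 GHz suffers roughly 9 dB more loss than 700 MHz at the same distance.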

Compared to outdoor users, indoor users face a 25 percent increase in network problems in periods of high demand, Amdocs argues.

Global Mobile Revenue Will Be Flat in 2016

Growth remains the chief challenge for mobile and fixed network telecommunications services in 2016.

Strategy Analytics, for example, predicts there will be no growth in core mobile connectivity revenue in 2016.

So we now can add mobile service revenues to the list of legacy products that have become exhausted as drivers of revenue growth. That is one key reason why so many believe the Internet of Things is so important: it represents a possible huge new growth driver.




100 Million Gigabit Internet Access Connections in Service by 2020

At least 100 million people are expected to buy gigabit services by 2020, predicts Point Topic CEO Oliver Johnson. Between 2016 and 2020, adoption will grow at a sizzling 65 percent compound annual growth rate.

Close to 70 percent of that overall growth is expected to come from the Asia Pacific region, Johnson argues. If so, most will come from countries in Asia, rather than the Pacific Islands, one might well argue.

In many cases, that headline speed will mean even-higher adoption of access at speeds in the 100s of megabits range. A reasonable forecast would have about half of U.S. high speed access users buying 100 Mbps connections by about 2020.
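
Working backward from the forecast is a useful sanity check (a rough sketch, assuming the 65 percent rate compounds over the four years from 2016 to 2020):

# If gigabit subscriptions grow at a 65% CAGR from 2016 to 2020 and
# end at 100 million, what starting base does that imply?
cagr = 0.65
years = 4                        # 2016 to 2020, an assumption
multiplier = (1 + cagr) ** years
implied_base = 100e6 / multiplier
print(f"Four-year multiplier: {multiplier:.1f}x")              # about 7.4x
print(f"Implied 2016 base: {implied_base / 1e6:.0f} million")  # about 13 million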





"More" is the Story in Latest Cisco Mobile Visual Networking Index

“More” always is the story every time Cisco issues another edition of its Visual Networking Index, including the mobile VNI. As a financial analyst might summarize each report, the “trend is up and to the right.”

Global mobile data traffic grew 74 percent in 2015, reaching 3.7 exabytes per month at the end of 2015, up from 2.1 exabytes per month at the end of 2014.

Mobile data traffic has grown 4,000-fold over the past 10 years and almost 400-million-fold over the past 15 years.

More "what" tends to be the new part of the story.

Fourth-generation (4G) traffic exceeded third-generation (3G) traffic for the first time in 2015.

Although 4G connections represented only 14 percent of mobile connections in 2015, they already accounted for 47 percent of mobile data traffic, while 3G connections represented 34 percent of mobile connections and 43 percent of the traffic.

In 2015, a 4G connection generated six times more traffic on average than a non‑4G connection. That has been the clear trend since people started using smartphones on 3G and 4G networks.

In 2015, the average smart device generated 14 times more traffic than the average basic device.

Mobile offload traffic also exceeded traffic carried on mobile networks for the first time in 2015. In other words, the fixed network now directly supports the majority of mobile device data usage.

Some 51 percent of total mobile data traffic was offloaded onto the fixed network through Wi-Fi or femtocell in 2015.

As now is clearly the case, video drives data bandwidth consumption: mobile video accounted for 55 percent of total mobile data traffic in 2015, more than half of all mobile data traffic.

The fundamental projections also are a story of “more.” Global mobile data traffic will increase nearly eightfold between 2015 and 2020, growing at a compound annual growth rate (CAGR) of 53 percent.
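
Those two figures are internally consistent, as a quick check of the compounding arithmetic shows (five years, 2015 to 2020):

# A 53% CAGR compounded over five years should roughly match the
# "nearly eightfold" headline growth figure.
multiplier = 1.53 ** 5
print(f"Traffic multiplier, 2015 to 2020: {multiplier:.1f}x")  # about 8.4x
# The same arithmetic applies to smartphone traffic later in the report:
# 1.54 ** 5 is about 8.7x, close to the 8.8x Cisco cites.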

Mobile network connection speeds will increase more than threefold by 2020. Where the average mobile network connection speed was 2 Mbps in 2015, average speeds will reach nearly 6.5 Mbps by 2020.

By 2020, 4G will represent nearly 41 percent of connections, but 72 percent of total traffic.

By 2020, a 4G connection will generate 3.3 times more traffic on average than a non-4G connection.

Globally, 67 percent of mobile devices will be smart devices by 2020, up from 36 percent in 2015.

The vast majority of mobile data traffic (98 percent) will originate from these smart devices by 2020, up from 89 percent in 2015. Also, 75 percent of the world’s mobile data traffic will be video by 2020.

Mobile video will increase 11-fold between 2015 and 2020.

The average smartphone will generate 4.4 GB of traffic per month by 2020, nearly a fivefold increase over the 2015 average of 929 MB per month.

By 2020, aggregate smartphone traffic will be 8.8 times greater than it is today, with a CAGR of 54 percent.

The Middle East and Africa will have the strongest mobile data traffic growth of any region with a 71‑percent CAGR. This region will be followed by Asia Pacific at 54 percent and Central and Eastern Europe at 52 percent.

[Chart: Global Mobile Devices (Excluding M2M) by 2G, 3G, and 4G]


Wednesday, February 3, 2016

AT&T Launches Gigabit Service in Parts of Chicago, Dallas, Atlanta and Miami

AT&T has launched gigabit per second Internet access in parts of Chicago, Dallas, Atlanta and Miami.

In Chicago, service is available in Pingree Grove, Plano, Sugar Grove and surrounding communities. In Dallas, gigabit service is available in Cedar Hill, Colleyville, DeSoto, Joshua, Keene, Keller, Lakeside, Little Elm, Roanoke and surrounding communities.

In Atlanta service is available in Buford, Jonesboro, Lawrenceville, Norcross, Roswell, Sugar Hill, Suwanee and surrounding communities.

In Miami, the new service is sold in Cooper City, Miami Lakes, Miramar, Pembroke Pines, Plantation and surrounding communities.

"Value" is Issue for Viable Service Provider Martketplaces

App stores have become a major and increasingly important channel for consumer and some enterprise apps. A related approach is the inclusion of third-party apps as part of cloud-based services such as CenturyLink’s Marketplace Provider Program or the IBM Cloud Marketplace.

For example, Clusterpoint, a database vendor providing Database-as-a-Service (DBaaS) for enterprises and application developers, today announced its certification under the CenturyLink Cloud Marketplace Provider Program. This integration allows customers of the CenturyLink Cloud platform to launch the Clusterpoint 4.0 Computing Engine directly from the CenturyLink Cloud Knowledge Base.

Still, collaboration between app providers and service providers remains neither intuitive nor easy.

Value is among the difficulties. A successful marketplace arguably must offer clear value to enterprise or consumer buyers, to app providers, and to the service providers that benefit from greater customer retention and upsell opportunities.

But it often is difficult to identify the clear upside for each partner, and how much value buyers will perceive.

The upshot is that it is not often obvious how that alignment can be created. For starters, the way apps now are developed often means that no “permission” is required for any buyer to get access to any Internet app.

To the extent there is a viable and proven revenue model, it is the 70-30 revenue split between developers and app store owners that the popular app stores use.

A half decade ago it might have seemed possible to create mobile service provider app stores, but that has proven unworkable, as the popular app stores are directly controlled either by device suppliers or in some cases operating system suppliers such as Google.  

And that remains one of the key questions about collaboration between app providers, device providers and service providers. In principle, getting approved for any major app store solves the “distribution” problem, if not the “popularity” problem.

What always is tougher are ways for app and service providers, specifically, to collaborate in ways that are mutually beneficial. In a somewhat more limited way, a cloud provider’s partner programs are modeled on the app store model, and provide the same sorts of benefits.

For service providers, the issue is rather important. Over time, unless some obvious and integral role in the apps ecosystem is created, access providers will be “dumb pipes” in a commodity business.

"Free Speech" Versus the "Free Exercise of Religion?" Maybe "Free Exercise" Versus Criminal Trespass

Some commentators loudly proclaim the January 18, 2026, disruption of a church service at Cities Church in St. Paul, Minnesota is a “test of...