Monday, February 12, 2018

Jio is Succeeding at "Destroying" the India Mobile Market

By now, telecom executives are well aware of the “disruption” market strategy, whereby new entrants do not so much try to “take market share” as they attempt to destroy existing markets and recreate them.

Skype and other VoIP providers are one example. The “Free” services run by Iliad provide others. Most recently, we have seen Reliance Jio disrupting the economics of the mobile market in India, offering free voice in a market where voice drives service provider revenues.

“Free” is a difficult price point in most markets. But free voice forever is among the pricing and packaging foundations for Reliance Jio’s fierce attack on India’s mobile market structure. “Free voice” does not only lead to Jio taking market share, but reshapes the market, destroying the foundation of its competitor business models.

At the same time, Jio hopes to become the leader in the new market, driven by mobile data, with far higher usage and subscribership, and vastly lower prices.

Disruption really is the strategy, not “take market share.”

RJio says it will reduce tariffs by another 20 percent whenever a rival matches its offer. You might guess at what will happen. Competitors will drop tariffs, but not to match Jio’s offers exactly, as that will simply trigger another deep discount in retail pricing that depresses revenue for virtually all service providers.
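The compounding effect of that stated rule is easy to underestimate. A minimal sketch, using a hypothetical starting tariff and match count for illustration:

```python
# Illustration of Jio's stated pricing rule: cut tariffs by 20 percent each
# time a rival matches the current offer. The starting tariff and number of
# matching rounds below are hypothetical.

def tariff_after_matches(starting_tariff: float, times_matched: int) -> float:
    """Tariff remaining after rivals have matched the offer `times_matched` times."""
    return starting_tariff * (0.8 ** times_matched)

# A hypothetical 100-rupee plan after three rounds of rivals matching:
print(round(tariff_after_matches(100.0, 3), 2))  # 51.2
```

Three rounds of matching cut the tariff nearly in half, which is why competitors avoid matching exactly.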

At the end of 2016, Reliance Jio was not yet a market share leader. Some seven months later, Reliance Jio had taken about nine percent share, and kept growing. By the second quarter of 2018, Reliance Jio might hold 14 percent revenue market share.

Where Jio really dominates is in mobile data, where Jio represents about 94 percent of mobile data usage and 34 percent of mobile data accounts.  

Saturday, February 10, 2018

Convergence Means Vertical Integration; Horizontal Role Expansion

Formerly separate industries--media, telecom and computing--have been converging for the better part of three decades. The industries have not yet completely fused, to be sure. But no longer is it possible to clearly delineate the boundaries.

Google and Facebook are huge ad-supported technology companies (hardware and software; content and access, in Google’s case). Comcast is a content creator, packager and distributor; so is Netflix. And so AT&T wants to be, also.


Are Netflix, Google and Facebook in the same business as Comcast, AT&T and Verizon? The answer actually matters for government policymakers and regulators, as well as firms and customers.

AT&T Chairman and CEO Randall Stephenson argues that its acquisition of Time Warner, a classic vertical merger, removes no competitors from any of the relevant markets and only positions AT&T the same way as other key competitors such as Netflix and Amazon.

"Reality is, the biggest distributor of content out there is totally vertically integrated,” said Stephenson, referring to Netflix. “They create original content; they aggregate original content; and they distribute original content.”

That is true for other major platforms as well, including Amazon, Google, YouTube and Hulu, he argues.

The issue is that roughly half of internet ecosystem profits now are earned by app providers, content creators and aggregators, device suppliers and others, while perhaps half are earned by access providers.

In the recent past as much as 60 percent of ecosystem profits were earned by access providers. So it is easy to describe the migration of value: it is moving towards apps, content and devices, and away from access.

That, in a nutshell, is why tier-one service providers who hope to survive must do as Comcast has done: transition from access to multi-role firms with assets in content creation and apps.

Friday, February 9, 2018

The Next Big Switch

A new “big switch” is coming for the communications industry. Back in the 1980s, much was made of the idea that broadband traffic formerly delivered over the air (TV) was moving to the fixed network (cable TV networks), while narrowband traffic (voice) was moving to the mobile network. That was dubbed the Negroponte switch.

In more recent years, we have seen something different, namely the shift of all media types to wireless access, starting with voice, continuing with messaging and web surfing, and now video. Mobile bandwidth improvements are part of the explanation. But offload to Wi-Fi also drives the trend.

In the coming 5G era, that trend is going to accelerate, with the role of untethered and mobile networks continuing to grow.

The other notable change is that the distinctions between narrowband, wideband and broadband also have shifted. As we continually revise upward the minimum speeds dubbed “broadband,” more and more use cases and traffic are no longer broadband in character.

When a high-definition streaming session only requires 4 Mbps to 5 Mbps, while broadband is defined as 25 Mbps, HDTV has become a wideband--not broadband--application.  

Consider the performance characteristics of most networks. Wide area networks mostly will support applications requiring 10 Mbps or less. Those are, by definition, no longer “broadband” use cases.

That also will be the case for most untethered networks as well.

In other words, most apps driving revenue (direct subscriptions) or value (indirect revenue streams) will operate in either narrowband or wideband speed ranges.

We might be in a “gigabit” era, but most apps will generate value and revenue in narrowband or wideband use cases.

"Back" to the Narrowband Future

“Voice” has been steadily losing value as a revenue driver for communications service providers.

Still, voice will matter more in the future, but not “people talking to people.” Voice will be a key interface for interacting with computing resources. In many ways, that is an analogy for what will be happening in other areas of communications as well.

To be specific, even if industry revenue in recent decades has shifted from narrowband to broadband, in the next era revenue growth is likely to shift substantially back to narrowband.

That might seem crazy, but it is eminently realistic. Keep in mind that our old definitions of narrowband, wideband and broadband have evolved. Traditionally, narrowband was any access circuit operating at about 64 kbps. Wideband was any circuit operating between 64 kbps and 1.5 Mbps.

Broadband was anything faster than 1.5 Mbps.

These days, almost nothing matters but the definition of “broadband,” now defined by the U.S. Federal Communications Commission as a minimum of 25 Mbps in the downstream. Using that definition, and recognizing the floor will keep rising, most of the coming new use cases will be “less than broadband,” and many will be classically narrowband.
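The tier definitions discussed here can be made concrete. A minimal sketch, keeping the traditional 64 kbps narrowband ceiling while raising the broadband floor to the FCC's 25 Mbps downstream minimum (the threshold placement between tiers is illustrative):

```python
# Classify an access speed under the updated tier definitions: the classic
# 64 kbps narrowband ceiling, with "broadband" redefined as the FCC's
# 25 Mbps downstream minimum. Everything in between is "wideband."

FCC_BROADBAND_FLOOR_MBPS = 25.0
NARROWBAND_CEILING_MBPS = 0.064  # the classic 64 kbps circuit

def classify(speed_mbps: float) -> str:
    if speed_mbps <= NARROWBAND_CEILING_MBPS:
        return "narrowband"
    if speed_mbps < FCC_BROADBAND_FLOOR_MBPS:
        return "wideband"
    return "broadband"

print(classify(0.064))  # narrowband (voice-class traffic)
print(classify(5.0))    # wideband (HD video at 4-5 Mbps)
print(classify(25.0))   # broadband
```

Under this mapping, HD streaming at 4 Mbps to 5 Mbps falls into the wideband tier, as the text argues.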

Consider the mix of existing and “future” applications. With the salient exceptions of 360-degree video, virtual reality and augmented reality and some autonomous vehicle use cases, virtually all other applications useful for humans (using smartphones, watches, health monitors, PCs, tablets) or sensors and computers (internet of things) require bandwidth less than about 25 Mbps to 50 Mbps.


The somewhat obvious conclusion is that most of the use cases and potential revenue will be driven by use cases that do not require “broadband” access speeds. Once upon a time, video was the classic “broadband” use case. These days, even high-definition video requires only a few megabits per second.

In that sense, using the 25-Mbps floor for defining “broadband,” even video has become a wideband app, requiring about 4 Mbps to 5 Mbps.

The big switch is coming.

Wednesday, February 7, 2018

Intel Launches Xeon D-2100 Processor for Edge Computing: IoT Implications

Intel has introduced its new Intel Xeon D-2100 processor, a system-on-chip supporting edge, data center and network applications constrained by space and power.

The new processors will help communications service providers offer multi-access edge computing (MEC).

MEC is viewed by many as a way service providers can create a new role in cloud computing, focused at the edge, and supporting local content and real-time information about local-access network conditions.

MEC, in turn, is viewed as a strategic growth area precisely because it supports many internet of things apps. In other words, IoT might well hinge on edge computing.


And many of those apps are expected to arise in the smart transportation or smart cities areas, as well as healthcare and manufacturing, with 60-percent compound annual growth rates.

Notably, while the biggest current market for edge computing is North America, the highest growth rates for edge computing are expected to occur in Asia.

That would also make it a platform for third-party customers who want to reduce both latency across core networks and WAN operating costs.

Lead use cases range from connected cars to smart stadiums, retail and medical solutions, where either low latency or high amounts of fresh content need to be supplied, and where traversing core networks is either expensive or latency-inducing.

The Intel Xeon D-2100 processors include up to 18 “Skylake-server” generation Intel Xeon processor cores and integrated Intel QuickAssist Technology with up to 100 Gbps of built-in encryption and decryption acceleration.

The new system on a chip also can enhance performance while drawing less power in virtual customer premises equipment used to support virtual private networks.

To be sure, there are other key applications, including storage networks, content delivery networks and enterprise networks.

The new processors also are related to another set of key trends. The entire communications industry now arguably is a tail on an internet dog, and an integral building block of modern computing, which is cloud based at the moment, and increasingly is viewed as transitioning to edge computing (also called fog computing by many).

Decentralized computing, even more than cloud computing, shapes and requires communications facilities.


Too Bad Net Neutrality is a Bumper Sticker

There are a number of issues raised by the New Jersey executive order banning contracts, after July 1, 2018, with communications providers that violate New Jersey’s definition of network neutrality.

Tragically, net neutrality remains a concept reduced to a bumper sticker. It is far more complex than that.

As with the national argument over network neutrality, the provisions combine a whole lot of operating practices everybody agrees with, and which internet service providers agree must be followed, including every provision relating to “freedom to use lawful apps.”

Look at the clauses structurally. In such an order, the only enforceable clauses are those containing the phrases “shall” or “shall not.” Everything else is preamble or not strictly enforceable. And, as you would guess, all the “shall” and “shall not” clauses apply to ISPs, not app or content providers.
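The structural reading above amounts to a simple filter. An illustrative sketch, with made-up clause text standing in for the order's actual language:

```python
# Illustrative only: separate the enforceable clauses of an order from
# preamble by scanning for "shall" / "shall not". The clause text below is
# hypothetical, not quoted from the New Jersey order.
clauses = [
    "The State affirms the importance of a free and open internet.",  # preamble
    "An ISP shall not engage in paid prioritization.",
    "An ISP shall disclose its network management practices.",
]

enforceable = [c for c in clauses if " shall " in f" {c} "]
for clause in enforceable:
    print(clause)
```

Only the two ISP-directed clauses survive the filter; the aspirational preamble does not.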

Logically enough, you might argue, for an order relating to procurement of communications services from firms that operate as ISPs.

At a high level, though, if the genuine intent is to prevent anti-competitive pressures in the internet ecosystem, an argument can be made that such orders are incomplete at best, harmful at worst, and almost always based on questionable, overly-broad premises.

Consider the argument that the core of the net neutrality debate genuinely is about internet freedom. Ignore for the moment the issue of “whose” freedom supposedly or authentically  is enhanced.

There is almost no disagreement--in any quarter--about consumer right to use all lawful apps, without deliberate and unfair application of packet prioritization based on the ownership or category of app. In other words, voice over Internet cannot be blocked or degraded when other apps are not impeded.

Nor can an ISP’s own VoIP services be guaranteed better network performance when competitors are not allowed to use those same enhancements. OTT video services owned by rivals likewise cannot operate at “best effort only” when an ISP-owned OTT service uses packet prioritization.

Note, though, that the order only says “paid” prioritization shall not be allowed. It is not clear whether unpaid prioritization of internet traffic is allowable, so long as all internet packets receive such treatment.

There are obviously some practical reasons for such language. Prioritization might not always be a bad thing. Sometimes--and that is why many favor some application-specific forms of packet prioritization--prioritization leads to better app performance and user experience.

That is why content delivery networks are used by major app providers. But the specific language is intended to ensure that content and app providers do not have to pay for any such prioritization. Whether it is a good thing or not is quite another question.

Of the specific clauses, everybody, literally everybody, agrees that consumers must have access to all lawful content, applications and services. They have the right to use non-harmful devices, subject to reasonable network management that is disclosed to the consumer. That never has been what the argument is about, all rhetoric about freedom notwithstanding.

Instead, the argument always has been about business practices that are hard to describe or implement, in a technical sense. ISPs are prohibited from policies that:
  • “Throttle, impair or degrade lawful Internet traffic based on Internet content, application, or service”
  • Engage in paid prioritization.

Among the problems here is that the difference between allowable network management and impermissible “throttling” is nearly impossible to clearly delineate. Some might even say measurable neutrality rules for the internet are impossible.

Even if you think it is possible to determine that a network neutrality infraction has occurred (some think it cannot even be measured), others believe a generalized  “fast lane” regime is not possible, even if ISPs wanted to implement one.

“These predictions intentionally ignore technical, business, and legal realities, however, that make such fees unlikely, if not impossible,” Larry Downes, Accenture Research senior fellow, has argued. “For one thing, in the last two decades, during which no net neutrality rules were in place, ISPs have never found a business case for squeezing the open internet.”

Some of the confusion comes from the use of IP for both the internet and other managed services, private networks and business services and networks, which often engage in packet prioritization for clearly-understood reasons.

Some of the confusion comes from the notion that differentiating tiers of service--just like consumers are offered speed tiers or usage tiers--necessarily impairs the consumer’s use of the internet.

Perhaps the reality of packet discrimination in the core network, used by virtually all major app providers, and its obvious similar value in the access network, are all too real for app providers.

App providers use content delivery networks routinely to improve end user experience. It does take money to create and operate a CDN. That raises the cost of doing business.

So the fear that end users might also be provided CDN services, all the way to the user endpoints, could have cost implications. It is not clear how this hypothetical system would work.

Would all internet access move to CDN delivery, or just most of it? Cisco analysts, for example, believe core internet networks are, in fact, moving to use of CDNs. So there might still be some apps that operate “best effort” through the core of the network. But Cisco believes most traffic would use prioritization, and much of that would be “paid.”

Precisely “who” pays can vary. An app provider can pay a third party, or an app provider can build and operate its own CDN. Either way, the costs get paid for (by end users, in all cases).

And other network developments, such as network slicing, have value precisely because some network features can be specifically selected when creating a virtual network.

The point is that there is an increasingly-obvious move to packet prioritization mechanisms for both private and public networks, managed services and “internet” apps, in large part because latency increasingly matters for many important apps, and packet prioritization is fundamentally how we reduce latency problems.

How we prioritize can take many forms, but storing content closer to the edge of the network is one way of prioritizing. Introducing class of service mechanisms is another way of achieving the same goal.
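Class-of-service prioritization of the kind described here is, at bottom, a priority queue: latency-sensitive packets are drained before best-effort ones. A minimal sketch; the class names and their numbering are illustrative, not any operator's actual scheme:

```python
# A minimal class-of-service sketch: packets tagged with a traffic class are
# served highest-priority class first, so latency-sensitive traffic (voice)
# leaves the queue before best-effort traffic.
import heapq
from itertools import count

PRIORITY = {"voice": 0, "video": 1, "best_effort": 2}  # lower = served first

class CoSQueue:
    def __init__(self):
        self._heap = []
        self._seq = count()  # preserves arrival order within a class

    def enqueue(self, traffic_class: str, payload: str) -> None:
        heapq.heappush(self._heap, (PRIORITY[traffic_class], next(self._seq), payload))

    def dequeue(self) -> str:
        return heapq.heappop(self._heap)[2]

q = CoSQueue()
q.enqueue("best_effort", "web page")
q.enqueue("voice", "VoIP frame")
q.enqueue("video", "video segment")
print(q.dequeue())  # VoIP frame
```

The voice frame jumps the earlier-arriving web page, which is exactly the behavior the net neutrality debate is about: whether, and on what terms, access networks may do this.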

“In broadband, it’s the content providers who have leverage over the ISPs and not the other way around, as Netflix recently acknowledged in brushing aside concern about any “weakening” of net neutrality rules,” said Downes.

The unstated principle is that consumers somehow are harmed, or would be harmed, by any such practices. Ignore for the moment that anti-competitive practices are reviewable both by the Department of Justice and Federal Trade Commission.

In other words, any anti-competitive behavior of the type feared, such as an ISP making its own services work better than third party services, is reviewable by the FTC primarily, and also by the DoJ.

As a practical matter, any such behavior would provoke an immediate consumer and agency reaction.

The other key concern is “paid prioritization.” Let us be clear. The fear, on the part of some app and content providers, is that this would raise their costs of doing business, as they perceive there actually are technical ways to improve user experience by prioritizing packets.

App and content providers themselves already use such techniques for their own traffic. They know it costs money, and the fear is that they might have to pay such costs, if applied on access networks as well as the core networks where app providers themselves already spend to prioritize packets.

The rules also bar ISPs from preventing use of a non-harmful device, subject to reasonable network management that is disclosed to the consumer. It is not clear to me that this issue has ever arisen, but obviously some device suppliers want the clause added.

Every bit of communications regulation, as with every action by public officials, has private interest implications. Some actors, firms or industries gain advantage; others lose advantage, every time a rule is passed.

Nor is it unlawful for any industry to try to shape public opinion in ways that advance its own interests. It happens every day.

But net neutrality is horrifically complicated. Its reduction to a bumper sticker slogan is unfortunate. You cannot really solve such a complex problem (it is an issue of business advantage and practices) by ignoring its complexity. “Freedom” really is not the issue.

Business practices and perceived business advantage are the heart of the matter.

Tuesday, February 6, 2018

5G Capex Might be Half that of 4G

It might be apparent to all--from regulators to providers to infrastructure and applications providers--that global telecom business models are under duress. That is to say, any honest analysis would conclude that the core business revenue model is under extreme pressure.

So we should not be surprised to find that the telecom supplier base also is under duress. One only has to note the evisceration of the North American core infrastructure industry (Nortel bankrupt; Lucent acquired by Alcatel, in turn acquired by Nokia), the emergence of Huawei as the leading global supplier, and financial struggles at Ericsson and Nokia as examples.

At the same time, open source solutions, white box and alternative platforms are emerging, in part because service providers know they must dramatically retool their cost structures, from platform to operations, even as they seek new sources of revenue and roles in the ecosystem.

So 5G “will not be a capex windfall for the vendors,” says Caroline Gabriel, Rethink Technology Research research director. “Operators will prioritize coexistence with 4G and architecture to prolong the life of existing investments.

“There will be heavier reliance on outsourcing and on open platforms to reduce cost and transfer cost further than ever from capex to opex,” Gabriel argues.

In fact, Gabriel believes mobile operators will try to spend as little as half the capex on 5G roll-out that they did on 4G.

That is one answer to the question some raise: whether mobile operators can even afford to build 5G networks.

And many of the network enhancements--virtualized RAN and hyper-densification and massive MIMO antenna arrays--also will be applied to 4G, extending the useful life of advanced 4G.

That might be important in allowing 5G to focus on support for new categories of use cases, apps and revenue models.

Matters are largely the same in the fixed network realm. As a whole, U.S. fixed line service providers have had a tough time generating any net revenue growth since 2008, according to MoffettNathanson figures.


Much the same trend can be seen in Europe’s fixed network markets as well, as there has been zero to negative subscription growth since 1999. And the trend is happening even in the more-developed South Asian countries, as fixed access declines (voice losses balanced by fixed internet access gains) and mobile grows.


That means, if anything, that cost pressures will be even heavier in the fixed network segment of the business, as any incremental capex or opex has to support an arguably-dwindling revenue potential.

More Computation, Not Data Center Energy Consumption, is the Real Issue

Many observers raise key concerns about power consumption of data centers in the era of artificial intelligence.  According to a study by t...