Monday, February 12, 2018

Cable Dominates U.S. Internet Access

It is hard to overestimate the impact cable TV companies have had on the U.S. internet access business. At every speed tier, cable operators have overwhelming market share, as you can immediately tell from the following graph, where “red” represents cable market share at various speeds. At the higher end, cable has well over 80 percent of sold connections operating at a minimum of 100 Mbps.

But even at the low end, at speeds less than 3 Mbps, cable has 60 percent share.

For better or worse, those adoption statistics reflect deliberate capital investment decisions by U.S. telcos other than Verizon. Basically, many independent telcos have been financially unable to invest faster, while AT&T arguably has made mobile capital investment its priority.

Verizon “benefits” from having made most of its fiber-to-premises investments a decade ago.

What comes next is not so clear. Some argue cable operators are in position to leverage their market control to raise prices.

Others might argue that telco fixed wireless is part of the strategy for catching up with cable. And AT&T recently has stepped up investment in gigabit services where it believes there is a business model.

Others might argue that at least AT&T and Verizon are looking past fixed segment competition and hoping to lead in the future mobile and untethered markets. That is part of the interest in both mobile video and fixed wireless built on the mobile network.

Yet others might argue that, a decade from now, apps and services beyond internet access will drive revenue, not access per se, reducing the value of access investments in any case.

Still, the market impact cable TV has had is foundational. Its reliance on its own facilities, using different platforms than telcos use, has allowed it to upgrade faster, at lower cost, than telcos have been able to do using fiber to the home.

But most of the share losses have come from telcos other than AT&T and Verizon, which in recent years have been holding their own. In other words, you might argue both those firms have invested enough to get by, while prioritizing investments in mobility, which clearly has driven revenue growth for both firms.

The issue now is how much that “just enough” capex has to be adjusted over time. Many would argue it does not make sense for either AT&T or Verizon to overinvest in fixed network internet access, as the financial return simply will not be there.


At this point, the fixed network internet access market is nearly a zero-sum game. That means most of the market share gains will have to come from other competitors. Such markets are tough.

The total number of U.S. internet connections increased by about six percent between December 2015 and December 2016, reaching 376 million, the Federal Communications Commission reports. So there was growth, but at low and slowing rates.


With the caveat that speeds offered by some providers seem to be increasing about 50 percent per year, the 106 million fixed connections at the end of 2016 included 37 percent (39 million connections) operating between 25 Mbps and 100 Mbps, while 23 percent (25 million connections) offered speeds of at least 100 Mbps.

That suggests the policy of gradual investment by AT&T, for example, is largely defensive. That makes sense, one might argue, in the context of a business driven by mobility or video services, not internet access.

In the fourth quarter of 2017, for example, AT&T earned 85 percent of revenue from apps, not internet access.

Keep in mind that the FCC  figures also represent consumer buying choices, not the services available to buy. Consumers make rational choices about internet access, buying services that provide the best perceived value proposition, not necessarily always the “fastest available” tiers of service.

They buy what is “good enough,” more often than they buy what they deem “best.”

So 62 percent of consumers chose to buy services running faster than 25 Mbps.

Some 38 percent of consumers bought services operating at slower speeds, though it is impossible to say for sure whether those customers were unable to buy faster services, or chose to buy slower services even when faster alternatives were available.

Some four percent of consumers (four million connections) bought services operating slower than 3 Mbps downstream.

Some 14 percent (15 million connections) bought services operating between 3 Mbps and 10 Mbps. Some 22 percent (23 million connections) purchased services running between 10 Mbps and 25 Mbps.
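As a quick cross-check, here is a minimal sketch that recomputes the tier shares from the connection counts reported above. Because both the counts and the percentages are rounded, the recomputed shares land close to, though not exactly on, the quoted figures.

```python
# Recompute fixed-connection speed-tier shares (December 2016) from the
# connection counts cited in the text; counts are as reported, rounded.

TOTAL_FIXED_CONNECTIONS = 106_000_000  # fixed connections, end of 2016

tiers = {
    "under 3 Mbps": 4_000_000,
    "3 Mbps to 10 Mbps": 15_000_000,
    "10 Mbps to 25 Mbps": 23_000_000,
    "25 Mbps to 100 Mbps": 39_000_000,
    "100 Mbps and faster": 25_000_000,
}

for tier, connections in tiers.items():
    share = connections / TOTAL_FIXED_CONNECTIONS
    print(f"{tier:>20}: {connections / 1e6:4.0f} million ({share:.1%})")

faster_than_25 = tiers["25 Mbps to 100 Mbps"] + tiers["100 Mbps and faster"]
print(f"Share buying 25 Mbps or faster: {faster_than_25 / TOTAL_FIXED_CONNECTIONS:.1%}")
```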

The median (half of connections faster, half slower) downstream speed of all reported fixed connections was 40 Mbps and the median upstream speed was 5 Mbps.

For residential fixed connections, the median downstream speed was 50 Mbps and the median upstream speed was 5 Mbps.

Most of the growth in total internet connections is attributable to increased mobile internet access subscribership. The number of mobile internet connections increased seven percent year over year to 270 million in December 2016, while the number of fixed connections grew to 106 million, up about three percent from December 2015.
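A similar sketch cross-checks the totals, adding the mobile and fixed figures and backing out the blended growth rate implied by the rounded segment growth rates quoted above.

```python
# Cross-check of the connection totals: mobile plus fixed connections,
# and the blended year-over-year growth implied by the segment figures.

mobile_2016, mobile_growth = 270_000_000, 0.07
fixed_2016, fixed_growth = 106_000_000, 0.03

total_2016 = mobile_2016 + fixed_2016
total_2015 = mobile_2016 / (1 + mobile_growth) + fixed_2016 / (1 + fixed_growth)

print(f"Total connections, December 2016: {total_2016 / 1e6:.0f} million")
print(f"Implied blended year-over-year growth: {total_2016 / total_2015 - 1:.1%}")
```

The result, roughly 376 million connections growing at close to six percent, matches the FCC totals cited earlier.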

Jio is Succeeding at "Destroying" the India Mobile Market

By now, telecom executives are well aware of the “disruption” market strategy, whereby new entrants do not so much try to “take market share” as attempt to literally destroy existing markets and recreate them.

Skype and other VoIP providers are one example. The “Free” services run by Iliad provide other examples. Most recently, we have seen Reliance Jio disrupting the economics of the mobile market in India, offering free voice in a market where voice drives service provider revenues.

“Free” is a difficult price point in most markets. But free voice forever is among the pricing and packaging foundations of Reliance Jio’s fierce attack on India’s mobile market structure. “Free voice” not only leads to Jio taking market share, it reshapes the market, destroying the foundation of its competitors’ business models.

At the same time, Jio hopes to become the leader in the new market, driven by mobile data, with far higher usage and subscribership, and vastly lower prices.

Disruption really is the strategy, not “take market share.”

RJio says it will reduce tariffs by another 20 percent whenever a rival matches its offer. You might guess at what will happen. Competitors will drop tariffs, but not to match Jio’s offers exactly, as that will simply trigger another deep discount in retail pricing that depresses revenue for virtually all service providers.
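A minimal sketch of that dynamic, assuming an arbitrary starting tariff of 100 and that each competitive match triggers another 20 percent cut, shows how quickly the price floor collapses.

```python
# Illustration of repeated 20 percent cuts: each time a rival matches,
# the tariff drops to 80 percent of its previous level.
# The starting tariff of 100 (arbitrary units) is purely illustrative.

tariff = 100.0
for matches in range(1, 6):
    tariff *= 0.80  # another 20 percent reduction
    print(f"After {matches} competitive match(es): tariff = {tariff:.1f}")

# After five rounds the tariff is roughly a third of where it started,
# which is why rivals avoid matching Jio's offers exactly.
```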

At the end of 2016, Reliance Jio did not appear as a market share leader. Some seven months later, Reliance Jio had taken about nine percent share, and kept growing. By the second quarter of 2018, Reliance Jio might have 14 percent revenue market share.  

Where Jio really dominates is in mobile data, where Jio represents about 94 percent of mobile data usage and 34 percent of mobile data accounts.  

Saturday, February 10, 2018

Convergence Means Vertical Integration; Horizontal Role Expansion

Formerly-separate industries--media, telecom and computing--have been converging for the better part of three decades. The industries have not yet completely fused, to be sure. But no longer is it possible to clearly delineate the boundaries.

Google and Facebook are huge ad-supported technology companies (hardware and software; content and access, in Google’s case). Comcast is a content creator, packager and distributor; so is Netflix. And so AT&T wants to be, also.


Are Netflix, Google and Facebook in the same business as Comcast, AT&T and Verizon? The answer actually matters for government policymakers and regulators, as well as firms and customers.

AT&T Chairman and CEO Randall Stephenson argues that its acquisition of Time Warner, a classic vertical merger, removes no competitors from any of the relevant markets and only positions AT&T the same way as other key competitors such as Netflix and Amazon.

"Reality is, the biggest distributor of content out there is totally vertically integrated,” said Stephenson, referring to Netflix. “They create original content; they aggregate original content; and they distribute original content.”

That is true for other major platforms as well, including Amazon, Google, YouTube and Hulu, he argues.

The issue is that roughly half of internet ecosystem profits now are earned by app providers, content creators and aggregators, device suppliers and others, while perhaps half are earned by access providers.

In the recent past as much as 60 percent of ecosystem profits were earned by access providers. So it is easy to describe the migration of value: it is moving towards apps, content and devices, and away from access.

That, in a nutshell, is why tier-one service providers who hope to survive must do as Comcast has done: transition from access to multi-role firms with assets in content creation and apps.

Friday, February 9, 2018

The Next Big Switch

A new “big switch” is coming for the communications industry. Back in the 1980s, much was made of the idea that former over-the-air (wireless) broadband traffic (TV) was moving to the fixed network (cable TV networks) while narrowband traffic (voice) was moving to the mobile network. That was dubbed the  Negroponte switch.

In more recent years, we have seen something different, namely the shift of all media types to wireless access, starting with voice, continuing with messaging and web surfing, and now video. Mobile bandwidth improvements are part of the explanation. But offload to Wi-Fi also drives the trend.

In the coming 5G era, that trend is going to accelerate, with the role of untethered and mobile networks continuing to grow.

The other notable change is that the distinctions between narrowband, wideband and broadband also have shifted. As we continually revise upward the minimum speeds dubbed “broadband,” more and more use cases and traffic are no longer broadband in character.

When a high-definition streaming session only requires 4 Mbps to 5 Mbps, while broadband is defined as 25 Mbps, HDTV has become a wideband--not broadband--application.  

Consider the performance characteristics of most networks. Wide area networks mostly will support applications requiring 10 Mbps or less. Those are, by definition, no longer “broadband” use cases.

That will be the case for most untethered networks as well.

In other words, most apps driving revenue (direct subscriptions) or value (indirect revenue streams) will operate in either narrowband or wideband speed ranges.

We might be in a “gigabit” era, but most apps will generate value and revenue in narrowband or wideband use cases.

"Back" to the Narrowband Future

“Voice” has been steadily losing value  as a revenue driver for communications service providers.

Still, voice will matter more in the future, but not “people talking to people.” Voice will be a key interface for interacting with computing resources. In many ways, that is an analogy for what will be happening in other areas of communications as well.

To be specific, even if industry revenue in recent decades has shifted from narrowband to broadband, in the next era revenue growth is likely to shift substantially back to narrowband.

That might seem crazy. It is eminently realistic. Keep in mind that our old definitions of narrowband, wideband and broadband have evolved. Traditionally, narrowband was any access circuit operating at up to about 64 kbps. Wideband was any circuit operating between 64 kbps and 1.5 Mbps.

Broadband was anything faster than 1.5 Mbps.

These days, almost nothing matters but the definition of “broadband,” now defined by the U.S. Federal Communications Commission as a minimum of 25 Mbps in the downstream. Using that definition, and recognizing that the floor will keep rising, most of the coming new use cases will be “less than broadband,” and many will be classically narrowband.

Consider the mix of existing and “future” applications. With the salient exceptions of 360-degree video, virtual reality and augmented reality, and some autonomous vehicle use cases, virtually all other applications useful for humans (using smartphones, watches, health monitors, PCs, tablets) or sensors and computers (internet of things) require bandwidth of less than about 25 Mbps to 50 Mbps.
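To make the classification concrete, here is a minimal sketch using the current 25 Mbps floor and the classic 64 kbps narrowband boundary; the per-application bitrates are illustrative assumptions, not measurements.

```python
# Classify applications against the current FCC 25 Mbps "broadband" floor.
# Bitrates are rough, illustrative figures, not measured values.

BROADBAND_FLOOR_MBPS = 25.0      # current FCC downstream definition
NARROWBAND_CEILING_MBPS = 0.064  # classic 64 kbps narrowband boundary

def classify(required_mbps: float) -> str:
    if required_mbps <= NARROWBAND_CEILING_MBPS:
        return "narrowband"
    if required_mbps < BROADBAND_FLOOR_MBPS:
        return "wideband"
    return "broadband"

example_apps = {
    "IoT sensor telemetry": 0.01,
    "voice call": 0.064,
    "web browsing": 2.0,
    "HD video stream": 5.0,
    "4K / VR video stream": 30.0,
}

for app, mbps in example_apps.items():
    print(f"{app:>22}: {mbps:6.3f} Mbps -> {classify(mbps)}")
```

Under those assumptions, only the immersive video case clears the broadband floor; everything else is wideband or narrowband.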


The somewhat obvious conclusion is that most of the potential revenue will be driven by use cases that do not require “broadband” access speeds. Once upon a time, video was the classic “broadband” use case. These days, even high-definition video requires only a few megabits per second of support.

In that sense, using the 25-Mbps floor for defining “broadband,” even video has become a wideband app, requiring about 4 Mbps to 5 Mbps.

The big switch is coming.

Wednesday, February 7, 2018

Intel Launches Xeon D-2100 Processor for Edge Computing: IoT Implications

Intel has introduced its new Intel Xeon® D-2100 processor, a system-on-chip supporting edge applications and data center or network applications constrained by space and power.

The new processors will help communications service providers offer multi-access edge computing (MEC).

MEC is viewed by many as a way service providers can create a new role in cloud computing, focused at the edge, and supporting local content and real-time information about local-access network conditions.

MEC, in turn, is viewed as a strategic growth area precisely because it supports many internet of things apps. In other words, IoT might well hinge on edge computing.


And many of those apps are expected to arise in the smart transportation or smart cities areas, as well as healthcare and manufacturing, with 60-percent compound annual growth rates.

Notably, while the biggest current market for edge computing is North America, the highest growth rates for edge computing are expected to occur in Asia.

That would also be a platform for third party customers who want to reduce both latency across core networks and WAN operating costs.

Lead use cases range from connected cars to smart stadiums, retail and medical solutions, where either low latency or high amounts of fresh content need to be supplied, and where traversing core networks is either expensive or latency-inducing.
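A rough, back-of-the-envelope sketch shows why proximity matters: round-trip propagation delay alone, assuming signals travel at about two-thirds of the speed of light in fiber, over purely illustrative distances, and ignoring queuing and processing delays.

```python
# Round-trip propagation delay only, ignoring queuing, serialization and
# processing delays. Distances and the fiber factor are assumptions.

SPEED_OF_LIGHT_KM_S = 300_000
FIBER_FACTOR = 0.67  # light travels at roughly two-thirds of c in fiber

def round_trip_ms(distance_km: float) -> float:
    one_way_s = distance_km / (SPEED_OF_LIGHT_KM_S * FIBER_FACTOR)
    return one_way_s * 2 * 1000

for label, km in [("metro edge site", 50),
                  ("regional data center", 800),
                  ("distant cloud region", 4000)]:
    print(f"{label:>22}: ~{round_trip_ms(km):5.1f} ms round trip")
```

Even before any processing happens, the distant case costs tens of milliseconds, which is exactly the budget that latency-sensitive use cases such as connected cars cannot afford.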

The Intel Xeon D-2100 processors include up to 18 “Skylake-server” generation Intel Xeon processor cores and integrated Intel® QuickAssist Technology, with up to 100 Gbps of built-in encryption and decryption acceleration.

The new system on a chip also can play a role in enhancing performance while drawing less power in virtual customer premises equipment used to support virtual private networks.

To be sure, there are other key applications, including storage networks, content delivery networks and enterprise networks.

The new processors also are related to another set of key trends. The entire communications industry now arguably is a tail on an internet dog, and an integral building block of modern computing, which is cloud based at the moment and increasingly is viewed as transitioning to edge computing (also called fog computing by many).

Decentralized computing, even more than cloud computing, shapes and requires communications facilities.


Too Bad Net Neutrality is a Bumper Sticker

There are a number of issues raised by the New Jersey executive order banning contracts, after July 1, 2018,  with communications providers that violate New Jersey’s definition of network neutrality.

Tragically, net neutrality remains a concept that is reduced to a bumper sticker. It is horrifically more complex than that.

As with the national argument over network neutrality, the provisions combine a whole lot of operating practices everybody agrees with, and which internet service providers agree must be followed, including every provision relating to “freedom to use lawful apps.”

Look at the clauses structurally. In such an order, the only enforceable clauses are those including the phrases “shall” or “shall not.” Everything else is preamble or not strictly enforceable. And, as you would guess, all the clauses relating to “shall” or “shall not” apply to ISPs, not app or content providers.

Logically enough, you might argue, for an order relating to procurement of communications services from firms that operate as ISPs.

At a high level, though, if the genuine intent is to prevent anti-competitive pressures in the internet ecosystem, an argument can be made that such orders are incomplete at best, harmful at worst, and almost always based on questionable, overly-broad premises.

Consider the argument that the core of the net neutrality debate genuinely is about internet freedom. Ignore for the moment the issue of “whose” freedom supposedly or authentically  is enhanced.

There is almost no disagreement--in any quarter--about consumer right to use all lawful apps, without deliberate and unfair application of packet prioritization based on the ownership or category of app. In other words, voice over Internet cannot be blocked or degraded when other apps are not impeded.

Nor can an ISP’s own VoIP services be guaranteed better network performance when competitors are not allowed to use those same enhancements. OTT video services owned by rivals likewise cannot be relegated to “best effort only” when an ISP-owned OTT service uses packet prioritization.

Note, though, that the order only says “paid” prioritization shall not be allowed. It is not clear whether unpaid prioritization of internet traffic is allowable, so long as all internet packets receive such treatment.

There are obviously some practical reasons for such language. Prioritization might not always be a bad thing. Sometimes--and that is why many favor some application-specific forms of packet prioritization--prioritization leads to better app performance and user experience.

That is why content delivery networks are used by major app providers. But the specific language is intended to ensure that content and app providers do not have to pay for any such prioritization. Whether it is a good thing or not is quite another question.

Of the specific clauses, everybody, literally everybody, agrees that consumers must have access to all lawful content, applications, services. They have the right to use non-harmful devices, subject to reasonable network management that is disclosed to the consumer. That never has been what the argument has been about, all rhetoric about freedom notwithstanding.

Instead, the argument always has been about business practices that are hard to describe or implement, in a technical sense. ISPs are prohibited from policies that:
  • “Throttle, impair or degrade lawful Internet traffic based on Internet content, application, or service”
  • Engage in paid prioritization.

Among the problems here is that the difference between allowable network management and impermissible “throttling” is nearly impossible to clearly delineate. Some might even say measurable neutrality rules for the internet are impossible.

Even if you think it is possible to determine that a network neutrality infraction has occurred (some think it cannot even be measured), others believe a generalized  “fast lane” regime is not possible, even if ISPs wanted to implement one.

“These predictions intentionally ignore technical, business, and legal realities, however, that make such fees unlikely, if not impossible,” Larry Downes, Accenture Research senior fellow, has argued. “For one thing, in the last two decades, during which no net neutrality rules were in place, ISPs have never found a business case for squeezing the open internet.”

Some of the confusion comes from the use of IP for both the internet and other managed services, private networks and business services and networks, which often engage in packet prioritization for clearly-understood reasons.

Some of the confusion comes from the notion that differentiating tiers of service--just like consumers are offered speed tiers or usage tiers--necessarily impairs the consumer’s use of the internet.

Perhaps packet discrimination in the core network, used by virtually all major app providers, and its obvious similar value in the access network, are all too real for app providers.

App providers use content delivery networks routinely to improve end user experience. It does take money to create and operate a CDN. That raises the cost of doing business.

So the fear is that providing end users with CDN-style services, all the way to the user endpoints, could have cost implications. It is not clear how this hypothetical system would work.

Would all internet access move to CDN delivery, or just most of it? Cisco analysts, for example, believe the core internet networks are, in fact, moving to use of CDNs. So there might still be some apps that operate “best effort” through the core of the network. But Cisco believes most traffic would use prioritization, and much of that would be “paid.”

Precisely “who” pays can vary. An app provider can pay a third party, or an app provider can build and operate its own CDN. Either way, the costs get paid for (by end users, in all cases).

And other network developments, such as network slicing, have value precisely because some network features can be specifically selected when creating a virtual network.

The point is that there is an increasingly-obvious move to packet prioritization mechanisms for both private and public networks, managed services and “internet” apps, in large part because latency increasingly matters for many important apps, and packet prioritization is fundamentally how we reduce latency problems.

How we prioritize can take many forms, but storing content closer to the edge of the network is one way of prioritizing. Introducing class of service mechanisms is another way of achieving the same goal.
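As a toy illustration of the class-of-service idea (not any operator's actual implementation), the sketch below uses a strict-priority scheduler that always serves marked packets first; the arrival pattern and per-packet service time are arbitrary assumptions.

```python
# Toy strict-priority scheduler: "priority" packets (class 0) are always
# served before "best effort" packets (class 1).
# Arrival times and the per-packet service time are arbitrary assumptions.

import heapq

SERVICE_TIME = 1.0  # time units needed to transmit one packet

# (arrival_time, class) -- lower class number means higher priority
arrivals = [(0.0, 1), (0.0, 0), (0.5, 1), (0.5, 0), (1.0, 1)]

pending = sorted(arrivals)
queue, clock, results = [], 0.0, []

while pending or queue:
    # admit every packet that has arrived by the current clock
    while pending and pending[0][0] <= clock:
        arrival, cls = pending.pop(0)
        heapq.heappush(queue, (cls, arrival))
    if not queue:
        clock = pending[0][0]
        continue
    cls, arrival = heapq.heappop(queue)
    clock += SERVICE_TIME
    results.append((cls, clock - arrival))

for cls, delay in results:
    label = "priority" if cls == 0 else "best effort"
    print(f"{label:>12} packet waited {delay:.1f} time units")
```

In this toy run, the prioritized packets wait noticeably less than the best-effort packets, which is the whole point, and the whole controversy, of prioritization in the access network.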

“In broadband, it’s the content providers who have leverage over the ISPs and not the other way around, as Netflix recently acknowledged in brushing aside concern about any ‘weakening’ of net neutrality rules,” said Downes.

The unstated principle is that consumers somehow are harmed, or would be harmed, by any such practices. Ignore for the moment that anti-competitive practices are reviewable both by the Department of Justice and Federal Trade Commission.

In other words, any anti-competitive behavior of the type feared, such as an ISP making its own services work better than third party services, is reviewable by the FTC primarily, and also by the DoJ.

As a practical matter, any such behavior would provoke an immediate consumer and agency reaction.

The other key concern is “paid prioritization.” Let us be clear. The fear, on the part of some app and content providers, is that this would raise their costs of doing business, as they perceive there actually are technical ways to improve user experience by prioritizing packets.

App and content providers themselves already use such techniques for their own traffic. They know it costs money, and the fear is that they might have to pay such costs, if applied on access networks as well as the core networks where app providers themselves already spend to prioritize packets.

The rules also bar ISPs from preventing use of a non-harmful device, subject to reasonable network management that is disclosed to the consumer. It is not clear to me that there ever has been any instance where this issue has even arisen, but obviously some device suppliers want the clause added.

Every bit of communications regulation, as with every action by public officials, has private interest implications. Some actors, firms or industries gain advantage; others lose advantage, every time a rule is passed.

Nor is it unlawful for any industry to try and shape public opinion in ways that advance its own interests. It happens every day.

But net neutrality is horrifically complicated. Its reduction to a bumper sticker slogan is unfortunate. You cannot really solve such a complex problem (it is an issue of business advantage and practices) by ignoring its complexity. “Freedom” really is not the issue.

Business practices and perceived business advantage are the heart of the matter.

Directv-Dish Merger Fails

Directv’s termination of its deal to merge with EchoStar, apparently because EchoStar bondholders did not approve, means EchoStar continue...