Monday, January 8, 2018

How Fast Does Broadband Really Have to Be?

There now is debate over whether 10 Mbps or 25 Mbps should be the minimum definition of “broadband.” Leaving aside the commercial dimensions for a moment, the 25-Mbps standard is a bit problematic as a “one-size-fits-all” definition.

In a larger sense, the floor does not indicate the present ceiling. In most urban areas, people can buy 100-Mbps and faster service if they want it, on fixed networks. Also, speeds only matter in relation to what people want to do with their access.

And speed does not, by itself, take care of latency, which for some users already is the prime performance issue.

Beyond some ever-changing point, any single user can only effectively “use” so much bandwidth. Whether that threshold is 8 Mbps or some higher number, there is a point beyond which access to faster speeds does not improve user experience.

For mobile use, there arguably are few, if any, routine consumer apps that require more than about 15 Mbps.

For fixed accounts, there is debate about whether gaming or high-definition video has the most stringent requirements. Some suggest 4 Mbps is enough for gaming. Others think 10 Mbps to 25 Mbps is required.
Minimum download speed by activity (Mbps):

General Usage
  • General Browsing and Email: 1
  • Streaming Online Radio: less than 0.5
  • VoIP Calls: less than 0.5
  • Student: 5 - 25
  • Telecommuting: 5 - 25
  • File Downloading: 10
  • Social Media: 1

Watching Video
  • Streaming Standard Definition Video: 3 - 4
  • Streaming High Definition (HD) Video: 5 - 8
  • Streaming Ultra HD 4K Video: 25

Video Conferencing
  • Standard Personal Video Call (e.g., Skype): 1
  • HD Personal Video Call (e.g., Skype): 1.5
  • HD Video Teleconferencing: 6

Gaming
  • Game Console Connecting to the Internet: 3
  • Online Multiplayer: 4


For fixed accounts, the major variable is likely to be the number of concurrent users, not the actual apps being used at any time. In other words, it typically is multi-user households that require speeds in excess of 25 Mbps.

Basic web surfing and email might require less than 5 Mbps, according to Netgear. Web surfing or single-user streaming might require 10 Mbps.

Online gaming might require speeds of 10 Mbps to 25 Mbps. Beyond that, consumer connections mostly hinge on the number of concurrent users, assuming each concurrent user is a gamer or watches lots of high-definition video.
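
Under those assumptions, household demand is roughly additive. Here is a minimal sketch of that arithmetic (the per-activity figures come from the table above; the activity keys and helper function are my own, purely for illustration):

```python
# Back-of-the-envelope estimate: sum the per-activity minimums from the
# table above for everything running at the same time. The figures mirror
# the table; the helper itself is illustrative, not a standard formula.

ACTIVITY_MBPS = {
    "browsing_email": 1.0,
    "sd_video": 4.0,          # top of the 3 - 4 Mbps range
    "hd_video": 8.0,          # top of the 5 - 8 Mbps range
    "uhd_4k_video": 25.0,
    "hd_video_call": 1.5,
    "online_multiplayer": 4.0,
}

def household_demand(concurrent_activities):
    """Sum worst-case minimums for a list of concurrent activities."""
    return sum(ACTIVITY_MBPS[a] for a in concurrent_activities)

# Four users at once: two HD streams, one gamer, one web browser.
print(household_demand(["hd_video", "hd_video", "online_multiplayer",
                        "browsing_email"]))   # 21.0 Mbps, still under 25
# Swap one HD stream for a 4K stream and the total jumps to 38.0 Mbps.
```

That is the sense in which a busy multi-user household, but rarely a single user, pushes past the 25-Mbps mark.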

By some estimates, users heavily reliant on cloud storage might need 50 Mbps per user.

All those estimates probably assume a pattern of one bandwidth-intensive activity at a time by any single user. As always, there is a difference between “peak” potential usage and “routine” usage, which will be lower.

Also, it is not so clear how fast the typical fixed connection now operates.

On one hand, average access speeds in 2016 were, by some measures, already in excess of 50 Mbps. If so, it really does not matter whether the floor is set at 10 Mbps or 25 Mbps. Other estimates of average speed in 2016 suggested the average was in excess of 31 Mbps.

On the other hand, in 2017 the “average” U.S. internet access connection ran at 18.75 Mbps, by some estimates. If that is true, then the definitions do matter.

Using the 25-Mbps standard, many--perhaps most--common access services, including Wi-Fi, many fixed access connections, satellite access and mobile connections (at some locations and times), are not “broadband,” even if people actually use them that way.

The definitions matter most when it comes to mobile internet access, which arguably is the way most people actually use internet access on any given day.

The share of U.S. homes with fixed network internet access subscriptions has declined in recent years, falling from 70 percent in 2013 to 67 percent in 2015, for example.

Some 13 percent of U.S. residents rely only on smartphones for home internet access, one study suggests. Logically, that is more common among single-person households, or households of younger, unrelated persons, than among families. But it is a significant trend.

Some suggest that service providers are actively pushing mobile services as an alternative to fixed access, for example.

In fact, some studies suggest that U.S. fixed internet access peaked in 2009 and is slowly declining, though other studies suggest growth continues. Still other estimates show U.S. fixed network subscriptions declining in 2016, for example.

The point is that it is getting harder to clearly delineate internet access by the type of connection. And, until 5G is ubiquitous, mobile, satellite, non-5G fixed wireless and public Wi-Fi speeds will lag.

That, it can be argued, means a single definition does not work for every access method and network. Though 5G likely will change matters, access speed on most networks other than cable TV or fiber-to-home platforms will vary dramatically. And those other networks arguably carry most of the traffic, and represent much of the value of internet access.

That is not an argument for maintaining “slow” access on any network, but simply to note that people use all sorts of networks daily, and most of those networks, while providing a satisfactory experience, do not run as fast as fixed networks of the cable TV or fiber-to-home variety.

In other words, it arguably makes little sense to define out of existence many access connections that work well enough to support nearly all the apps and use cases buyers actually want to use.

In early 2017, the typical U.S. mobile user, for example, had routine access at speeds ranging from about 15 Mbps to 21 Mbps.



Public hotspot speeds are less than 10 Mbps, according to a study by Ooma. The Hughesnet and Exede satellite services now operate at 25 Mbps, in the fastest retail tier.

That, of course, is the reason some prefer using the 25-Mbps standard: it creates a “problem” to be solved.

But it is a bit problematic when “most connections” able to support nearly all consumer end user requirements are deemed “not broadband.”

Is Architecture Destiny?

“Architecture is destiny” is one way of looking at the ways networks are able to support--or not support--particular use cases. Coverage, latency and capacity always are key issues. So one reason low earth orbit satellite constellations are important is that they change network architecture, easing the latency and capacity constraints that traditionally have limited use of satellite networks as point-to-point networks.

One-to-many use cases, on the other hand, are the classic advantage of broadcast networks (TV, radio, satellite broadcasting), in terms of efficient use of capacity. It is hard to beat the cost-per-delivered-bit advantage of any multicast (broadcast) network optimized for one-to-many use cases.

Architecture also shapes other potential use cases, beyond the matter of bandwidth efficiency.

Geosynchronous satellite networks have round-trip latency of about 500 milliseconds. That means geosynchronous satellites are not appropriate for real-time apps that require low latency (less than 100 milliseconds).
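
That figure follows from geometry alone. Here is a minimal sanity check, assuming a geostationary altitude of about 35,786 km and ignoring processing, queuing and the longer slant path to ground stations away from the sub-satellite point:

```python
# Sanity check on the ~500 ms figure. A geostationary satellite sits about
# 35,786 km above the equator; a request-response round trip crosses that
# gap four times (up and down for the request, up and down for the reply).

ALTITUDE_KM = 35_786        # geostationary orbital altitude
C_KM_PER_S = 299_792        # speed of light in vacuum

round_trip_ms = 4 * ALTITUDE_KM / C_KM_PER_S * 1_000
print(f"{round_trip_ms:.0f} ms")   # ~477 ms, before any processing delay
```

Those extra factors push real-world figures to roughly 500 ms or more, which is why geosynchronous links miss the sub-100-millisecond requirement by a wide margin.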

Where neither latency nor bandwidth is a particular concern, however, most two-way networks could find roles in supporting sensor communications, which are almost exclusively many-to-one (point-to-point, or sensor to server).

In other words, most two-way networks (excluding TV or radio broadcast networks and simple bent-pipe networks, such as satellite networks supporting TV distribution) can theoretically support some internet of things and machine-to-machine sensor networks.

Many of those apps are not latency dependent, nor do they require lots of bandwidth. Instead, the key real-world constraints are likely to be sensor network element cost and bandwidth cost (cost to move Mbytes).

That, in fact, is the battleground for mobile and low-power wide area networks. The argument has been that LPWANs could move sensor data at far lower cost than mobile networks, in addition to having a transponder cost advantage. Some note that is likely to change over time, with cost differentials narrowing substantially, if not completely.
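
A minimal sketch of that cost comparison follows, using entirely hypothetical per-megabyte rates (the 0.01 and 0.10 figures below are placeholders, not actual LPWAN or mobile tariffs):

```python
# Illustrative cost comparison for a sensor fleet. The per-megabyte rates
# are hypothetical placeholders chosen only to show how per-unit bandwidth
# cost dominates at fleet scale.

SENSORS = 10_000
MB_PER_SENSOR_MONTH = 1.0   # low-rate telemetry

def monthly_cost(price_per_mb):
    return SENSORS * MB_PER_SENSOR_MONTH * price_per_mb

print(f"LPWAN:  ${monthly_cost(0.01):,.2f}/month")   # $100.00
print(f"Mobile: ${monthly_cost(0.10):,.2f}/month")   # $1,000.00
```

An order-of-magnitude gap per megabyte translates directly into an order-of-magnitude gap in fleet operating cost, which is the whole argument; if the rates converge, so do the totals.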

One way to describe the unique role for 5G is to say that 5G will have unique advantages for real-time apps requiring ultra-low latency or ultra-high bandwidth. Autonomous driving is a good example of the former use case, while augmented reality and virtual reality apps are good examples of the latter, requiring both ultra-low latency and ultra-high bandwidth.

Mobile cloud-based enterprise apps might be an example of new use cases where ultra-high bandwidth is a requirement.

The point is that 5G and IoT use cases will hinge--as all apps running at scale do--on the architectural capabilities of various networks and the cost of communicating over those networks.

Non-real-time apps of any bandwidth can be handled by any number of networks. Content distribution arguably can be supported by both point-to-point and multicast (broadcast) networks.

But ultra-low-latency apps or ultra-high-bandwidth apps arguably require 5G (advanced 4G might work as well).

Low-bandwidth sensor networks arguably can be supported by almost any two-way network in a technology sense, but might vary based on cost-to-deploy and cost-to-use dimensions.

High bandwidth uplinks will work best on bi-directional networks with lots of capacity in the upstream direction, when such networks operate at scale. So long as actual demand is low or highly distributed, more networks could work.


Sunday, January 7, 2018

Telcos and Fintech, Blockchain

Caution is a reasonable attitude for most communications service providers to take toward blockchain-related or other fintech ventures, though baby steps already seem to be underway.

The best reasons for caution are based on history. “Telcos” in general have a poor track record of creating sustainable new app or platform businesses with scale, beyond their core access operations.

Also, blockchain is a potentially transformative financial technology (fintech) development, and tier-one telcos have in recent years tried to create a role for themselves in retail mobile payments, without much success.

Fintech generally includes a huge range of functions and applications, all of which essentially could disrupt banking and financial services:
  • Payments
  • E-commerce
  • Credit
  • Ordering
  • Insurance
  • Savings
  • Banking
  • Risk assessment
  • Accounting
  • Remittances
  • Corporate finance
  • Investing
  • Consumer lending
  • Mortgages
  • Crypto currency
  • Mobile wallets

That noted, some mobile payments and banking services have achieved moderate success. Mobile banking services have proven sustainable in several markets (small business lending, consumer remittances and payments) and countries. Africa, Scandinavia, Eastern Europe, India and Mexico are among regions where mobile operators have had success with mobile banking and payments.



But there have been big failures as well, mostly in developed countries, where telcos in recent years have failed to get much traction in mobile payments.

All that noted, for access providers that wish to survive and thrive, moving up the stack into new platforms, apps and services beyond connectivity is essential. If fintech, like the internet of things, proves to be a huge growth area, telcos are almost forced to consider how they will become a bigger part of those ecosystems.



Saturday, January 6, 2018

Share of Wallet Shifting to Devices, Apps

One consequence of the “telecom” industry now being part of the broader internet ecosystem is a shift in industry share of profits. In other words, within consumer spending on “telecom” products and services, share of wallet has moved to devices, apps and services.

And most of the consumer spending growth has shifted to devices and over-the-top apps (such as Apple and Netflix). In at least a few markets, the share gains by OTT services have been dramatic.

The other change, in some markets such as the United States, is a shift of market share from legacy providers to newer challengers (such as cable TV operators).




Era of Pervasive Computing Shapes Communications Revenue Drivers

Eras of computing matter for telecom professionals and the broader telecom industry for any number of reasons, but chief among the implications is that computing eras create and shape demand for communications.

The era of pervasive computing, which is likely to supplant the era of mobile computing, provides an example. At one level, the idea that computing devices will be embedded all around people implies communication as well. And since sensors and pervasive computing devices (things) will vastly outnumber people, that suggests a lot more communication connections.

But computing eras also shape other parts of life, such as who and what needs to communicate, over what distances, in what manner, how often and with what bandwidth requirements. Those issues in turn create potential demand for revenue-generating services, features and apps.

There are many ways to characterize eras of computing, but it is safe to say that the present era is the second of perhaps five eras where communications is essential for computing, since computing is largely accomplished remotely.

In other words, “everything” is networked and connected.



In the era of personal computing and use of the web, that mostly meant connecting PCs with remote computing facilities. In the cloud era, we reach a new stage where “most” applied computing tasks are partially, substantially or nearly completely conducted remotely, making communications a necessary part of computing.

In the present era, demand for communications to support computing has been driven by global adoption of mobility, plus mobile data, plus video and other internet apps.

In the next era, communications demand will be driven by internet of things sensors and other forms of pervasive computing.  For communications providers, that is the good news.

The bad news is that in the era of pervasive computing, not every instance of communications necessarily generates incremental revenue. We already see that with Wi-Fi, Bluetooth and other forms of local and short-distance communications.

Nor, in the pervasive era, is it possible for any access provider to directly profit from most of the applications that use a network. Potential revenue exists in increased demand for wide area communications and therefore local connections to such networks.

But the relationships are far from linear. Basically, incremental revenue grows less robustly than increased data usage, and threatens to grow far more slowly than network capital investment.
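
A toy model illustrates the nonlinearity, using assumed growth rates (30 percent annual traffic growth and three percent revenue growth are illustrative placeholders, not measured figures):

```python
# Toy model: traffic compounding faster than revenue means revenue per
# unit of traffic falls steadily. The growth rates are placeholders.

traffic_growth = 0.30    # assumed annual traffic growth
revenue_growth = 0.03    # assumed annual access revenue growth

traffic = revenue = 1.0  # indexed to year zero
for year in range(1, 6):
    traffic *= 1 + traffic_growth
    revenue *= 1 + revenue_growth
    print(f"Year {year}: revenue per unit of traffic = {revenue/traffic:.2f}")
# Drops to about 0.31 of the starting level after five years.
```

Under any assumptions of that general shape, revenue per delivered bit falls continuously even as total revenue inches upward, which is precisely the squeeze on network capital investment.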

That is among the key challenges for the “dumb pipe” internet access function. That is not to say the only revenue drivers are dumb pipe internet access. Access providers do provide owned applications (messaging, voice, video). But those legacy sources are either declining or morphing, with new suppliers providing effective substitutes.

That is why surviving retail suppliers must “move up the stack” into owned apps, platforms and services.

Was Negroponte Wrong?

Lots of things can change in four decades. The Negroponte Switch, for example, suggested that legacy networks made inefficient use of bandwidth. Broadband signals (at that time television) were moved through the air using wireless, while narrowband signals (at that point mostly voice) were moved using cables.

There was another angle, namely that mobile and personal endpoints (phones, at that time) were perversely limited to static fixed network connections, while devices that functioned in a “place-based” manner (television sets) were connected using wireless.

Prof. Negroponte argued we should do just the opposite, namely move narrowband and mobile signals over the air, and confine broadband to cables.

These days, the switch really is from cabled to wireless and mobile, since most traffic now is broadband, and increasingly that traffic is mobile and personal. By perhaps 2019, as much as two thirds of all data will use some form of untethered access (mobile, Wi-Fi or other wireless networks, short range or long range), Cisco has predicted.


Of course, assumptions matter. In the 1980s, it would have been impossible to foresee the huge explosion of mobile usage; the shift of TV consumption from place-based to mobile (and from linear to on-demand); the decline of fixed network voice; or the rise of the internet itself.

Nor would it have been possible to accurately foresee the impact of orders of magnitude decreases in the cost of computation and communications. Rather than a shift, we now see a move of virtually all communications to untethered modes.

These days, Wi-Fi is the preferred local connection technology in the office, home and indoor venues. Outdoors and on the go, mobile connections are the norm.

In the new developing areas, such as internet of things apps and sensors, untethered access also is expected to be the norm, not fixed access.

Negroponte was correct--within the limits of networks and costs at the time--in suggesting a shift of broadband to cables and narrowband to wireless.  

Some 40 years later, all media types are moving to untethered access. That is the result of mobile devices emerging as the dominant end user devices, the growth of Wi-Fi as the universal endpoint connection method, the impact of Moore’s Law on computing and communications costs, the growth of the internet and ever-new ways to use communications spectrum more efficiently.

In the case of millimeter wave and spectrum aggregation, cheap computing means we can use bandwidth assets that were impractical in the past.

Computing power that would have cost $100 million in 1980 cost about $100 in 2010, and less than that in 2017. In other words, costs have dropped at least six orders of magnitude.
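
A quick check of that arithmetic, using the figures just cited:

```python
# The cited figures imply the size of the decline: $100 million down to
# $100 is a factor of one million, i.e. six orders of magnitude.
from math import log10

print(log10(100_000_000 / 100))   # 6.0
```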

Predictions about the future always are perilous. What we have seen is partly a switch, but more profoundly an upheaval. Increasingly, the untethered and mobile networks are for actual access. The fixed network is--in the realm of consumer services--a backhaul network.

Friday, January 5, 2018

Technologies of Freedom

It now is a given that media, communications, content and delivery are converging, erasing former clear lines between industries and functions. That has important consequences.

And some ideas, even if abstract, really do matter, in that regard. Freedom, responsibility and fairness always come to mind, given my academic training in journalism. It now appears, for a variety of reasons having nothing much to do with those ideas, that freedom is imperiled.

Ironically, it is the leading app providers that now face threats to their freedom, as there are growing calls to “regulate” them in greater ways, globally.

Let me be clear: my own position has for decades been that more freedom for all in the ecosystem works best, and is the preferred approach to policy. Those of you who ever have read Technologies of Freedom will understand why.

Responsibility and fairness also are requirements, but they are things that have to happen at the personal, firm and industry level. Yes, this can be done “to” people, firms and industries, by government fiat. But freedom is the preferred course.

In a world where formerly-distinct endeavors and industries really are converging, we have a choice: extend freedom to former highly-regulated entities who now operate in entirely-new realms where freedom is the policy (First Amendment protections), or remove freedom from content providers and make them “utilities.”

The bigger challenge right now is getting the transition right. Somehow, we need to balance regulatory models, away from “utility” and “common carrier” regulation for app providers, but also away from such regulation for firms that now participate in activities that increasingly are inseparable from traditional First Amendment protected ideas, content and media.

At the same time, major app providers already operate as “access providers,” though without the obligations imposed on only a few access providers.

Some now are arguing essentially for “less freedom” for Facebook, Google, Amazon and others, and “even less freedom” for access providers who--despite becoming content providers at their core--deserve freedom no less than any other content provider.

The better policy is to extend the realm of freedom further, not restrict it. In other words, when harmonization is required, it is better to extend freedom to formerly-distinct industries (broadcast TV and radio, cable TV and other distribution entities, even telcos).

Yes, debates about First Amendment protections are abstract. But they are fundamental and consequential, when our old ways of regulating (freedom for media; some regulation for broadcast; common carrier for telcos) need changing, as the spheres converge.

We can take away freedom, or we can extend it. As argued in Technologies of Freedom, more freedom is the better course.

Directv-Dish Merger Fails

Directv’s termination of its deal to merge with EchoStar, apparently because EchoStar bondholders did not approve, means EchoStar continue...