Tuesday, December 19, 2017

IoT Networks Will Be Many-to-One

The architecture of coming internet of things networks is going to be radically different from the voice, consumer internet or business data network architectures. Those networks were built to support “point-to-point” communications from any node to any other node.

Irrespective of the access mechanism (mobile, Wi-Fi, other), the fundamental networking architecture of a sensor network (and therefore of the internet of things) is “sensor to server,” not “point-to-point,” as was the architecture of the voice network or the early internet.

That is going to have consequences. The IoT networks will be the inverse of the point-to-multipoint satellite TV, cable TV, radio and TV broadcast networks.

In contrast with point-to-point, any-to-any networks, these broadcast or multicast networks are optimized for one-to-many communications in the downstream direction.

IoT networks will be optimized for many-to-one communications upstream to servers. If there are scores of billions of devices using such networks, then many-to-one networks will be optimized for collecting data from remote locations, not for distributing information to many remote locations, and not for any-to-any communications.

That is new, really new.

Much of the upstream data will be simple: where are you; what is the temperature; what is the pressure; what is your speed. Some of the data might require more bandwidth, supporting camera feeds, for example. But almost none of the requirement is for downstream communications to sensors.
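As a concrete sketch of what that upstream traffic can look like, consider the minimal example below. The collector address, transport choice (a single UDP datagram) and field names are hypothetical; real deployments more often use MQTT, CoAP or HTTP against a platform’s ingest API. The pattern is the same either way: a small, self-describing report sent one way, sensor to server.

```python
import json
import socket
import time

# Hypothetical collector address; a real deployment would point at an IoT
# platform's ingest endpoint (MQTT, CoAP or HTTP are also common choices).
COLLECTOR_HOST = "127.0.0.1"
COLLECTOR_PORT = 9000

def read_sensor():
    """Stand-in for real sensor reads: location, temperature, pressure, speed."""
    return {
        "device_id": "sensor-0042",
        "ts": int(time.time()),
        "lat": 39.7392,
        "lon": -104.9903,
        "temp_c": 21.5,
        "pressure_kpa": 101.3,
        "speed_kmh": 0.0,
    }

def send_reading():
    payload = json.dumps(read_sensor()).encode("utf-8")
    # One small datagram sent upstream; no downstream reply is expected.
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(payload, (COLLECTOR_HOST, COLLECTOR_PORT))
    return len(payload)

if __name__ == "__main__":
    print(f"sent {send_reading()} bytes upstream")
```

A payload of a few hundred bytes, sent occasionally, is typical of the “where are you, what is the temperature” class of traffic; only richer feeds such as cameras change that picture.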




Monday, December 18, 2017

How Big Will Satellite IoT Be?

All of the executives from mid-size to enterprise agriculture firms surveyed by Vanson Bourne on behalf of Inmarsat believe they will be using internet of things apps and services within about five years.

Some 85 percent of executives in transportation industries believe they will be using IoT apps and services within five years, and nearly that many in the energy business. In the mining business, perhaps 65 percent of respondents believe they will be using IoT in five years.

You are not likely surprised by such findings. Given the hype around internet of things, it would be a rare executive who thought it “sounded right” to project no use of such capabilities.

Neither are you likely surprised that Inmarsat might sponsor such research. No less than other segments of the communications industry, satellite firms argue they will be relevant for IoT, largely because of coverage capabilities. Basically, that is the newest form of the “no one platform is best for all scenarios” argument.

It remains to be seen which satellite implementations are workable for which IoT apps.

“Despite growing demand for IoT satellite services, the business case for IoT exclusive satellite constellations has yet to be proven, especially considering the exponential growth of LPWANs, LTE-M and NB-IoT terrestrial networks for IoT,” says Alan Weissberger of the IEEE ComSoc blog.

Obviously, geostationary platforms are going to have huge problems supporting ultra-low latency apps. Low earth orbit (LEO) constellations arguably will fit a wider range of missions. Low-bandwidth, non-time-sensitive apps might fit best, for any satellite platforms.
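The latency gap is largely propagation physics. A back-of-the-envelope sketch, assuming a simple bent-pipe hop at roughly 35,786 km altitude for GEO and roughly 550 km for a typical LEO broadband constellation, and ignoring processing, queuing and inter-satellite links:

```python
SPEED_OF_LIGHT_KM_S = 299_792

def bent_pipe_delay_ms(altitude_km: float) -> float:
    # Ground -> satellite -> ground, straight up and down, propagation only.
    return 2 * altitude_km / SPEED_OF_LIGHT_KM_S * 1000

for name, altitude_km in [("GEO (~35,786 km)", 35_786), ("LEO (~550 km)", 550)]:
    print(f"{name}: ~{bent_pipe_delay_ms(altitude_km):.0f} ms one-way")
# GEO (~35,786 km): ~239 ms one-way
# LEO (~550 km): ~4 ms one-way
```

Roughly a quarter-second each way before any processing is why GEO is a poor fit for latency-sensitive IoT, while LEO is at least in the running.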

Satellite market researcher NSR predicts transportation will be the biggest application.


Broadcast (multicast) has been the geostationary satellite network’s fundamental architecture, so multicast apps will make sense, though that is perhaps not an IoT app at all. In other cases, some argue satellite has a role for connected car communications, possibly for multicast use cases.

Some will note that satellite presently serves a niche role in end user communications, serving ships at sea and remote areas where other networks do not reach. That is likely to be the case for most geostationary satellite future apps as well. It is not yet clear how big a role LEO constellations will have.

Expect to hear a lot of promotion from the satellite industry about that industry’s role in internet of things. It will be small, it is safe to say. Satellite researcher NSR projects satellite industry revenue from IoT at less than $3 billion globally by 2026. And that might be highly optimistic.



Some estimate the global IoT connectivity market will be an insignificant fraction of total IoT revenues by about 2025. NSR projects global satellite revenues from IoT might reach about $3.5 billion by 2026.

In that same year, total global satellite connectivity revenues might be less than $20 billion. So IoT would represent roughly 17 percent to 18 percent of total satellite industry revenues.
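A quick back-of-the-envelope check of that share, using the figures cited above (since the $20 billion total is an upper bound, the actual share would be at least this large):

```python
# Back-of-the-envelope check of the share implied by the figures above.
satellite_iot_revenue_2026 = 3.5e9   # NSR: ~$3.5 billion from IoT by 2026
total_satellite_revenue_2026 = 20e9  # "less than $20 billion" overall

share = satellite_iot_revenue_2026 / total_satellite_revenue_2026
print(f"IoT share of satellite revenue: ~{share:.1%}")  # ~17.5%
```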

By about 2025, some estimate mobile operator IoT connection revenue of $22 billion. Total global communications service provider revenues should be about US$1 trillion by then.

Also, projections for service provider enterprise connections must contend with the possible use of Wi-Fi or other access mechanisms which do not incrementally increase connection revenue in a direct sense, if at all.

The bottom line is that satellite IoT is at present as uncertain as mobile or fixed network IoT connection revenue might be.

Sunday, December 17, 2017

"Can't See Forest for Trees" Problem

It is perhaps an understatement to say that “5G” has business model issues. We often think of such challenges in terms of revenue, profit and value, but there is more. Many would argue that the upside to 5G will come from new applications and use cases substantially related to use of such networks by machines, servers and sensors, not human beings connecting smartphones, PCs, tablets, TVs or personal devices.

There is a perhaps-larger internet of things story to play out over the next couple of decades, involving business processes that happen without direct human intervention.

That is significant because the incremental revenues mobile operators can create will come from sensors and computers that need to retrieve and analyze data, not primarily from humans who want to talk, text, access the internet or watch video.


Sure, we all hear phrases such as “fourth industrial revolution” that are built on the changes. But that does not actually matter to most people, doing their actual jobs, right now. And that makes it tough to create compelling and insightful content that is deemed relevant.

Among the problems is that a complex set of changes to various ecosystems will happen, building on internet apps, internet access, 5G and other access technologies, cloud computing, big data analytics, robotics, machine learning and artificial intelligence. Others might include biotechnology, materials science and other developments as part of the fourth industrial revolution as well.  

As a normal matter of course, nobody really has a job description embracing all that, just as virtually nobody had job responsibilities directly related to the evolution from agriculture to industrial production, or from industrial production to a services economy.

So any narrower focus on any one of those areas necessarily misses the broader economic or business impact of the combined set of technologies, processes and capabilities.

When you spend a lot of time organizing conference content, those sorts of big changes are hard to operationalize, since almost nobody actually has a job function directly related to such big changes in business processes.

In fact, our efforts to make content more sensible by breaking it up into constituent parts (5G, cloud computing, AI, big data, internet of things) actually obscure the broader changes.

To complicate matters even more, the changes might only appear first in developed markets, where mobile data and mobile services have ceased to be growth drivers. In developing markets, revenue growth will continue to be seen in mobile subscriptions and mobile data, for some time.

And even there, applications, not access, will drive revenue growth. In other words, 5G will be the first mobile network platform whose business model will live or die based on new apps and use cases for non-human applications, mostly created by third parties.

Those are fundamental shifts. And yet almost nobody has a specific job function related to such changes. That makes it difficult to create relevant content that also shows the way forward.





Friday, December 15, 2017

FTC Has Argued for a "Wait and See" Policy on Net Neutrality

The end of common carrier regulation of internet access returns oversight over internet access services to the Federal Trade Commission, which oversees, with the Department of Justice, antitrust matters and consumer protection.

It is worth noting, though the fact is rarely mentioned in reporting on the change, that the specific change made by the FCC is the end of Title II common carrier regulation of internet access.

Under “normal” circumstances, virtually all providers in the communications business would prefer lighter-touch regulation to common carrier regulation, so support for that change is hardly unusual.

The end of common carrier regulation is said to represent the end of “network neutrality.” That is a complicated matter. In principle, the change means that consumer internet access once again becomes a lightly-regulated data service.

Since common carrier regulation was used as the justification for imposing “best effort only” (first in, first out) handling of consumer internet traffic, that prohibition on prioritization--imposed by common carrier rules--now is lifted.

Assuming Congress does not specifically request that the FCC reimpose the rules, ISPs are free to consider quality of service mechanisms for “internet” access, in the same way they are free to implement QoS for business grades of service.
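For readers who want the mechanics, the difference is essentially first-in, first-out handling versus some form of priority scheduling. Below is a toy sketch of that contrast; the application names and priority values are invented for illustration, and this is not any particular ISP’s mechanism.

```python
import heapq
from collections import deque

def fifo_drain(packets):
    """Best-effort handling: packets leave in arrival order, regardless of app."""
    queue = deque(packets)
    return [queue.popleft() for _ in range(len(queue))]

def priority_drain(packets):
    """Prioritized handling: lowest priority number first; arrival order breaks ties."""
    heap = [(prio, seq, app) for seq, (app, prio) in enumerate(packets)]
    heapq.heapify(heap)
    return [(app, prio) for prio, _, app in (heapq.heappop(heap) for _ in range(len(heap)))]

# (app, priority) pairs in arrival order; names and priorities are invented.
arrivals = [("email", 2), ("game", 0), ("video", 1), ("backup", 3), ("voice", 0)]
print(fifo_drain(arrivals))      # leaves in arrival order
print(priority_drain(arrivals))  # voice and game first, backup last
```

Business-grade services already work the second way; the policy question is whether consumer internet access may as well.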

That worries net neutrality supporters.

So it is worthwhile to recall what the FTC said about its views on competition policy back in 2007, before the move to regulate internet access under common carrier rules, something that never was government policy until 2015.

Basically, the FTC notes the dangers and advantages, and essentially argues regulators should wait and see what happens, as moving quickly, in a fast-changing environment, essentially means regulating without actual facts to consider.

“It is impossible to determine in the abstract whether allowing content and applications providers (or even end users) to pay broadband providers for prioritized data transmission will be beneficial or harmful to consumer welfare,” the FTC said. “Such prioritization may provide benefits, such as increased investment and innovation in networks and improved quality of certain content and applications that require higher-quality data transmission.”

The FTC noted that network neutrality proponents raise important issues, ranging from a diminution in innovation by content and applications providers, to the intentional or passive degradation of non-prioritized data delivery, to increased transaction costs resulting from negotiations between broadband providers and content and applications providers over prioritization.

But no a priori assessment can be made, the FTC argued. Instead, “evidence of particular conduct” is necessary to determine whether there is harm.

Thursday, December 14, 2017

Why Net Neutrality Discussion is So Difficult

There is a good reason why some find the “network neutrality” discussion so frustrating.

“The most confounding aspect of the contemporary net neutrality discussion to me is the social meanings that the concept has taken on,” says Gus Hurwitz, a professor at the Nebraska College of Law.

“These meanings are entirely detached from the substance of the debate, but have come to define popular conceptions of what net neutrality means,” Hurwitz notes. “They are, as best I can tell, wholly unassailable, in the sense that one cannot engage with them.”

“The most notable aspect is that net neutrality has become a social justice cause,” says Hurwitz. “Progressive activist groups of all stripes have come to believe that net neutrality is essential to and allied with their causes.”

One might argue that such weaponization of an issue is unfortunate. Network neutrality “raises important questions about the nature of regulation and the administrative state in complex technical settings,” he adds.

Beyond that, net neutrality as a concept is very complicated, despite the effort to paint it in simple, caricatured ways. Where reasonable network management ends and “blocking” of lawful content begins is often impossible to distinguish clearly.

Longtime observers of communication networks will tend to agree that no network, as a practical matter, is engineered to support any conceivable level of traffic. That is too costly. Instead, networks are built to support expected, typical peak traffic. That necessarily means spikes in traffic can tax any network.

When that happens, congestion occurs, and user experience degrades. In the old voice network, in fact, actual blocking of access was among the tools used to preserve network performance under unusual load. “I’m sorry, all circuits are busy now; please try your call again later” was a message users sometimes heard at such times.
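Traffic engineers sized those trunk groups with the Erlang B formula, which gives the probability that a new call finds “all circuits busy” for a given offered load and number of circuits. A minimal sketch, with illustrative numbers only:

```python
def erlang_b(offered_load_erlangs: float, circuits: int) -> float:
    """Probability a new call is blocked ("all circuits are busy")."""
    b = 1.0
    for n in range(1, circuits + 1):
        b = (offered_load_erlangs * b) / (n + offered_load_erlangs * b)
    return b

# Illustrative numbers: a trunk group sized for about 1 percent blocking at
# its expected busy-hour load sees far more blocking when load doubles.
print(f"{erlang_b(20, 30):.1%} blocked at expected load")   # ~0.8%
print(f"{erlang_b(40, 30):.1%} blocked when load doubles")  # ~30%
```

The point: networks are engineered for expected peak load, so an unexpected spike pushes blocking from negligible to severe very quickly.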

Keep in mind, actual blocking of attempted lawful communication was among the tools used to manage traffic. Network neutrality principles actually do not permit such practices. The Federal Communications Commission has uniformly acted to preserve consumer access to all lawful applications, without any blocking or interference.

Among the greatest “threats” posed by the end of common carrier regulation is “paid prioritization,” the practice whereby app providers pay transport networks for better quality of service.

The problem, rarely mentioned, is that “almost all of today’s big content providers--the Googles and Netflixes--have invested massively in content delivery networks,” Hurwitz notes. “These are networks that allow their content to bypass almost the entire Internet, dramatically improving performance. In other words, they have already paid for prioritization.”

Latency control is a technical means for improving end user quality of experience, and a routine way of optimizing content and application access. If anybody opposes such quality of service mechanisms, few of us have heard about it.

Ironically, in an internet access market that is growing more competitive, where internet service providers are investing to improve speed and other elements of the experience, the end of common carrier regulation is supposed to lead--inevitably--to major ISPs intentionally degrading their performance, relative to peers, to stoke demand for CDN-style QoS.

Competition is a mechanism that controls such behavior, to the extent that better QoS (with some mechanism for maintaining it) is viewed as a “bad thing.”

Some argue there is rent seeking at work here. Rent seeking is the generation of revenue surplus by firms with market power. What is not clear, many ask, is whether rent seeking is happening, and if so, “by whom.”

By one common definition, rent seeking is attempted by virtually every entity in the economy.

“One of the most common ways companies in the 21st century engage in rent seeking is by using their capital to contribute to politicians who influence the laws and regulations that govern an industry and how government subsidies are distributed within,” says Investopedia.

In that sense, every industry (and other groups such as trade unions) attempts rent seeking. So rent seeking is a rather useless concept when it comes to analyzing industry behavior: any industry’s behavior.

AT&T Expects to Have 12.5 Million New FTTH Passings by Mid-2019

AT&T says it now markets its AT&T Fiber (fiber to the home) service to more than seven million locations across 67 metropolitan areas and 21 states.

AT&T plans to reach at least 12.5 million locations by mid-2019. In part, some will argue, that deployment is driven by terms of the Federal Communications Commission clearance of the AT&T acquisition of DirecTV.

Deal approval includes the obligation to deploy optical fiber to 12.5 million new homes. Some might argue (rightly, perhaps) that AT&T will have to stretch to hit that target.

It has taken roughly four years to add seven million locations. AT&T has to add 5.5 million more locations in less than two more years. What that means for AT&T’s capital investment budget is clear: it has had to boost capex beyond prior years.

Some have argued that reaching that level of fiber-to-home household coverage would be difficult. But AT&T has boosted its capital investment beyond former levels, now spending about $6 billion a quarter on capex.

AT&T has allocated as much as $22 billion in 2017 annual capex, in part to build out the FirstNet emergency responder network.

Some observers will worry about what that level of capex means for overall financial performance, including debt load. But, so far, AT&T seems to have handled it.

Video Entertainment "Market" Now Smashes Separate Regulatory Walls between "Content and Apps" and "Delivery"

The new move by T-Mobile US into the video streaming business is portrayed by the company itself, and by news reports, as representing competition with cable TV.

“The Un-carrier will build TV for people who love TV but are tired of the multi-year service contracts, confusing sky-high bills, exploding bundles, clunky technologies, outdated UIs, closed systems and lousy customer service of today’s traditional TV providers,” said John Legere, T-Mobile US CEO.

A few reports correctly described the service as a streaming offer more akin to over-the-top services offered by AT&T, Dish Network, YouTube and Netflix.

But that might be quite the point. T-Mobile US itself describes the offer as a move into the $100 billion subscription TV market dominated by cable and telco suppliers.

“We’re in the midst of the Golden Age of TV, and yet people have never been more frustrated by the status quo created by Big Cable and Satellite TV,” said Mike Sievert, Chief Operating Officer of T-Mobile.

The over the top service represents the “successor” service to linear TV, virtually all observers agree. That is why Disney is launching its own retail streaming services, for example.

And that is perhaps among the most-important ramifications of the move. In the application business--including the application businesses traditionally operated by telcos and cable TV--app delivery has been decoupled from the use of access networks.

Relevant competition for cable TV includes satellite and telco services, but also DirecTV Now, Netflix, Amazon Prime, Hulu, Sling TV and other services, with additional competition coming from Facebook, YouTube and many social networking apps.

In other words, the traditional regulatory distinction between unregulated “content or data services” and regulated access service providers is evaporating. Netflix and others create their own content, bundle and license content and deliver that content.

That makes Netflix (if not a “perfect” substitute) a rival for linear TV subscriptions. The move by T-Mobile US into the OTT video subscription business represents that evolution.

Streaming services might be owned by app providers (social media, YouTube), commerce providers (Amazon), content studios (Sony, Hulu), or distributors (AT&T, Dish Network, Verizon, Comcast).

Whoever owns the assets, the new reality is that content creation, packaging and delivery now is becoming independent of the access mechanism. That will--or should--eventually have regulatory implications of major scope.

Defining the scope of a “market” now is more complicated--and much broader--than it once was.

AI Will Improve Productivity, But That is Not the Biggest Possible Change

Many would note that the internet’s impact on content media has been profound, boosting social and online media at the expense of linear form...