Many observers suggest the connectivity industry will have to become more disaggregated in the future, unbundling and exposing many features of a connectivity network and allowing third parties access to those features.
That is usually presented as a good thing: a chance to increase value, boost revenues and encourage innovation.
Others might argue that disaggregation, something that has been increasing for at least a couple of decades, has had contradictory effects. Innovation has increased, but arguably mostly from third-party apps and firms that can now reach users and customers directly and globally (so long as the apps are lawful), without any formal need for a business relationship with a connectivity provider.
One way of illustrating that trend is the “open” way new apps and services are created on IP networks. Since no direct permission is required from the connectivity provider, development by third parties can proceed in a “permissionless” manner.
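That "permissionless" pattern can be sketched in a few lines of code: a third party stands up a service over plain TCP/IP sockets, and the connectivity provider simply forwards packets. The host, port and message below are arbitrary illustration values, not from any real deployment.

```python
# Minimal sketch of "permissionless" app development: a third party
# runs a service over plain TCP/IP sockets. The connectivity provider
# just forwards packets; no business relationship is needed.
import socket
import threading
import time

HOST, PORT = "127.0.0.1", 5050  # arbitrary illustration values

def serve_once():
    """Accept one connection and send a greeting."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((HOST, PORT))
        srv.listen(1)
        conn, _addr = srv.accept()
        with conn:
            conn.sendall(b"hello from a permissionless app\n")

def fetch(attempts=20):
    """Connect as a client, retrying briefly until the server is up."""
    for _ in range(attempts):
        try:
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
                cli.connect((HOST, PORT))
                return cli.recv(1024)
        except ConnectionRefusedError:
            time.sleep(0.05)
    raise RuntimeError("server never came up")

t = threading.Thread(target=serve_once)
t.start()
reply = fetch()
t.join()
print(reply.decode().strip())
```

Nothing in this exchange involves the access network operator beyond packet forwarding, which is the point of the "open" development model.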
Value, on the other hand, has arguably been a negative for connectivity providers: legacy provider gross revenues have dropped, profit margins have been squeezed, average revenue per user has declined and over-the-top delivery has become the foundation of industry app creation and operation.
The term “dumb pipe” is a mostly-accurate description of how TCP/IP, virtual machines, containers and software objects are supposed to operate. The physical infrastructure is logically separate from apps that use such networks.
And many would argue that multiple types of disaggregation have been at work in the connectivity industry, many with negative consequences for the perceived value of connectivity assets. While observers often argue that “internet access” (connectivity) now is “essential,” those observers also note that connectivity is increasingly a “basic utility,” akin to electricity, roads, wastewater services, natural gas and other infrastructure.
Utilities are characterized by slow growth but predictable cash flow. Also, under the “separation” model, innovation happens higher up in the functional stack. Electricity providers do not innovate.
Device and appliance manufacturers, software developers and others are the entities that create new products using electricity. In the same way, it is third parties that drive innovation in products that require internet access, not the suppliers of access.
The same basic observation can be made of many forms of disaggregation that continue to develop. Most forms of disaggregation have limited connectivity providers' ability to control and profit from new and popular apps.
The one obvious departure is network virtualization, which has the most benefits for connectivity providers, in the form of lower capital, infrastructure and operating costs, along with faster network innovation.
Nor is disaggregation or virtualization a terribly new practice. Some might compare virtualization or disaggregation to the way wholesale networks unbundle parts of the access network, from conduits and other passive infrastructure to various combinations of passive and active functions.
Functional layering, object-oriented programming and containers in modern software design also are forms of disaggregation and virtualization. The whole point is to enable modular functions.
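That modularity can be illustrated with a short sketch: an application layer is written against an abstract transport interface, so any conforming "pipe" can be swapped in without touching application code. The class names below are invented for illustration, not taken from any real networking stack.

```python
# Illustrative sketch of modular layering: the app depends only on
# an abstract Transport interface, never on a concrete "pipe".
import zlib
from abc import ABC, abstractmethod

class Transport(ABC):
    """The 'dumb pipe': moves bytes, knows nothing about the app."""
    @abstractmethod
    def send(self, payload: bytes) -> bytes: ...

class LoopbackTransport(Transport):
    """Trivial stand-in for a physical link: echoes the payload."""
    def send(self, payload: bytes) -> bytes:
        return payload

class ZlibTransport(Transport):
    """A different 'link' (compresses in flight); same interface."""
    def send(self, payload: bytes) -> bytes:
        return zlib.decompress(zlib.compress(payload))

class App:
    """Application layer: written once, runs over any Transport."""
    def __init__(self, transport: Transport):
        self.transport = transport

    def greet(self, name: str) -> str:
        return self.transport.send(f"hello, {name}".encode()).decode()

# The same application code works unchanged over either transport.
for transport in (LoopbackTransport(), ZlibTransport()):
    print(App(transport).greet("world"))
```

The application never learns which transport carried its bytes, which is the same logical separation the "dumb pipe" framing describes.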
So when observers call for connectivity providers to disaggregate, they are calling for a modular, building block approach to monetizing network functions.
It is fair to point out that the global connectivity industry embraced disaggregation of its business model a couple of decades ago, when it decided that Ethernet and TCP/IP, not asynchronous transfer mode (ATM), were the next-generation network. Ethernet and IP are more open than ATM, by far.
The main observation is that the argument about disaggregation and virtualization is normally presented as a “good thing” for connectivity service providers. On the contrary, it often has been a “not so good” thing for connectivity provider revenue and profit.
Disaggregation has allowed third parties to reap most of the rewards of faster innovation and application development.
All modern connectivity networks, from wide area networks to local area networks (Wi-Fi and others), are built as computer networks. And what is one notable characteristic of computer networks? The data transport can be logically separated from the apps, revenue models and use cases.
The cost of data transport and networking is lower because of virtualization. Indeed, one argument always advanced for choosing TCP/IP rather than ATM was the vastly lower cost of IP connectors and devices compared to ATM connection alternatives. The argument often was presented as a debate over layer two (data link layer) protocols. It actually was a debate about value and cost.
Basically, the argument between the computing industry and the telecom industry about the “best” next-generation wide area network was won by the computing industry. That Bellhead versus Nethead debate, which raged in the mid-1990s, implicitly involved choices about network architecture, which had implications for network control or freedom.
The whole existence of the term “over the top” captures the business implications of a decision about layer two networking protocols.