Thursday, May 12, 2022

The Internet Already is "Balkanized"

International communications have never been truly borderless; they have always crossed borders in controlled ways. Both governments and service providers have rules governing how and when communications can cross borders. 


Oddly enough, there are more barriers in the internet era, in large part because governments have always controlled the movement of content across borders.


So regulation might be a bit different in the internet era, but it has not gone away, by any means. In fact, more barriers exist, compared to the days when only voice crossed borders. 


Governments routinely block some content, a process that has been obvious for some time.


source: Freedom House 


The internet is not borderless.


How Will We Know FTTH and 5G Have Succeeded?

How will we know either fiber to the home or 5G has been successful? As with anything else, the answer depends on what one hoped to achieve. And there are at least two possible answers: a minimum case and a best case.


For policymakers, either FTTH or 5G policy succeeds once the networks are built. Even for service providers, FTTH and 5G succeed if they enable fixed and mobile network operators to remain in business: they are competitive with other service providers, for example.


Policymakers can claim victory if the networks are built. Nothing else really has to happen.


Service providers win if they are not driven out of business; if they maintain or slightly increase revenues; if they maintain profitability.


Infrastructure providers win as they are able to fuel sales for one more generation of networks. App providers win as the new networks help create more room for feature and performance advantages.


Consumers win if their experience includes lower latency and higher speeds, for about the same price as the older platforms supplied.


There are likely to be some new use cases and revenues, to be sure. But none of those use cases will match the importance of the "minimum" outcomes outlined above.


One must ask what the “real value” is from several vantage points. Most fundamentally, mobile operators need more capacity to keep up with growing customer data consumption, only some of which can be satisfied by the other traditional mechanisms of creating smaller cells, using better radios, offloading traffic and shaping demand. 


From a mobile operator’s perspective, 5G arguably succeeds if it simply allows operators to keep pace with end user data demand, while operating with higher cost-per-bit efficiency.


In other words, 5G succeeds if it allows mobile operators to supply more data at about the same retail price. Some might complain about the need to keep reinvesting, but that is foundational to the business. 


About every decade, we have found, a new block of spectrum is required and a new network has to be built. 


In part, that is because of saturation of the available capacity on the existing networks and in part because technology advances typically provide an order of magnitude improvement in capacity and latency performance with every next-generation network. 
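As a back-of-the-envelope illustration of that point (the capacity and cost figures below are hypothetical, not drawn from any operator data), a 10x capacity gain per generation implies sharply falling cost per bit even as total network cost rises:

```python
# Hypothetical figures: each generation delivers roughly a 10x capacity
# gain, while total network cost grows much more slowly. Cost per bit
# therefore falls sharply with each next-generation network.
generations = {
    "3G": {"relative_capacity": 1, "relative_cost": 1.0},
    "4G": {"relative_capacity": 10, "relative_cost": 1.5},
    "5G": {"relative_capacity": 100, "relative_cost": 2.5},
}

for name, g in generations.items():
    cost_per_bit = g["relative_cost"] / g["relative_capacity"]
    print(f"{name}: relative cost per bit = {cost_per_bit:.3f}")
# Even with cost rising 2.5x, cost per bit falls from 1.000 (3G) to 0.025 (5G).
```

The specific ratios are illustrative; the point is only that an order-of-magnitude capacity gain swamps any plausible growth in network cost.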


All that helps mobile operators maintain or improve their core businesses in terms of revenue, profit margins, equity prices and market share. Fundamentally, 5G is about “keeping your business.” 


Of course, there are other ways to look at outcomes. How do mobile operators and others in the untethered ecosystem convince regulators to release additional spectrum? Economic growth; protection of supply chains; support for domestic industries; consumer and business benefits; educational and social benefits and jobs are outcomes governments want, so mobile operators almost have to argue that 5G supports such outcomes. 


Perhaps looking at 4G will help with the assessment. Questions about payback were relatively frequently expressed early in the global rollout of 4G mobile networks, even as proponents emphasized possible new applications. 


source: TechTrained 


Capacity and lower cost per bit always seem to be among the key values of any next-generation mobile network, even if such platforms are pitched as “new application” drivers. The reasons are fairly pedestrian. 


To convince regulators to allocate significant new spectrum, mobile advocates must emphasize all the potential benefits for economic growth, national security, innovation leadership and so forth. 


It simply does not sell well to argue “you need to give us more spectrum so our business models remain intact” or “so we can lower our cost per bit.” 


Nor should we discount the eventual emergence of new apps and use cases made possible because of lower latency and faster speeds. 


But we do not need 5G to do anything more than sustain the existing mobile business model to claim success. All that other hype about new apps is simply the best argument mobile operators have for convincing regulators to release more spectrum. 


Infrastructure suppliers, likewise, have every incentive to convince mobile service providers that they are in danger of falling behind, or missing out on revenue growth, if they do not upgrade. 


In such circumstances, we can simply assume that 5G succeeds if mobile operators still have a positive business case. Everything else is nice, but not essential in the short term.


Rural Broadband Quality is an Issue, if Not the Issue Policymakers Often Suggest

A survey conducted by the U.K.-based National Innovation Centre for Rural Enterprise (NICRE) confirms what virtually everyone would acknowledge: rural broadband networks are perceived to be worse than urban networks in terms of speed, for example. NICRE also argues that inferior broadband reduces rural firm resilience.


source: NICRE 


The obvious solution to this problem is to improve the quality of rural broadband. That should be done, of course. But the claimed upside from such improvements is hard to quantify. 


The problem is that rural areas virtually always have more of some attributes and less of other attributes that correlate with economic growth. 


Rural areas have fewer businesses, fewer jobs, lower wages and longer transport distances to urban areas. That lower density of activity is directly reflected in opportunities to foster economic growth.


source: International Labor Organization 


Rural areas also tend to have lower average household incomes, lower educational attainment and older average age profiles. All of those attributes are correlated with lower use of technology in general. 


The point is that rural area broadband quality does tend to be lower than what is found in urban areas. But so are most other measures of economic activity. Even if rural broadband were, in every respect, identical to what is found in urban areas, those other correlations would continue to exist. 


Even if quality broadband were to eliminate distance issues for virtual goods, it would not eliminate distance as it pertains to physical goods. And distance would still shape the ability of rural areas to improve economic growth. 


Nobody would likely argue that quality broadband makes rural life worse. Policymakers always argue quality broadband makes economic growth or rural life better. But it remains plausible that even quality broadband will do little to boost economic growth in rural areas.


The reason is that the drivers of rural underdevelopment are not created, nor significantly mitigated, by broadband quality. Low population density, logistical distance, gaps in educational attainment, lower household wealth and the movement of young people to cities all combine to limit growth opportunities. 


Better broadband is not going to change that much, if at all.


Wednesday, May 11, 2022

Pareto Principle Suggests Where and Why Millimeter Wave Spectrum Will be Useful

Pareto distributions--often colloquially referred to as the “80/20 rule”--are common in business, technology and nature. The rule suggests that roughly 80 percent of value or outcomes are generated by about 20 percent of actions. Formally, it is the Pareto principle.

Virtually nobody would be surprised if told that the highest data demand in the U.K. mobile services market comes from areas such as London, Manchester or Glasgow, which are major population centers. 


What might be more surprising is that cell site data demand is about as disparate as the population data would suggest. According to Ofcom, the U.K. communications regulatory body, the largest 20 cities, containing 32 percent of the total U.K. population, cover about 2.4 percent of the surface area. 


source: Ofcom 
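Quick arithmetic on the Ofcom figures cited above shows just how skewed the density is:

```python
# Ofcom figures cited above: the 20 largest U.K. cities hold 32 percent
# of the population on about 2.4 percent of the surface area.
pop_share = 0.32
area_share = 0.024

# Population density of the large cities relative to the rest of the country.
city_density = pop_share / area_share             # ~13.3x the national average
rest_density = (1 - pop_share) / (1 - area_share)
ratio = city_density / rest_density

print(f"Large cities are about {ratio:.0f}x as dense as the rest of the U.K.")
# Large cities are about 19x as dense as the rest of the U.K.
```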


In fact, cell locations and data usage tend to show a Pareto distribution. Pareto would suggest that about 80 percent of mobile data usage is generated by 20 percent of the locations. 



source: Medium 


Pareto applies to most aspects of the connectivity, data center and computing businesses. It even applies to revenue generated by mobile cell sites: half of mobile revenue is driven by traffic on about 10 percent of sites, while fully 80 percent of revenue is driven by activity on just 30 percent of cell sites. 


source: Ericsson 
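The concentration pattern described above can be reproduced qualitatively with a short simulation. The per-site figures below are synthetic, and the distribution parameter is an assumption (an alpha near 1.16 corresponds to the classic 80/20 split), not operator data:

```python
import random

# Synthetic illustration of Pareto-style concentration: draw per-site
# "revenue" from a heavy-tailed distribution, then ask what share of the
# total the busiest sites generate. Numbers are simulated, not measured.
random.seed(42)
revenue = sorted((random.paretovariate(1.16) for _ in range(10_000)), reverse=True)
total = sum(revenue)

def top_share(fraction):
    """Share of total revenue produced by the top `fraction` of sites."""
    k = int(len(revenue) * fraction)
    return sum(revenue[:k]) / total

for f in (0.10, 0.20, 0.30):
    print(f"Top {f:.0%} of sites generate {top_share(f):.0%} of revenue")
```

With a heavy-tailed draw like this, a small minority of sites reliably carries a majority of the simulated revenue, mirroring the pattern in the operator data.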


Pareto also applies to mobile operator and telco revenue, profits, accounts and cost.  

source: Telco Strategies


That is clear in the distribution of customer accounts, ranked by revenue potential.


source: B2B International


source: Ofcom 


That Pareto distribution of data usage also shows where and why millimeter wave spectrum will prove useful. The skewing of data demand in a relatively small number of dense, urban areas suggests millimeter wave’s capacity advantages will prove most valuable there, as Verizon has argued. 


source: Verizon 


Tuesday, May 10, 2022

Much Web 3.0 is Simply the Natural Evolution of Web 2.0

What do observers believe “decentralization” actually means when talking about Web 3.0? What is sought is an internet not dominated by a handful of large app providers. But many--perhaps most--features said to be characteristic of Web 3.0 arguably are not foundational changes. 


Some might say Web 1.0 was “read only,” whereas Web 2.0 was “read-write.” In that sense, Web 3.0 is “more write,” albeit with a significant chance that reach or influence might not change all that much. Think of the notion of “everyone may speak” and how that is different from “who has something to say?”


In other words, unlimited ability to post does not mean unlimited ability to “get attention.” Large platforms tend to make a difference in that regard. So the issue is whether the “creating an audience” function can be supplied in decentralized fashion or not, without the mediation of a platform. 


The corollary is whether an effective platform can exist without ownership. There is a difference between “using a mechanism” and “owning a mechanism.” Many argue Web 3.0 alters “ownership rights and mechanisms” at scale. 

source: Lizard Global 


If one analogy is content creation, then Web 3.0 promises a way for content creators to monetize in a more-direct way. But popular content never has surfaced and propagated without the use of platforms that curate content. Whether that changes because of distributed security and payment mechanisms is not so clear. 


The issue is whether decentralized curation can scale. 


That might not wind up being the case. Even when individuals have more control or ownership over their data, value might be created by platforms that allow the “most valuable” data to propagate. And that is precisely what Web 3.0 proponents seem to oppose: the creation of large intermediaries and platforms. 


As envisioned, Web 3.0 would operate more like a peer-to-peer network, with computing resources scattered widely and without gatekeepers.


To be sure, some foundational “distributed” features are seen: blockchain, cryptocurrency assets and public-key security. Some also see the ability to develop apps without much--if any--coding knowledge. 


Some applications could--or should--include banking, presumably the ability to conduct transactions more directly, without “middlemen” such as financial institutions. That would be a classic example of “disintermediation,” by definition the removal of distributors from value chains. 


source: WallStreetMojo 


Other applications do not seem intrinsically related to Web 3.0 “decentralization,” though. Use of augmented reality sometimes is said to be an attribute of Web 3.0, but that is likely to happen in any case. 


Some might argue that the use of “digital twins” is a Web 3.0 development, but others would argue that will happen anyhow, and is not intrinsically produced only by Web 3.0. 


The same might be said of artificial intelligence, cloud or edge computing and big data, all sometimes cited as examples of the Web 3.0 experience. Obviously, the countervailing notion is that those developments already are coming, without necessarily requiring a new internet architecture. 


The use of peer-to-peer transactions, which blockchain will help facilitate, seems among the few concrete examples of how Web 3.0 would operate, in terms of value exchanges. 


The point is that we cannot yet say how different any Web 3.0 might be. Experientially, the low-bandwidth, character-based internet offered a vastly different experience than the image, video and sound-based Web, able to support robust e-commerce features. 


As the Web evolves to incorporate artificial intelligence, virtual and augmented reality, it is possible, though not inevitable, that platforms could be substantially eliminated, at least for some operations, such as payments. 


Still, it seems a bigger stretch to argue that large and dominant platforms will be eliminated by distributed transactions, for example. The value of marketplaces (platforms) is precisely the richness and density of potential buyers and sellers. Whether the relatively frictionless experience provided by a large marketplace can be replicated in some decentralized way is the issue. 


Easier to predict is the growth of “trust” mechanisms that will protect buyers and sellers from fraud, as that is a primary attribute of blockchain mechanisms. 


source: Fabric Ventures 


Sunday, May 8, 2022

How Much Can Metaverse Change?

Many argue that the key difference between today’s internet and tomorrow’s metaverse is the architecture: the metaverse will be “open,” compared to today’s “closed” internet. 


Others might argue that the difference is metaverse “decentralization” compared to today’s “centralization.” 


source: a16z 


That seems a hope that will not, in the end, actually be realized as its proponents intend. To the extent that the metaverse creates property rights--and nobody denies that property rights will be created--is there historical precedent for such rights to be held in mostly-decentralized fashion?


In other words, will there not be rights holders with more volume than most others? In any mature market, do leaders not emerge? Will users and customers not gravitate to those products deemed “best?” 


If the goal is to prevent the emergence of gatekeepers, is that realistic? Are we not confusing “degree of centralization” with “degree of power” or “degree of influence” or “share of market?”


To use an analogy, it is one thing to propose creation of a “classless society.” It is quite something else to actually create it. In fact, both decentralization (people have formal rights) and centralization (some have more power and wealth than others) tend to coexist. 


Formally “open” systems can result in “unequal” outcomes. Users might have choices, but platforms can still exist. Both direct and mediated interactions can operate at the same time. 


Of course, there is another historical precedent. Revolutions happen. But classless societies do not result. One set of rulers is exchanged for another. The identities of the “ruling class” will change; but society does not become classless. 


So even if metaverses do not eliminate gatekeepers and platforms, there is a reasonable chance they will create new gatekeepers and platforms.


Friday, May 6, 2022

Why is EU Looking at App Provider Payments to Telcos?

Though the outcome remains unclear, European Commission policymakers will be looking at new regulations requiring a handful of big application providers to pay telcos for internet access investments, the stated rationale being that a few app providers represent 56 percent of capacity demand. 


At least in part, the proposed rules are intended to address lagging broadband capacity and the buildout of 5G. Why the access business appears relatively unprofitable is the issue. 


The answers include both the shift from monopoly to competition and the advent of the internet, both of which have arguably damaged telco revenue models. 


The global shift to competition as the framework for telecom services has had a dramatic effect on business models, severely challenging supplier profit margins and cutting the share of market any competent supplier can expect. 


At the same time, the internet has led to a gradual diminishing of the “applications” role and its replacement by a “dumb pipe internet access” role that offers far less opportunity for adding value and sustaining profit margins. Few--if any--popular apps now are created and owned by telcos or other competing connectivity providers. 


In other words, telcos once exclusively created and sold “voice services” and had a legal monopoly on the creation of any other services sold to customers. 


A smallish data transport business existed, and it produced high profits. But key to the business model was the sale of an application. Customers were not charged for use of the network, in a direct sense. 


Compare that to today’s model, where the revenue model is driven in precisely the opposite way: customers are charged for use of the network in a direct sense (internet access) but not for specific applications, which are supplied by third parties. 


Likewise, prices once were charged based on distance and volume: higher prices were charged for connections farther away, as well as for more minutes connected. 


These days, distance does not matter. And “volume” is less directly related to pricing. Often, virtually unlimited usage is allowed in exchange for payment of a flat fee. 


All of that contributes to the business model stress connectivity providers now experience. 


source: ETNO 


Consider the impact of competition on potential market share. In the monopoly era a telco could theoretically capture nearly 100 percent of potential demand. All that changes with lawful competition. In mobile markets, three or four contestants are common. That reduces the potential market share of even a share leader to perhaps 33 to 40 percent. 


Much the same happens in fixed markets, where facilities-based competition or mandatory wholesale is the regime. The market share held by the leader will be a fraction of what was possible in the monopoly era. 


All of that suggests that, in some markets, the number of competitors will decrease; wholesale mechanisms will become more important; or perhaps a return to monopoly in some form will even be necessary.


Directv-Dish Merger Fails

Directv’s termination of its deal to merge with EchoStar, apparently because EchoStar bondholders did not approve, means EchoStar continue...