Tuesday, December 1, 2020

The "5G Race" Storyline is Wrong, As Are So Many Others

The “5G race” story framework seems irresistible: there will be winners and losers, with the winners moving fastest to deploy the networks. The only problem with the storyline is that it likely will prove false.


Does anybody really believe being “first” with analog mobile services, or any of the past digital generations (2G, 3G, 4G) has mattered? Has it changed the economic positions of nations, or industries, beyond what we would expect for other reasons?


In other words, does early or late adoption actually matter? A fair assessment might be that it could matter for industry suppliers, in terms of market share. Some might argue Huawei gained from early supply of some 5G infrastructure, while Nokia suffered. 


On the other hand, the new emphasis on open and virtual networks opens the door for new suppliers, which might make early victories by incumbents irrelevant over the longer term, as new firms enter the supply chain. 


“Early or late” might temporarily provide advantage or disadvantage for particular mobile operators in some markets. But making sense of the advantage also must include the momentum and growth profiles of each firm before 5G. Maybe a firm gains or loses share in 5G because it already had been gaining share in 4G, for reasons unrelated to 5G deployment. 


Among the historical examples of the irrelevance of the early-late paradigm is the development of several technologies in the U.S. market, where adoption always has been “late.” That is said to be true now of U.S. 5G speeds. It is true for the moment, but ultimately the relevant gap will disappear. 


That does not mean U.S. speeds, on average, will be among the top 10 globally, for example. U.S. mobile speeds are slow, and have been relatively slow, for 4G services, compared to many other markets. The point is that it will not matter, in user experience or other expected benefits (for industry, firms, economic growth, innovation). 


But the “U.S. is behind” storyline has been used often over the last several decades. Indeed, when it comes to plain old voice service, the “U.S. is falling behind” meme never went away.


In the past, it has been argued that the United States was behind, or falling behind, in use of mobile phones, smartphones, text messaging, broadband coverage, fiber to the home, broadband speed or broadband prices.


In the case of mobile phone usage, smartphone usage, text message usage, broadband coverage or speed, as well as broadband prices, the “behind” storyline has proven incorrect, over time. 


Some even have argued the United States was falling behind in spectrum auctions. That clearly also has proven wrong. What such observations often miss is a highly dynamic environment, in which apparent U.S. lags quickly are closed.


To be sure, adoption rates have sometimes lagged those of other regions. Some storylines are repeated so often they seem true, and lagging statistics often are “true,” early on. The story that never seems to be written is that there is a pattern here: early slowness is overcome; performance metrics eventually climb; availability, price and performance gaps are closed over time.


The early storylines often are correct, as far as they go. That U.S. internet access is slow and expensive, or that internet service providers have not managed to make gigabit speeds available on a widespread basis, can be correct for a time. Those storylines rarely--if ever--hold up long term. U.S. gigabit coverage now is about 80 percent, for example. 


Other statements, such as the claim that U.S. internet access prices or mobile prices are high, are made without context, or without qualifying and adjusting for currency, local prices and incomes or other relevant factors, including the comparison methodology itself.


Both U.S. fixed network internet prices and U.S. mobile costs have dropped since 2000, for example. 


The point is that the “U.S. is behind” storyline seems irresistible. But it also ultimately is meaningless. All the relevant gaps were eventually overcome. One possible explanation is that U.S. service providers, who earn high profit margins compared to most other countries, upgrade deliberately, to maintain margins, rather than necessarily rushing to “be first.”


Consumer demand also is an issue. It can be argued that U.S. consumers wait to see value before adopting new technology, instead of rushing to buy the latest technology “just to be early adopters.”


The point is that “5G is a race” is an irresistible storyline. But it arguably will be proven false. Countries, firms and consumers will adopt 5G when it makes sense, when it offers value, or simply as a byproduct of buying some other product, such as a desired phone model.


That is, in substantial part, related to another problem journalists face, namely the “next big thing” storyline that becomes news because that is what proponents and vendors are pushing. Many journalists would probably agree that they tire of writing stories about the next big thing, or the present big thing, over and over again. It seems to be an occupational hazard.


But the point is that easy storylines are irresistible for possibly lazy journalists. To be sure, deadlines create the need for story construction tools, including the venerable “two sides” framework (he said, she said). We all use such tools. 


That is why elections are characterized as horse races, and why verification matters so much. Still, some might argue that a bit of laziness is why verification is sorely lacking, or why more original stories are not routinely created. It is not easy to do so routinely. It is harder work. But sometimes it leads to “better” storytelling.

Monday, November 30, 2020

Do We Need Global Optical Standards?

Should specific mandated optical fiber standards be imposed globally, as Wi-Fi and mobile protocols are? It’s a good question, but such optical transport standards, beyond what we already do, are less necessary than wireless or mobile protocols, for reasons directly related to the different cost drivers of mobile, untethered and fixed networks.


For 5G mobile networks, for example, core transport represents about 10 percent of total network cost. No matter what we do, we cannot leverage as much value from standards in core optical transport, compared to standards at the edge.


That largely applies even for fixed networks. Traditionally, fixed telecom and cable TV network cost has been concentrated in the access network and edge interface. Perhaps 80 percent of fixed network cost is in the access portion of the network, only 20 percent in core transport and “switching.”


source: GSMA 


The point is that optical transport in the core is a relatively small driver of network cost. Access represents much more. But how much cost reduction or scale can we drive in the optical access network? Some, but not as much as you might think.

Could cost be lower if a single global standard were used for optical access networks (even if distribution and transport were not standardized globally)? Perhaps. But not necessarily.


A single standard for optical access would overprovision in some areas. A single standard might create minimum capabilities that are "overkill" for many customers. That could raise costs enough at the margin to make some deployments untenable, thereby improving the business case for rival platforms.


The reasons have to do with the very-different end user and edge cost implications of global standards for wireless, mobile and fixed network devices. Among the best reasons for global standards are the ability to drive down costs, in the core and at the edge.


But it is the edge that really matters, as most communications cost is driven by the edge. Core network choices are affected by volume, to be sure. But core network choices are mostly transparent to edge device costs. 


No matter what choices are made about transport technology, the edge interfaces for every optical fiber network are electrical. Light is demodulated and converted to electrical format (Ethernet) at the side of a home or in the basement of a building, and then frequently from Ethernet to Wi-Fi for local premises distribution. Volume for the optical network elements is necessarily limited, compared to electrical interfaces actually used by end user devices and premises distribution devices.


It does not matter to the edge devices what the core network or even access network optical technology happens to be. The device interface is Ethernet or Wi-Fi. Those interfaces enjoy vast scale and therefore are quite affordable.


Wireless and mobile networks, by contrast, are different. It does matter what the access protocols are, as frequencies are specific and access protocols are specific at the device level.


The cost implications of a mandated global optical fiber access technology are limited to the effect on optical-to-Ethernet interfaces. 


The cost implications of mandated wireless or mobile protocols are much, much broader, affecting virtually every connected device. Roughly speaking, as much as 80 percent of end-to-end network cost is in the edge and access network, looking only at the transport networks.


Total cost, including edge devices and local area networks, is far higher, probably in the 95 percent range of total cost to support communications.
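To make that cost logic concrete, here is a minimal arithmetic sketch in Python, using the rough shares cited above (core transport on the order of five percent of total cost once edge devices and local networks are counted) and an assumed, purely illustrative 15 percent saving wherever a standard applies.

# Illustrative arithmetic only: the cost shares are the rough figures cited
# above, and the 15 percent saving is an assumed number, not a measured one.

total_cost = 100.0     # normalize end-to-end cost to 100 units
core_share = 0.05      # core transport, once edge devices and LANs are counted
access_share = 0.15    # access network
edge_share = 0.80      # edge devices and local area networks

saving_rate = 0.15     # assume a global standard cuts cost 15% wherever it applies

core_saving = total_cost * core_share * saving_rate
edge_saving = total_cost * (access_share + edge_share) * saving_rate

print(f"Saving from standardizing core transport: {core_saving:.1f} units")
print(f"Saving from standardizing access and edge: {edge_saving:.1f} units")
# Core transport saving is well under one unit; access plus edge saving is
# roughly 14 units of the 100-unit total.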

Of course, there are different types of connectivity providers. Not all firms are in the retail business. Many, especially providers focusing on enterprise-only or backbone connections between points of presence, might find that core transport is about half the total cost, while access costs--just to support the points of presence--constitute the other half.


That probably supports the overall point about where cost is generated, however. Even WAN transport providers not serving retail consumers, small businesses and organizations find that access costs are about half of total network-related cost, averaging urban, suburban and rural locations that must be reached. 

The Data Shows Why Fixed Network Cost Control is So Important

There is a good reason why connectivity providers in the retail fixed networks business are focused on cost control: it has become harder to sustain the business case. 


As this data from Ericsson shows, consumers globally have switched to mobile networks and devices for voice, messaging and internet access. The retail fixed networks business now relies on internet access, but growth is quite measured. Discrete mobile users outnumber fixed users by better than 5:1, and mobile subscriptions outnumber fixed subscriptions by about 6:1.

source: Ericsson 


Sunday, November 29, 2020

Customers Pay for the Full Costs of All Products

One important economics principle is that, ultimately, buyers pay all the costs associated with supplying a product: taxes, fees, import duties and regulatory compliance costs, in addition to the direct costs of manufacturing, marketing and fulfillment you would expect, plus enough profit to sustain the business long term.


In telecom, customers ultimately pay for universal service funds or support for high-cost networks, for example. One sees similar examples of this rule at work, all the time, in telecom and outside it.


Delivery app DoorDash, for example, has raised fees for its diners after Denver limited what it could charge restaurants, capping DoorDash revenue. The Denver City Council, presumably in an effort to help restaurants, capped the commission that delivery apps can charge restaurants at 15 percent.


So now DoorDash has added a new Denver fee of about $2 for each delivery, to compensate. As with all products, supply and demand operate. Raising the price of any desired product reduces demand. So higher prices for restaurant meals delivered to homes will likely translate into lower demand for delivered meals. 


The old saw about “no free lunch” applies: any policy--no matter how well-intentioned--that raises the price of a desired product will reduce demand for that product. 


An effort to help restaurants protect revenue by capping delivery charges leads to higher prices for delivered meals, which means fewer sales.
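A minimal sketch of that supply-and-demand arithmetic, in Python, assuming a constant price elasticity of demand; the elasticity figure and the baseline meal price are hypothetical, not data about Denver delivery orders.

# Hypothetical numbers for illustration; only the $2 fee comes from the post above.

baseline_price = 30.00   # assumed average delivered-meal total, in dollars
new_fee = 2.00           # the added per-delivery fee cited above
elasticity = -1.5        # assumed price elasticity of demand

price_change = new_fee / baseline_price        # roughly a 6.7% price increase
quantity_change = elasticity * price_change    # roughly a 10% drop in orders

print(f"Price increase: {price_change:.1%}")
print(f"Estimated change in orders: {quantity_change:.1%}")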


Saturday, November 28, 2020

Antitrust Law Itself Might Change if Regulators Move Against Platforms

In some ways, antitrust action against dominant platforms might turn not on harm to consumers, which could be difficult to prove for “free” services, but on harm to potential competitors. That has not been the emphasis in recent years, but it arguably was 50 years ago, when the Brandeisian approach, focused on market structure rather than demonstrated consumer harm, was more common.


The focus, in other words, could shift to an earlier focus on competitive entry and other forms of market structure, rather than on proving consumers have been harmed. Some skeptics might argue this is a bit like arguing “there has been no crime, but we will charge you with one, anyhow, because you are simply too successful.”


So is the issue dispersing private market power or protecting consumers? Is the problem bigness itself? Even if consumer harm is the standard, it often is difficult to prove. Nor is market share necessarily the result of deliberate efforts to constrain competitors. It is often largely the result of network effects.


So it seems as though the likely assault on dominant platforms will be based on the older market structure concerns, not so much actual consumer harm. 


The possibility of antitrust action aimed at promoting competition by restricting dominant platform scale in countries ranging from China to the European Union, United Kingdom and United States is growing. 


Efforts to increase user control of their data and complaints about censorship show that a growing wave of concern about monster platform power is not abating, though in practice it is a devilishly complicated matter.


Few would contest the market dominance of the leading platforms in search, browsers, cloud computing, operating systems or advertising.

source: Wikipedia


Amazon is the leader in e-commerce with 50 percent of all online sales going through the platform. Amazon also leads cloud computing, with nearly 32 percent market share, as well as live-streaming with Twitch owning 75.6 percent market share. 


Some argue Amazon is the market leader in the area of artificial intelligence-based personal digital assistants and smart speakers (Amazon Echo) with 69 percent market share.


Google shares an operating system duopoly with Apple and is the leader in online search, online video sharing (YouTube) and online mapping-based navigation (Google Maps). Google Home has 25 percent of the smart speaker market as well.


Apple shares a duopoly with Google in the field of mobile operating systems and arguably makes the highest profit of any smartphone manufacturer. 


Alphabet, Facebook and Amazon dominate U.S. digital advertising. In addition to social networking, Facebook also dominates the functions of online image sharing (Instagram) and online messaging (WhatsApp). 


Microsoft continues to dominate in desktop operating system market share (Microsoft Windows) and in office productivity software (Microsoft Office). Microsoft is also the second biggest company in the cloud computing industry (Microsoft Azure), after Amazon, and is also one of the biggest players in the video game industry (Xbox). 


source: MIT Management 


Still, the issue is more complicated than often appears. Market leadership by a small number of firms is common in any industry. That is the rationale behind the rule of three.  


There always is a tension between competition and investment in the capital-intensive connectivity business, for example. But even in the capital-light software and applications businesses, oligopoly seems to reign.


Still, antitrust action to break up big companies has been a staple of competition remedies for more than a century. 


Many have suggested that founding rates for innovative new companies have been depressed for a decade or more because the giants routinely buy them up. So dominant are the leading platforms that their acquisitions of promising new firms create a kill zone that discourages others from attempting to compete, as this illustration by the Financial Times shows.


source: Financial Times 


Others might note that the ecosystem for translating basic science into commercial products is not as efficient as it needs to be.  


How to promote innovation and competition at the same time is an issue more regulators and policymakers are likely to grapple with over the next few years.  


Can Edge Computing Really Improve Many-to-Many Video Conferencing?

Does edge computing improve experience for users of video conferencing apps? Specifically, can edge computing improve the quality of all-to-all conferencing sessions? 


It might be claimed that “a relevant use case for edge networks is boosting the quality of video webinars and the reliability of video conferencing calls.” 


Perhaps webinars might benefit. That is a point-to-multipoint communication, for the most part, and edge caching might plausibly improve experience by reducing some amount of latency on the downstream presenter-to-attendee links. But that is an arguable point, at least to the best of my knowledge. 


I’m much more skeptical about edge computing boosting the user experience for video conferences. To be sure, that is possibly true under certain circumstances. When all the users of a specific webinar or video conference call are within a single premises, a single local area network or a single metro area, edge computing should improve experience.


It seems unclear whether similar advantages would be possible across large nations or globally, even if local edge caching were available at every end point area, but that might be my own layman’s ignorance. 


Edge caching works best when it is possible to predict what non-real-time content might be requested, which is why edge caching works for streaming video or audio services using content delivery networks. It is quite another challenge to optimize point-to-point and all-to-all communications when the content cannot be predicted, nor the locations. 


In other words, video conferencing is a dynamic and real-time interaction that seemingly benefits little from edge caching. 


Video conferencing is a point-to-point or “many-to-many” session, unlike watching a Netflix video, which can effectively be point-to-multipoint and therefore amenable to edge caching. 


Even when one can predict the day and time of a video conference, perhaps even the universe of potential participants, one cannot “cache” the content, which is live and unpredictable. 
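A toy Python sketch of that caching intuition, with invented request patterns rather than measurements of any real service: a cache pays off when requests repeat, as they do for a popular streaming catalog, and pays off not at all when every piece of content is generated live and seen once.

# Toy illustration only: the request patterns below are invented for the example.

def hit_rate(requests):
    """Fraction of requests served from a simple 'already seen it' cache."""
    cache, hits = set(), 0
    for item in requests:
        if item in cache:
            hits += 1
        else:
            cache.add(item)
    return hits / len(requests)

# Streaming: many viewers request the same small catalog of popular titles.
streaming_requests = ["title_%d" % (i % 10) for i in range(1000)]

# Conferencing: every media frame is unique and generated live, so a cache
# never sees the same content twice.
conference_requests = ["frame_%d" % i for i in range(1000)]

print("Streaming hit rate: %.0f%%" % (100 * hit_rate(streaming_requests)))
print("Conferencing hit rate: %.0f%%" % (100 * hit_rate(conference_requests)))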


Someone with a deeper understanding of how edge caching actually can improve video conferencing sessions might know how this is possible, but I cannot actually figure out why edge caching improves live “all-to-all” conferencing sessions, with the possible exception of executing the platform software on a device or locally.


More than Half of Data Consumption Might be Advertising

It has been clear for perhaps half a decade that video and video advertising now represent a considerable amount of the data people consume on mobile or fixed internet connections, with estimates ranging from 10 percent to 50 percent of total data consumption in some studies, to perhaps 18 percent to 79 percent in others.


In 2014, as much as 38 percent of all video content viewed online consisted of video advertising, according to Statista. 



source: Statista 


Some issues and subjects do not get researched or investigated because the sunlight does not reflect too well on private interests. And that can hold true for virtually any entity--public or private, big or small--any bureau of government, churches, social organizations, political parties or candidates, companies or charities. 


To illustrate, there is very little research on the amount of total data consumption on smartphones or other devices that consists entirely of advertising. The reasons are not difficult to fathom: it arguably reflects poorly on the web experience. 


After all, rare is the citizen or consumer who professes to enjoy advertising exposure. People tolerate it because they receive benefits (lower cost or free content, generally). But there also are costs when video advertising data represents a large percentage of data consumption, as most consumers pay for data consumption, and generally in some way related to total consumption. 


That is not to denigrate the value of advertising in supporting user access to valuable applications, services and content. Advertising support always has been an important revenue model supporting content delivery, for example.


Still, it is hard to find data on what percentage of any customer’s total data consumption consists of advertising. But the few studies you might be able to find suggest that more than half of data consumption related to viewing of news sites consists of advertising. In one test, 55 percent of total data consumption was advertising, and much of that was driven by use of video. 


That is especially the case now that consumers watch so much video (video represents as much as 80 percent of total mobile data consumption). 


source: Cisco 


Also, all that video consumption drives online and mobile video advertising volume. Up to 90 percent of advertisers use video for their advertising, some estimate. The average person now is estimated to encounter between 6,000 and 10,000 ads every single day, and a huge percentage of those ads use full-motion video.


All that explains the growth of ad blocking apps since about 2010, efforts by ad-supported apps to disable or prevent ad blocking, and the rise of web browsers with control over ad insertion.


source: econsultancy 


An analysis of the 200 most popular news sites (as ranked by Alexa) in 2015 showed that Mozilla Firefox Tracking Protection led to a 39 percent reduction in data usage and a 44 percent median reduction in page load time, according to a study sponsored by Mozilla.


The New York Times once found that ad blockers reduced data consumption and sped up load time by more than half on 50 news sites, including their own. 


Journalists concluded that "visiting the home page of Boston.com (the site with the most ad data in the study) every day for a month would cost the equivalent of about $9.50 in data usage just for the ads."


source: Oberlo 
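As a back-of-the-envelope check on figures like that, here is a minimal Python sketch; the page size, daily page views and per-gigabyte data price are assumptions chosen for illustration, with only the roughly 55 percent ad share taken from the studies cited above.

# Back-of-the-envelope arithmetic; every input except the ad share is assumed.

page_size_mb = 4.0     # assumed size of one ad-heavy news page, in megabytes
ad_share = 0.55        # share of page data that is advertising, per the study above
visits_per_day = 10    # assumed page views per day
price_per_gb = 10.00   # assumed mobile data price, in dollars per gigabyte

ad_mb_per_month = page_size_mb * ad_share * visits_per_day * 30
ad_cost_per_month = (ad_mb_per_month / 1000) * price_per_gb

print(f"Ad data per month: {ad_mb_per_month:.0f} MB")
print(f"Cost of that ad data: ${ad_cost_per_month:.2f} per month")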


But the volume of data consumption does affect connectivity provider business models in direct fashion, as it requires the supply of ever-greater capacity, mostly at the same rates historically charged--or lower. At the same time, the revenue benefit of advertising--beyond the value users and consumers get from ad-supported content--shifts almost entirely to application providers.


But you will not find much research on that issue. It simply does not benefit many powerful interests in the content business, including connectivity providers who also own key content assets and ad-driven revenue models.


Directv-Dish Merger Fails

Directv’s termination of its deal to merge with EchoStar, apparently because EchoStar bondholders did not approve, means EchoStar continue...