Friday, December 9, 2022

No 5G New Revenue Source Matches Fixed Wireless

It is virtually impossible to argue with the proposition that the first new revenue stream of any significance enabled by 5G is fixed wireless internet access. Revenue from fixed wireless far exceeds revenues from other potential new revenue sources. 


Revenues from 5G fixed wireless, in the near term, will dwarf internet of things, private networks, network slicing and edge computing. 5G fixed wireless might, in some markets, represent as much as eight percent of home broadband revenues.


None of the other potential revenue sources--network slicing, edge computing, internet of things, private networks--is likely to hit as much as one percent of total service provider or mobile service provider revenues in the near term. 


As important as edge computing might be as a revenue growth driver for mobile operators, its revenue contributions might be relatively slight for some time. The same might be said of the revenue contributions made by internet of things services.


In 2024, it is conceivable that IoT connectivity revenues for mobile operators globally could be in the low millions to tens of millions of dollars, according to Machina Research. Millions, not billions.


In 2026 the global multi-access edge computing market might generate $1.72 billion. Even if one assumes all that revenue is connectivity revenue booked by mobile operators, it still is a far smaller new revenue stream than fixed wireless represents. 


If the home broadband market generates $134 billion in service provider revenue in 2026, then 5G fixed wireless would represent perhaps eight percent of home broadband revenue.


Do you believe U.S. mobile operators will make $14 billion to $24 billion in revenues from edge computing, IoT or private networks?


Nor might private networks or edge computing revenues be especially important as components of total revenue. It is almost certain that global service provider revenues from multi-access edge computing, for example, will be in the single-digit billions of dollars over the next few years.


Though a growing number of home broadband subscribers should have access to gigabit-speed fixed wireless service eventually, present coverage is minimal.


T-Mobile and Verizon are expected to have 11 million to 13 million total fixed wireless customers by the end of 2025. If total U.S. internet accounts are somewhere on the order of 111 million accounts, and if small business users account for 11 million of those accounts, then home users might amount to about 100 million accounts.

source: Ooma, Independence Research 


If Verizon and T-Mobile hit those targets, their share of the home broadband market--counting only fixed wireless accounts--would be about 10 percent. Most of those accounts will be gained “outside of region” for Verizon, which is significant because Verizon’s fixed network only reaches about 20 percent of U.S. households. Fixed wireless allows Verizon to grow its account base among the 80 percent of U.S. home locations that cannot buy Verizon fixed network service.


For T-Mobile, which in the past has had zero percent market share in home broadband, all of the growth is incremental new revenue. If those accounts add $600 per year in added revenue, then the roughly 11 million fixed wireless accounts--about 11 percent of the home broadband market--represent perhaps $6.6 billion in new revenue for the two firms.
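The arithmetic in this section can be summarized in a few lines. All of the inputs are the estimates cited above, not reported results:

```python
# Fixed wireless share and revenue math, using this post's estimates.

total_us_internet_accounts = 111_000_000  # rough total U.S. internet accounts
small_business_accounts = 11_000_000      # assumed small business portion
home_accounts = total_us_internet_accounts - small_business_accounts  # ~100 million

fwa_accounts = 11_000_000   # low end of the 11M-13M forecast for T-Mobile plus Verizon
revenue_per_account = 600   # assumed added revenue per account, per year, in dollars

fwa_share = fwa_accounts / home_accounts           # about 11 percent
new_revenue = fwa_accounts * revenue_per_account   # about $6.6 billion

print(f"share: {fwa_share:.0%}, new revenue: ${new_revenue / 1e9:.1f} billion")
```

At the high end of the forecast (13 million accounts), the same math yields about $7.8 billion.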


That is a big deal, considering how hard it is for either firm to create a brand-new line of business that generates at least $1 billion in new revenue. 


The other apparent takeaway is the size of the market segment that cares more about price than performance. 


Segments exist in the home broadband business, as they do in many parts of the digital infrastructure and digital services businesses. In other words, even if some customers want faster speeds at the higher end of commercial availability, up to 20 percent of the market cares more about affordable service that is “good enough.”


The center of gravity of demand for 5G fixed wireless is households in the U.S. market that will not buy speeds above 300 Mbps, or pay much more than $50 a month, at least in the early going. T-Mobile targets speeds up to 200 Mbps.


During the third quarter, about 22 percent of U.S. customers bought service at speeds of 200 Mbps or below. In other words, perhaps a fifth of the home broadband market is willing to buy service at speeds supported by fixed wireless. 


source: Openvault  


The other takeaway is that home broadband net account additions over the past year have disproportionately come on the fixed wireless platform, representing about 78 percent of all net new accounts. 

source: T-Mobile 


To hang on to those accounts, Verizon and T-Mobile will have to scale speeds upwards, as the whole market moves to gigabit and multi-gigabit speeds. Some percentage of those upgraded accounts could come on some fiber-to-home platform. Even out of region, both firms could strike deals for use of wholesale assets. 


But the bigger part of the retention battle is going to center on ways of increasing fixed wireless speeds to keep pace with the ever-faster “average” speed purchased in the home broadband market. 


And that almost certainly means using millimeter wave spectrum to a greater degree. In the end, even using small cell architectures, there is only so much capacity one can wring out of low-band or mid-band spectrum. 


As a practical matter, almost all the future bandwidth to support mobility services or fixed wireless is to be found in the millimeter wave regions. 


Thursday, December 8, 2022

In the WAN, the Next Generation Network Can Take Quite a While to Make Sense

As 400 Gbps wide area network systems now are touted, one thinks back about 20 years to another time when service providers were weighing different solutions for their WANs. 


Back then, when the WAN standard for long-haul optical transmission was 2.5 Gbps, WAN operators were pondering the cost and value of upgrading to either 10 Gbps or 40 Gbps.


As I recall, back around 2000 the cost of upgrading to 10 Gbps was about 2.5 times the cost of greenfield 2.5 Gbps networks.


Again, as I recall, 40 Gbps networks cost more than four times the cost of 10 Gbps. That made the decisions in favor of 10 Gbps--especially given the amount of dark fiber availability--logical. 


Assuming the same pattern holds, WAN operators needing to add more capacity will opt for 100 Gbps networks once the price premium over 40 Gbps is about 2.5 times. 
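That decision rule can be framed as cost per unit of capacity: an upgrade is attractive when its capacity multiple exceeds its cost premium. A minimal sketch using the ratios recalled above; the 4.5X premium for 40 Gbps is an assumed illustrative value, since the recollection is only "more than four times":

```python
# Relative cost per Gbps for each generation. The baseline cost of 1.0
# is arbitrary; only the ratios matter.

def cost_per_gbps(capacity_gbps: float, relative_cost: float) -> float:
    return relative_cost / capacity_gbps

base = cost_per_gbps(2.5, 1.0)           # 2.5 Gbps greenfield baseline -> 0.40
ten_g = cost_per_gbps(10, 2.5)           # 10 Gbps at ~2.5x the cost -> 0.25
forty_g = cost_per_gbps(40, 2.5 * 4.5)   # 40 Gbps at an assumed 4.5x the 10 Gbps cost -> 0.28

# 10 Gbps lowers the cost per Gbps versus 2.5 Gbps; 40 Gbps at those
# premiums does not improve on 10 Gbps, which is why 10 Gbps won.
```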


The issue is how long it will take for the economics of 400 Gbps to reach levels where it makes more sense, for most service providers, than a 100 Gbps upgrade.


As always, the economics are easier for metro networks and within data centers. But long haul upgrade economics are more difficult. 


As always, “how long can we afford to wait?” affects the decision. Recalling the huge amount of dark fiber put into place around the turn of the century, the other issue is whether it makes more sense to continue using 10 Gbps on multiple fibers rather than upgrading either to 40 Gbps or 100 Gbps in the WAN. 


Unlike baseband standards, which tend to increase by an order of magnitude each major generation, optical transport systems often do not, in part because of technical issues for optical waveguides, such as chromatic dispersion, and cost issues such as port density.


That was an issue 20 years ago when network operators were looking to upgrade capacity to 10 Gbps or further to 40 Gbps, for example. Typically, backwards compatibility tends to be a bigger issue for long-haul and access network operators than for operators of data centers. 


And a jump from 1 Gbps to 10 Gbps was easier to finesse than a leap to 40 Gbps. The consideration often involves an upgrade over existing optical cabling networks that disturbs existing operations the least possible amount. 


Often, an order of magnitude leap in bandwidth requires quite a lot of network element replacement, and therefore higher cost. Local networks in the past used multimode fiber, while long-haul networks use single mode fiber. 


That has cost implications. On multimode networks, for example, the cost of a four-by-10 Gbps solution is roughly four times that of a 10 Gbps solution. On a multimode, short-range network, a 10X solution (a 10 Gbps upgrade to 100 Gbps) costs about 10 times as much.


On a single mode, long haul network, the cost of upgrading from 10 Gbps to 40 Gbps is 4X. But the cost of upgrading to 100 Gbps is far more than 20X. 

source: Cisco


The reason, at a high level, is that a 4X upgrade generally multiplexes the older existing standard, using more fibers in a cable. A multimode fiber network upgrade might involve only a switch of line cards.


On such local networks, where a 10-Gbps link uses two fibers, transmission on a 40-Gbps multimode fiber network uses as many as 12 fibers in a cable, for example. An upgrade from 10 Gbps to 100 Gbps means upgrading from a two-fiber cable to a 24-fiber cable.
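A quick sketch using the fiber counts and cost multiples above shows why multimode upgrades deliver little economy of scale: cost rises roughly in step with capacity. The numbers are the rough illustrations from this post, not vendor pricing:

```python
# speed in Gbps -> (fibers per cable, cost relative to a 10 Gbps link)
multimode_links = {
    10: (2, 1),
    40: (12, 4),
    100: (24, 10),
}

for speed, (fibers, rel_cost) in multimode_links.items():
    # cost per Gbps stays flat at about 0.1, so a 10X speed jump costs ~10X
    print(f"{speed} Gbps: {fibers} fibers, cost per Gbps = {rel_cost / speed:.2f}")
```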


A 10X upgrade tends to be less of an issue for local users such as data centers or even metro networking suppliers but a much-greater issue for connectivity providers with comparatively greater sunk investments in optical cabling infrastructure. 


Cost issues always matter, which is why, at transition points, service providers often are asked to choose between one option generally available now and a higher-performance option expected in a few years. 


At least, that is what I seem to recall from past evaluations made by WAN operators about core network upgrades. 


Wednesday, December 7, 2022

Is Reliance on Public Cloud Dangerous for Telcos?

How big a threat are hyperscale cloud computing suppliers to the connectivity service provider customers they serve? Not much, if at all, some argue. Others think the danger is significant. 


"Let's be honest, they could run the whole network for probably half the cost," said John Giere, Optiva CEO. Giere seems to be referring only to the cost of server resources, though. "Sixty-five percent of servers are bought by five companies in the world."


The implication is that the sheer cost of compute infrastructure could be much lower than any single connectivity provider could obtain. 


As always, the analysis of total cost is much more complicated than the cost of acquiring server capacity to run software. There are personnel costs, software costs, maintenance, power and redundancy costs, for example. 


And the complex analysis arguably turns on value. Would you prefer a 50 percent return on investment on an investment that improves 5 percent of your cost structure, or a 25 percent return on investment on an investment that improves 20 percent of your cost structure? 
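The comparison is easy to make concrete. Treating "impact" as the return rate multiplied by the share of the cost structure affected (a simplification), the lower-ROI project actually moves total costs more:

```python
roi_a, cost_share_a = 0.50, 0.05  # 50% return on 5% of the cost structure
roi_b, cost_share_b = 0.25, 0.20  # 25% return on 20% of the cost structure

impact_a = roi_a * cost_share_a   # 2.5% of total costs
impact_b = roi_b * cost_share_b   # 5.0% of total costs

print(f"A improves costs by {impact_a:.1%}, B by {impact_b:.1%}")
```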


In other words, connectivity providers have to understand where their costs are created. 


Consider thinking about 5G-based edge computing networks. Some might see costs centered in the core network while others believe it is at the edge. As networks become more virtualized and disaggregated, those perceptions might prove equally correct or incorrect, no matter which view presently holds. 


Perhaps the biggest difference right now is thinking about return on investment in core networks, as, by definition, much network investment for edge computing must happen “at the edge.” 


source: IBM Institute for Business Value 


At some volume level, the economics of using public cloud services on an outsourced basis rather than running owned compute infrastructure might even reverse. For enterprises, at some level of volume, it virtually always makes sense to buy and own rather than lease and pay for services. 


But there are other considerations. Some might say the task of managing the network is not where the value lies. Rather, it is in the ability to tap the latest and most-advanced compute capabilities to build new services at the customer-facing level. 


If you think about the problem as a matter of operations support systems--not strictly “compute resources”--you get a glimpse of the difference between a platform to “run the network” and a platform to create new services.


Simply put, all operations support and compute platforms exist to support business outcomes, but those outcomes also hinge on go-to-market skill.


Simply, one cannot create a customer-facing service unless the underlying network can support it. But creating a new customer-facing service relies on much more than the compute or connectivity platforms. 


Much of the effort and skill centers on domain knowledge, code-writing capabilities, marketing skill and an internal organization that does not slow down or block such development. Add capital resources, depth of ecosystem partners, a lack of distracting other issues and organizational agility to that list of requirements. 


In other words, using a hyperscale partner as the compute platform is less about the cost of managing the core network and more about leveraging a platform for creating new services that require the use of the network, at the “my platform can support that” level.


Such capabilities arguably apply most to services enterprise customers value, deploy and buy. That applies to capabilities enterprises themselves require for internal consumption as well as the products they create and sell to consumers. 


Perhaps the biggest connectivity provider fear is losing control of key capabilities in the network technology or customer-facing markets. That is not an insignificant risk. 


Still, in an increasingly open, disaggregated, layered ecosystem of value creation, revenue models and functions, it is hard to argue against the proposition that the hyperscalers will always be able to innovate at the platform support level faster than connectivity providers can.


Telcos used to build their own consumer devices; their own switches (special purpose computers), their own operating systems, billing systems and so forth. They rarely do so anymore. 


As the architecture of computing changes, connectivity providers simply are changing with those evolutions.


Monday, December 5, 2022

"Blaming the Victim" When Surveys Don't Work as Expected

Some might call the effort people put into their survey responses “satisficing.” As applied to survey response data, the term means some people are not thinking too much about the actual responses they are giving to the poll questions. That might be akin to "blaming the victim" of a crime for the crime's commission.


Some of us might argue the term "satisficing" is quite misapplied. To the extent "satisficing" can be said to apply, most of it already has been applied in the design of the polls or surveys.


To be sure, the definition of “satisfice” is to “pursue the minimum satisfactory condition or outcome.”


As used to describe survey respondent behavior, it connotes “choosing an alternative which is not the optimal solution but is a solution which is good enough.” 


source: FourweekMBA 


But that is precisely what multiple choice survey instruments require. As often stated, respondents are told to “pick the answer that most represents your views.” As most of us can attest, oftentimes none of the available options actually represents our “true” opinions. No matter. 


In his search for an understanding of decision making, Simon challenged the notion that human thinking actually could encompass all possible solutions. The whole point is that humans cannot do so, so a “good enough” solution always is chosen.


In a multiple-choice survey instrument, the designers already have eliminated all but a set of choices. Respondents do not have to choose the “best possible” response, only that response presented to them, which is a handful of choices. 


The “satisficing” already has occurred, but it does not represent respondent behavior: it represents all the simplifying decisions made by the designers of the survey instrument.


One must indicate which answer “best” fits one’s views. The term “satisficing” was coined by Professor Herbert Simon in his 1947 book Administrative Behavior.


His argument was that humans cannot be fully rational when making decisions. So-called rational choice theory, which asserts that this is how decisions are made, is unrealistic, Simon argued.


Instead, what humans actually use is a process he called bounded rationality. What humans actually do, since they have limited data, limited time and limited capabilities, is seek a workable solution to a problem, not technically the “best possible” solution. 


The concept is that humans do not have unlimited time, resources or capability to rationally consider all possible solutions to any problem, and then choose the optimal solution. Given all the constraints, they search for a limited number of solutions that will work, that are “good enough” and proceed. 


As applied to survey design or survey response, bounded rationality--known as “satisficing”--already has been employed. Survey designers already have chosen a very-finite set of “solutions” or “answers” to problems, issues, attitudes or choices that might possibly be made in real life. 


Perhaps the real answer--from any respondent--is that they would choose “none of the above,” all of the time, for reasons they have no way to communicate to the survey design team. 


Perhaps it is understandable that survey instrument designers fault their respondents for providing “bad data.” Some of us would submit that is not the problem. The problem is the faulty architecture of thinking about the issues for which answers are sought; the explicit choices offered to respondents; the forcing of responses into a predetermined framework; using language not nuanced enough to capture actual choices, beliefs, preferences or possible actions. 


If the data does not fit one’s assumptions or existing beliefs, whose fault is that?


Sunday, December 4, 2022

Social Media Free Speech has no Legal Standing, Unfortunately

“Fixing” the issue of censorship on any social media will be quite difficult. 


The U.S. First Amendment to the Constitution only bars action by government entities. One might actually go further and note that the First Amendment only bars Congress from making laws that infringe on free speech.


The protections actually were not meant to constrict what private entities might publish or say. Nor, for that reason, is there a clear legal framework relating to private entities and First Amendment protections. Basically, they do not exist.


source: Teach Privacy 


All of which is a problem for those who believe freedom of speech should be enforced or enabled for social media platforms. 


That is part of a growing concern in some quarters about how freedom of expression is protected not from government action but by the actions of platforms. 


If we assume that the purpose of the First Amendment is to protect freedom of expression in a democratic society, then new media formats and new platforms can raise new issues. And, as is common, the matter is complicated, especially because the First Amendment protection of free speech rights only prohibits the government from infringing. 


Private entities may “infringe or restrict speech” all they like. And ordinary citizens--as opposed to media firms, for example--actually have circumscribed “rights of free speech.” You may not exercise that right anywhere, anytime, for example. You do not have the right to dictate what any media entity chooses to publish or restrict. You have no right to exist on a social platform. 


You certainly have no right “to be heard” on media. When First Amendment protections have been upheld by courts, they have virtually always upheld the rights of media to act as speakers without government censorship. 


There is no similar history of rulings supporting private actor speech or censorship, as the rights belong to the “speaker” who owns the asset. 


The First Amendment has generally been interpreted to protect the rights of “speakers.” But the owners of new platforms (social media, in particular) say their users are the “speakers,” not the platforms, when arguing for protection against claims of defamation, for example.


Even if jurists wished to extend some First Amendment protections beyond “government” entities, legal concepts would have to come to a decision on who the “speaker” is, to protect the speaker’s rights.


The problem is that precedent favors the view that the platform is protected from government censorship, but not the individual users of any platform from private censorship. It might seem arcane, but “defining who the speaker is” underpins the freedom from government censorship. 


But there is no established right of users of social media to be free from the platform’s censorship. Government may not infringe. No such limitation exists for private entities of any sort. Citizens have the right to create their own media and “speak” that way. They have no comparable standing as users of any private entity’s platform. 


In other words, are the users of a platform the speakers, the platform itself, or some combination? Worse, is it the speakers or the audience whose “free thought” rights are to be respected?


Traditionally, citizens are to be protected from government restriction of free speech. 


But the places where “speech” occurs also matter. Public forums--such as public parks and sidewalks--have always been viewed as places where citizens have the right of free speech. 


Nonpublic forums are places where the right of free speech can be limited. Examples are airport terminals, a public school’s internal mail system or polling places. 


In between are limited public forums, where similar restrictions on speech are lawful, especially when applied to classes of speakers. However, the government is still prohibited from engaging in viewpoint discrimination, assuming the class is allowed. 


The government may, for example, limit access to public school meeting rooms to school-related activities. The government may not, however, exclude speakers from a religious group simply because they intend to express religious views, so long as they are in a permitted class of users. 


Those protections have been limited to state action: it is government entities (local, state or federal) that are enjoined from infringing the right of free speech. Protections have not been deemed applicable to private entities.


There has, in other words, generally been no First Amendment right of free speech enforceable against private firms or persons, with some exceptions.


What cannot be said? What ideas cannot be thought? What implications may not be drawn? What does intolerance look like, in the context of thinking and ideas?


It is not easy to explain how freedom of thought and content moderation are to be harmonized. What is the difference between “community standards” moderation and outright banning of thoughts and ideas?


It is not easy to understand how “ideas” are different from “actions.” 


Nor is it easy to explain where “free speech” rights exist and by whom those rights may be exercised, as simple as the notion of freedom of thought, speech and political views might seem. 


As U.S. Supreme Court Justice Oliver Wendell Holmes famously noted,  "if there is any principle of the Constitution that more imperatively calls for attachment than any other, it is the principle of free thought: not free thought for those who agree with us but freedom for the thought that we hate." 


The problem is that the freedom of free thought and speech does not include the right to be heard on a social media platform. How that can be fixed necessarily includes defining the free speech rights of entities and users of entities, even when those rights clash. 

Why Low Market Share Can Doom an Access Provider or Data Center

There is a reason access providers with double the share of their closest competitor lead their markets. 


Profit margin almost always is related to market share or installed base, at least in part because scale advantages can be obtained. Most of us would intuitively suspect that higher share would be correlated with higher profits. 


That is true in the connectivity and data center markets as well. 

source: Harvard Business Review 


But researchers also argue that market share leads to market power that also makes leaders less susceptible to price predation from competitors. There also is an argument that the firms with largest shares also outperform because they have better management talent. PIMS researchers might argue that better management leads to outperformance. Others might argue the outperformance attracts better managers, or at least those perceived to be “better.”


Without a doubt, firms with larger market shares are able to vertically integrate to a greater degree. Apple, Google, Meta and AWS can create their own chipsets, build their own servers, run their own logistics networks. 

source: Slideserve 


The largest firms also have bargaining power over their suppliers. They also may be able to be more efficient with marketing processes and spending. Firms with large share can use mass media more effectively than firms with small share.


Firms with larger share can afford to build specialized sales forces for particular product lines or customers, where smaller firms are less able to do so. Firms with larger share also arguably benefit from brand awareness and preferences that lessen the need to advertise or market as heavily as lesser-known and smaller brands with less share. 


Firms with higher share arguably also are able to develop products with multiple positionings in the market, including premium products with higher sales prices and profit margins. 


source: Contextnet 


That noted, the association between higher share and higher profit is stronger in industries selling products purchased infrequently. The relationship between market share and profit is less strong for firms and industries selling frequently-purchased, lower-value, lower-priced products, where buying alternate brands poses little risk.


The relationships tend to hold in markets where firms are spending to gain share, where they are mostly focused on keeping share, or where they are harvesting products late in their product life cycles.

source: Harvard Business Review 


The adage that “nobody gets fired for buying IBM” or Cisco or any other “safe” product in any industry is an example of that phenomenon for high-value, expensive and more mission-critical products.


For grocery shoppers, house brands provide an example of what probably drives the lower relationship between share and profit for regularly-purchased items. Many such products are actually or nearly commodities where brand value helps, but does not necessarily ensure high profit margins. 


On the other hand, in industries with few buyers--such as national defense products--profit margin can be more compressed than in industries with highly-fragmented buyer bases. 


Studies such as the Profit Impact of Market Strategies (PIMS) have been looking at this for many decades. PIMS is a comprehensive, long-term study of the performance of strategic business units in thousands of companies in all major industries.


The PIMS project began at General Electric in the mid-1960s. It was continued at Harvard University in the early 1970s, then was taken over by the Strategic Planning Institute (SPI) in 1975. 


Over time, markets tend to consolidate, and they tend to consolidate because market share is related fairly directly to profitability. 


One rule of thumb some of us use is that the profits earned by a contestant with 40-percent market share are at least double those of a provider with 20-percent share.


And profits earned by a contestant with 20-percent share are at least double the profits of a contestant with 10-percent market share.


This chart shows that for connectivity service providers, market share and profit margin are related. Ignoring market entry issues, the firms with higher share have higher profit margin. Firms with the lowest share have the lowest margins. 

source: Techeconomy  


In facilities-based access markets, there is a reason a rule of thumb is that a contestant must achieve market share of no less than 20 percent to survive. Access is a capital-intensive business with high break-even requirements. 


At 20 percent share, a network is earning revenue from only one in five locations passed. Other competitors are getting the rest. At 40 percent share, a supplier has paying customers at four out of 10 locations passed by the network. 


That allows the high fixed costs to be borne by a vastly-larger number of customers. That, in turn, means significantly lower infrastructure cost per customer.
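The arithmetic behind that rule of thumb is simple: network cost is largely fixed per location passed, so cost per paying customer is inversely proportional to take rate. The $1,000 per passing below is a hypothetical round number; only the ratios matter:

```python
cost_per_passing = 1_000  # assumed fixed network cost per location passed, in dollars

for share in (0.10, 0.20, 0.40):
    # doubling take rate halves the network cost carried by each customer
    cost_per_customer = cost_per_passing / share
    print(f"{share:.0%} take rate -> ${cost_per_customer:,.0f} of network cost per customer")
```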


Directv-Dish Merger Fails

Directv’s termination of its deal to merge with EchoStar, apparently because EchoStar bondholders did not approve, means EchoStar continue...