Sunday, October 23, 2022

Verizon Home Broadband Share "In Region" Has Not Moved Much

If Verizon now has seven million fiber access accounts, what does that imply about household penetration rates? It is not so easy to say. For starters, Fios serves small businesses and larger business accounts, not just homes. 


Verizon homes passed might number 18.6 million to 20 million. We must estimate, as Verizon never seems to publish a “homes passed” figure. At seven million accounts, Fios would represent 35 percent to 38 percent of homes passed. That seems in line with Verizon’s past reporting, but it is not clear whether business accounts are included in those figures. 
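The arithmetic behind those percentages is straightforward. A quick sketch using the post's estimates (seven million accounts against an assumed 18.6 million to 20 million homes passed):

```python
# Fios penetration as a share of estimated homes passed.
# Both inputs are estimates from the post; Verizon does not publish them.
accounts = 7_000_000
homes_passed_low = 18_600_000
homes_passed_high = 20_000_000

penetration_high = accounts / homes_passed_low   # smaller base -> higher rate
penetration_low = accounts / homes_passed_high

print(f"Penetration: {penetration_low:.0%} to {penetration_high:.0%}")
```

The range moves with whichever homes-passed estimate one trusts, which is why the post can only bracket the figure at 35 percent to 38 percent.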


My guess is that business revenue--and therefore the accounts and lines--is reported elsewhere. Given how long Fios has been available, that penetration rate testifies to the amount of competition in the home broadband market, where cable operators, by and large, hold 60 percent or more of the installed base, and may have claimed a higher share of net new accounts added, at least for most of the past two decades. 


Executives at AT&T, another incumbent, have speculated that they might ultimately get about half the installed base of accounts. 


Long term, MoffettNathanson sees cable having a 50 percent broadband market share in markets in which they compete with fiber-to-home facilities. That implies a 20-point swing in the installed base: telcos gain 10 points while cable operators lose 10 points of share. 


Not all observers agree with that analysis. S&P Global Market Intelligence, for example, does not expect stepped-up telco FTTH investment to change share statistics very much, in the near term. 


But S&P Global Market Intelligence does believe new competition from mobility suppliers using fixed wireless (T-Mobile, for example) will gain about six percent share of the U.S. residential broadband market with about 7.19 million subscribers. 


It is not yet clear how much of that share gain will be claimed by upstarts in the home broadband market such as T-Mobile, and how much will be gotten by fixed wireless operations conducted by incumbents such as Verizon. 


S&P Global Market Intelligence also estimates there will be about 1.52 million satellite customers by the end of 2021, accounting for just one percent of the installed base of home broadband accounts. 


Many observers expect telcos and independent providers to gain share. The only issues are how much, and how long it takes. Historically, most telcos have found their installed base share tops out at about 40 percent of homes passed.


Are Meetings and Messages Really "Work" or "Outcomes?"

There is some evidence that workers and their managers do not agree on the impact of remote work on productivity. A Microsoft survey of 20,000 people in 11 countries found 87 percent of employees reporting they are more productive remotely or with a mix of in-office and remote work. 


But 85 percent of leaders say hybrid work makes it difficult to determine if their workers are being productive.


We should be clear that nobody has yet developed an accepted and trustworthy way of measuring knowledge worker or office worker productivity. 


So the debate about the productivity of remote work will likely never be fully settled, in large part because it is so difficult, perhaps impossible, to measure knowledge worker or office worker productivity. 


Whether knowledge worker productivity is up, down or flat is almost impossible to say, despite claims one way or the other. Much of the debate rests on subjective reports by remote workers themselves, not “more-objective” measures, assuming one could devise such measures. 


Pearson's Law and the Hawthorne Effect illustrate the concept that people who know they are being measured will perform better than they would if they were not being measured. 


Pearson's Law states that “when performance is measured, performance improves. When performance is measured and reported back, the rate of improvement accelerates.” In other words, productivity metrics improve when people know they are being measured, and even more when people know the results are reported to managers. 


Performance feedback is similar to the Hawthorne Effect. Increased attention from experimenters tends to boost performance. In the short term, that could lead to an improvement in productivity.


In other words, “what you measure will improve,” at least in the short term. It is impossible to know whether productivity--assuming you can measure it--actually will remain better over time, once the near term tests subside. 


So we are only dealing with opinions: whether those of workers or their managers. People might think they are productive, when they are not. Managers might believe some set of tools allows them to measure such productivity, when they actually cannot. 


Let us be truthful: people like working remotely for all sorts of reasons that have nothing to do with “productivity” in a direct sense. Managers distrust such work modes in part because they lose what they believe to be impressionistic measures of input. 


But input does not matter. Output matters. And “output” is difficult to measure in any knowledge or office work, in ways that correlate well with organizational outputs. 


And by some quantitative measures, remote work might be reducing productivity, judging by time spent on communication, whether by messaging or in meetings. Some people act as though sending messages constitutes work, when it might be, at best, a way to coordinate work. 


Some seem to believe that meetings are work, when most meetings are efforts to coordinate work. “Work” happens outside chats, messages or meetings. So if time in meetings or time spent communicating increases, that increases the “time spent” denominator against which output numerators are measured. 


In other words, if remote work creates a need for vastly more communication about work, then the effort required to produce outcomes almost certainly increases relative to output. Hence, lower productivity.


“Since February 2020, the average Teams user saw a 252 percent increase in their weekly meeting time and the number of weekly meetings has increased 153 percent,” says Microsoft.


The average Teams user sent 32 percent more chats each week in February 2022 compared to March 2020 and that figure continues to climb. Workday span for the average Teams user has increased more than 13 percent (46 minutes) since March 2020, and after-hours and weekend work has grown even more quickly, at 28 percent and 14 percent, respectively.

source: Microsoft 


It might be only one indicator, but vastly increased time spent on coordination might be a sign that remote work is not as “productive” as remote workers believe, unless the amount of work time increases to compensate. And even then, increasing the denominator relative to the numerator always leads to a smaller result: arguably lower productivity.
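The denominator effect is easy to illustrate. A hypothetical sketch with made-up output figures: output is held constant while coordination time grows, roughly in line with the Teams meeting-time increases Microsoft reports, and measured productivity (output per hour) falls:

```python
# Hypothetical productivity ratio: output per hour worked.
# All figures are illustrative assumptions, not measured data.
output_units = 100                     # output, held constant
core_work_hours = 30                   # weekly hours actually producing output
coordination_hours_before = 10         # weekly meeting/chat hours, pre-2020
coordination_hours_after = 10 * 2.5    # ~150% more meeting time

productivity_before = output_units / (core_work_hours + coordination_hours_before)
productivity_after = output_units / (core_work_hours + coordination_hours_after)

print(f"before: {productivity_before:.2f} units/hour")
print(f"after:  {productivity_after:.2f} units/hour")
```

Whatever the actual numbers, the point is structural: if coordination hours rise and output does not, the ratio can only fall.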


Friday, October 21, 2022

South Korea Ponders More Fees on Hyperscale App Providers

Though South Korean internet service providers already charge fees to a few hyperscale app providers, the South Korean government is considering new and heavier fees that would make a few hyperscale app providers pay for access to South Korean internet access networks, Reuters reports. 


EU regulators and U.S. regulatory officials also are looking at levying such new taxes. Ignore for the moment the obvious winners and potential losers if such policies are adopted. Ignore the industrial policy implications. Ignore the changes to interconnection policies and practices that might also occur. Ignore the complete overturning of network neutrality rules and principles. 


Consider only the issue of who should pay for universal service. Traditionally, customers have paid such fees. The new potential thinking on who should pay for the construction of internet access networks also changes how we think about “who” should pay for universal service. 


Ignore for the moment the wisdom of shifting support burdens from customers of a service to others. Also ignore the potential implications for content freedom or higher potential costs for users of content services. 


For the first time, both European Union and U.S. regulatory officials are considering whether  universal service should be supported by a combination of user and customer fees. The charges would be indirect, rather than direct, in several ways. 


In the past, fees to support access networks in high-cost areas were always based on profits from customers. To be sure, high profits from business services and international long distance voice calls have been the support mechanism. In more recent days, as revenue from that source has kept dropping, support mechanisms have shifted in some markets to flat-fee “per connection” fees. 


But that already seems not to be generating sufficient funds, either, at least in the U.S. market. So in what can only be called a major shift, some regulators are looking at levying fees on some users, who are not actually “customers.” 


Specifically, regulators are looking at fees imposed on a few hyperscale app providers, using the logic that they represent a majority of internet traffic demands on access network providers. Nobody has done so, yet, but the same logic might also be applied to wide area network transport. 


Nobody can be quite sure what new policies might be adopted by hyperscale app providers subjected to such rules. Nobody knows whether virtual private networks might be a way some content providers seek to evade such rules. 


Nobody knows whether users of content, app and transaction networks, advertisers, retail merchants or others  will bear the actual burden of the new costs, but that is entirely likely: somebody other than the hyperscale app providers will wind up paying. 


Many will argue such rules are “fair.” Whether they are, or are not, is debatable. That users of some popular apps, advertisers, retailers or others will find their costs rising is not very contestable.


Ofcom to End Net Neutrality

Ofcom, the U.K. communications regulator, is preparing a big U-turn on network neutrality, as are regulators in the European Union and possibly elsewhere. Having concluded that protecting local internet service providers is a bigger issue than any supposed anti-competitive behavior on their part, regulators now are planning an about-face on those rules. 


Some of us might question whether the rules actually addressed a real problem in the first place. Since at least 2014, many observers, and even EU regulators, seemed to sense problems.  


source: Prosek 


Network neutrality rules, as you recall, were supposed to “protect” app providers from anti-competitive behavior of internet service providers. 


In some markets, such as South Korea and the EU, it now appears regulators are more concerned about protecting local ISPs from a few hyperscale app providers. And that will require overturning network neutrality. 


The proposed new U.K. rules would allow ISPs to offer quality of service features banned by network neutrality rules, such as latency or bandwidth guarantees, traffic-shaping measures to avoid congestion and zero rating of access to some apps. 


Some of us always had issues with consumer network neutrality for precisely the reason that it prevented ISPs from developing differentiated offers that consumers might actually prefer. 


Quality of service for IP voice and videoconferencing apps is among the clearest examples. But gaming apps and services also are an area where latency guarantees might be beneficial. 


If net neutrality goes away, good riddance. It was a solution for a problem that did not exist and prevented innovation in the consumer home broadband business.


AT&T Might Join Ranks of Fiber Joint Venture Firms

Fiber-to-home cost and time to market, plus the firm’s continuing need to deleverage (reduce debt), seem to have convinced AT&T it is time to take on a joint venture partner to finance new infrastructure. 


And AT&T’s upgrade requirements are fairly daunting. Of roughly 57 million U.S. homes passed, only a bit more than a quarter of locations have been upgraded to fiber access. That means potentially 40 million locations that conceivably could be rebuilt using optical fiber access. 


At roughly $800 per location just for the network to pass them, the capital investment could be about $32 billion. Additional capital would have to be invested to activate customer locations. At 40 percent take rates, that might imply connecting 16 million locations.


At $600 per customer location, that implies an additional investment of perhaps $9.6 billion. Altogether, AT&T might have to invest some $41.6 billion to rebuild its copper-served home broadband locations with fiber and connect 40 percent of them. 
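A quick check of that back-of-envelope arithmetic, using the post's round inputs (40 million locations, $800 to pass, 40 percent take rate, $600 to connect); totals shift slightly with the exact location count assumed:

```python
# Back-of-envelope AT&T fiber upgrade cost, using the post's assumptions.
locations = 40_000_000      # copper locations that could be rebuilt with fiber
cost_to_pass = 800          # dollars per location, network only
take_rate = 0.40            # share of locations expected to buy service
cost_to_connect = 600       # dollars per activated customer location

pass_capex = locations * cost_to_pass
connect_capex = locations * take_rate * cost_to_connect
total_capex = pass_capex + connect_capex

print(f"pass: ${pass_capex / 1e9:.1f}B")
print(f"connect: ${connect_capex / 1e9:.1f}B")
print(f"total: ${total_capex / 1e9:.1f}B")
```

Note that the connect spend scales with the take rate, so a higher-than-expected 50 percent take rate would add only about $2.4 billion to the total.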


But AT&T does not presently view all those homes passed as candidates for upgrades. 


AT&T’s decision to move into new markets hinges on at least three factors, Chief Executive Officer John Stankey has said: the area has to be underserved by broadband; it has to be profitable for the company; and AT&T has to be the first provider of fiber to the home.

Thursday, October 20, 2022

Can VR/AR or Metaverse Wait 2 Decades for the Compute/Connectivity Platform to be Built?

The Telecom Infra Project has formed a group to look at metaverse-ready networks. Whether one accepts the notion of “metaverse” or not, virtually everyone agrees that future experiences will include use of extended, augmented or virtual reality on a wider scale. 


And that is certain to affect both computing and connectivity platforms, in the same way that entertainment video and gaming have shaped network performance demands, in terms of latency performance and capacity. 


The metaverse or just AR and VR will deliver immersive experiences that will require better network performance, for both fixed and mobile networks, TIP says. 


And therein lie many questions. If we assume both ultra-high data bandwidth and ultra-low latency for the most-stringent applications, both “computing” and “connectivity” platforms will be adjusted in some ways. 


Present thinking includes more use of edge computing and probably quality-assured bandwidth in some form. But it is not simply a matter of “what” will be required but also “when” resources will be required, and “where?”


As always, any set of performance requirements might be satisfied in a number of ways. What blend of local versus remote computing will work? And how “local” is good enough? What mix of local distribution (Wi-Fi, Bluetooth, 5G and other) is feasible? When can--or should--remote resources be invoked? 


And can all that be done relying on Moore’s Law rates of improvement, Edholm’s Law of access bandwidth improvement or Nielsen’s Law of internet access speed? If we must create improvements at faster rates than simply relying on historic rates of improvement, where are the levers to pull?


The issue really is timing. Left to its own internal logic, headline-speed service in most countries will reach terabits per second by perhaps 2050. The problem for metaverse or VR experience providers is that they might not be able to wait that long. 


That means the top-end home broadband speed could be 85 Gbps to 100 Gbps by about 2030. 

source: NCTA  
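That 2030 projection follows from compounding at Nielsen's Law rates. A sketch, assuming a 2022 top-tier starting point of roughly 3 Gbps to 5 Gbps (some U.S. ISPs sold multi-gigabit tiers in that range in 2022):

```python
# Project top-tier home broadband speed under Nielsen's Law:
# headline speed grows roughly 50 percent per year.
ANNUAL_GROWTH = 1.5
YEARS = 2030 - 2022

# Assumed 2022 top-tier offers, in Gbps (illustrative, not official figures).
for start_gbps in (3, 5):
    speed_2030 = start_gbps * ANNUAL_GROWTH ** YEARS
    print(f"start {start_gbps} Gbps -> ~{speed_2030:.0f} Gbps in 2030")
```

With those starting points the projection brackets the 85 Gbps to 100 Gbps range: eight years of 50 percent growth multiplies speed by about 26 times.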


But most consumers will not be buying service at such rates. Perhaps fewer than 10 percent will do so. So what could developers expect as a baseline? 10 Gbps? Or 40 Gbps? And is that sufficient, all other things considered? 


And is access bandwidth the real hurdle? Intel argues that metaverse will require computing resources 1,000 times better than today. Can Moore’s Law rates of improvement supply that degree of improvement? Sure, given enough time. 


As a rough estimate, vastly improved platforms--beyond Nielsen’s Law rates of improvement--might be needed within a decade to support widespread VR/AR or metaverse use cases, however one wishes to frame the matter. 


Though the average or typical consumer does not buy the “fastest possible” tier of service, the growth of headline-tier speed since the days of dial-up access has been remarkably steady (linear on a log scale). 


And that growth trend of 50 percent per year speed increases, known as Nielsen’s Law, has operated since the days of dial-up internet access.


The simple question is: “If the metaverse requires 1,000 times more computing power than we generally use at present, how do we get there within a decade?” Given enough time, the normal increases in computational power and access bandwidth would get us there, of course.
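The "given enough time" point can be quantified with the two rules of thumb the post invokes. At Nielsen's Law rates, a 1,000-fold speed increase takes about 17 years; at classic Moore's Law rates (a doubling roughly every two years), a 1,000-fold compute gain takes about 20 years:

```python
import math

# Years to reach a 1,000x improvement under two rules of thumb.
target = 1000

# Nielsen's Law: access speed grows ~50 percent per year.
nielsen_years = math.log(target) / math.log(1.5)

# Moore's Law (classic form): compute doubles roughly every two years.
moore_years = 2 * math.log2(target)

print(f"Nielsen's Law (50%/yr): ~{nielsen_years:.0f} years")
print(f"Moore's Law (2x/2yr):   ~{moore_years:.0f} years")
```

Both figures land around two decades, which is the crux of the timing problem: hitting 1,000x within one decade means improving materially faster than either historic trend.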


But metaverse or extensive AR and VR might require that the digital infrastructure foundation already be in place before apps and environments can be created. 


What that will entail depends on how fast the new infrastructure has to be built. If we are able to upgrade infrastructure roughly on the past timetable, we would expect to see a 1,000-fold improvement in computation support perhaps every couple of decades. 


That assumes we pull a number of levers beyond expected advances in processor power, processor architectures and declines in cost per compute cycle. Network architectures and appliances also have to change. Quite often, so do applications and end user demand. 


The mobile business, for example, has taken about three decades to achieve a 1,000 times change in data speeds. We can assume raw compute changes faster, but even then, based strictly on Moore’s Law rates of improvement in computing power alone, it might still require two decades to achieve a 1,000 times change. 


source: Springer 


And all that assumes underlying demand drives the pace of innovation. 


For digital infrastructure, a 1,000-fold increase in supplied computing capability might well require any number of changes. Chip density probably has to change in different ways. More use of application-specific processors seems likely. 


A revamping of cloud computing architecture towards the edge, to minimize latency, is almost certainly required. 


Rack density likely must change as well, as it is hard to envision a 1,000-fold increase in rack real estate over the next couple of decades. Nor does it seem likely that cooling and power requirements can simply scale linearly by 1,000 times. 


So the timing of capital investment in excess of current requirements is really the issue. How soon? How much? What type?


The issue is how and when to accelerate rates of improvement. Can widespread use of AR/VR or the metaverse happen if we must wait two decades for the platform to be built?

Wednesday, October 19, 2022

Correlation is Not Causation

There are two diametrically opposed ways of explaining this data correlating fixed network download speeds with gross domestic product per person (adjusted using the purchasing power parity method). We can see that speeds and GDP are positively correlated. 

source: Ookla 


Higher GDP per person correlates with higher downstream home broadband speeds. What we cannot assert is that higher speeds “cause” higher per-capita GDP. We only know that speed and per-capita GDP are correlated. 


Policy advocates always argue that higher investment in home broadband capability leads to higher economic growth and therefore higher GDP. They might point to such data as evidence.


The reverse hypothesis might also be advanced: higher per-capita GDP leads to more ability to pay, and therefore higher-quality home broadband. In other words, higher economic growth and per-capita GDP leads to stronger demand for quality broadband. 


Higher income is correlated with internet adoption rates, for example. Higher adoption rates also are correlated with age and educational attainment. But we cannot argue that broadband “causes” that higher educational attainment, any more than broadband causes age. 

source: Phoenix Center for Advanced Legal & Economic Public Policy Studies 


Directv-Dish Merger Fails

Directv’s termination of its deal to merge with EchoStar, apparently because EchoStar bondholders did not approve, means EchoStar continue...