Friday, September 2, 2022

No Productivity Boom from New IT or Remote Work?

It seems incontestable that knowledge and office workers prefer remote working. Whether such work results in higher, lower or unchanged productivity is another matter, though.


Some economists say there was no post-Covid productivity increase, despite the information technology investment boom that happened during the pandemic lockdown. That is not surprising. 


There long has been a lag--sometimes lasting a decade--before big IT investments show up as correlated with higher productivity. It also can be argued that the areas where new IT was deployed are not likely to boost productivity.


Supply chain investments might or might not contribute to productivity, even if they improve resilience. Investments in security, remote work, personal computers and so forth likewise might or might not contribute to any measurable lift in productivity, even over a five-year time frame. In fact, to the extent such investments were necessary simply to allow work to continue, and were in some sense duplicate investments (people already had computers and broadband at work), they might reduce output relative to input.


Work-life balance arguably is better. But is that necessarily good for productivity? It is terribly hard to say. 


Outcomes also often are based on team output, not individual output. In such cases it is team productivity that matters. And such output often is intangible. How do you properly measure an intangible?  


To be sure, such measurements always are difficult, whether we are looking specifically at post-Covid or “during Covid” time periods or non-Covid times. 


Where it comes to knowledge or office workers, some might say the task is nearly impossible. Indeed, observers often note the difficulty before proceeding to argue such measurements can be made. 


Whether measurements actually can be made remains debatable. The point is that all our discussions about the productivity of remote work are opinions, not facts. We mostly cannot measure knowledge worker or office worker productivity, especially non-tangible outputs, whether remote or local.

Wednesday, August 31, 2022

Anna Karenina and Successful Startups

If you have ever worked at a startup that failed, the “Anna Karenina principle” will make sense to you. If success requires solving a number of key problems, then failure to solve just a single one of those problems produces failure. My personal experience is that failure happens as much as 70 percent of the time, and perhaps closer to 100 percent over a decade’s time, if one counts acquisition or failure to scale as outcomes that represent failure.


To put it another way, failure goes through a wide gate; success through a very narrow gate.


source: slideshare 


The way I have come to understand that principle is by analogy to high walls, locked doors or hurdles. Success requires scaling a series of high walls, opening a series of locked doors or clearing a series of hurdles. Each obstacle must be successfully navigated; failure at any single obstacle kills the venture.


source: Inc. 


Those obstacles can include failure to get the next round of funding; making a bad key hire; betting on the wrong product; launching too early or too late. Trying to solve the wrong problem; burning out the founding team; failure to manage remote teams; making the wrong pivot; lack of domain knowledge; team discord or lack of focus also are gates, hurdles or locked doors. 


Not “listening to customers” is a gate. But sometimes one has to ignore feedback as well. Poor marketing; lack of a viable revenue model; user interface problems; the wrong pricing model; a “too high” cost model; more-nimble competitors; having the wrong team; running out of cash before you are ready to go to market or misjudging the existence of a market also can be key hurdles, walls or gates. 


The point is that success requires success at every single obstacle. Failure requires only one obstacle not surmounted.
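One way to see why the gate is so narrow is simple compound probability: the overall odds of success are the product of the odds of clearing each gate. Below is a minimal sketch, with hypothetical obstacles and purely illustrative odds, showing how quickly overall odds collapse even when each individual gate looks survivable.


```python
# Illustrative sketch of the "narrow gate" arithmetic: overall odds of
# success are the product of the odds of clearing each obstacle.
# The obstacles and per-obstacle odds below are hypothetical placeholders.

obstacles = {
    "next funding round": 0.8,
    "key hires": 0.9,
    "right product bet": 0.7,
    "launch timing": 0.85,
    "founder endurance": 0.9,
    "viable revenue model": 0.8,
}

p_success = 1.0
for p in obstacles.values():
    p_success *= p  # failing any single gate ends the venture

print(f"Odds of clearing every gate: {p_success:.0%}")      # about 31%
print(f"Odds of failing somewhere:   {1 - p_success:.0%}")  # about 69%
```


Even with each gate at 70 percent to 90 percent, the overall odds land near 31 percent, roughly consistent with the 70-percent failure figure cited above.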


Tuesday, August 30, 2022

How Big is the "Value" Segment of U.S. Home Broadband Market?

Home broadband for $25 a month is the value proposition Verizon fixed wireless now offers for top-end customers of its mobility service. For T-Mobile fixed wireless customers on premium multi-user plans,  the recurring cost is $30 a month. 


Say what you will about the expected speeds of such services, or the cost of higher-speed services from either cable or fiber-to-home service providers. 


For a possibly substantial portion of the market, such price points are going to be attractive, even if the trade-off is lower top-end speeds. 


It might be the case that “good enough” service is worth paying a “reasonable price” for. 


That is important for home broadband market competitors. Even if such offers do not appeal to the entire market, the “good enough service for a reasonable price” segment of the market could be substantial, especially for Verizon and T-Mobile mobility service customers. 


That is similar to the “same service, lower price” positioning often used by attackers in established markets. If the top possible speed for fixed wireless sold by Verizon is about 300 Mbps (millimeter wave assets help), then Verizon theoretically could reach between a third and 45 percent of U.S. home broadband buyers, based on data from Openvault. 


T-Mobile speeds for home broadband are said to range up to about 182 Mbps, suggesting a third or so of U.S. home broadband accounts could be addressable. 

  

It is too early to say whether fixed wireless platforms will be long-lasting drivers of market share in internet access markets, or only relatively temporary. Some believe speed limitations will ultimately reduce fixed wireless attractiveness. Others think fixed wireless capacity can keep growing. 


But at least for the moment, it is hard to ignore U.S. cable operators’ market share losses and the availability of fixed wireless from Verizon and T-Mobile. In the near term, fixed wireless market share gains seem a certainty. 


Comcast continues to claim that fixed wireless is not damaging its home broadband business, and that might well be correct. For any ISP, a customer move is an opportunity to gain or lose an account, so lower rates of dwelling change should logically reduce the chances of adding new accounts. 


In the second quarter of 2022, Comcast reported a net loss of customer relationships and “flat” home broadband accounts. 


That might suggest to some observers that stepped-up telco fiber-to-home builds and fixed wireless account gains are starting to change market share dynamics. Those trends possibly were not obvious in the first quarter of 2022. 


All that said, there are possible signs of change. Fixed wireless already is driving net home broadband additions for T-Mobile. In its second-quarter earnings report, T-Mobile said it added more than half a million net new home broadband accounts, which might put it on track to be the biggest net gainer for the third quarter in a row. 


In the fourth quarter of 2021, fixed wireless represented 74 percent of Verizon net home broadband additions.  


For the first time ever, Comcast did not gain net accounts, according to market watchers. Verizon added significant numbers of new home broadband accounts in the same quarter.  


The longer-term issue is demand, as typical data consumption keeps growing and “typical speeds” likewise keep climbing. 


Perhaps use of millimeter wave assets and better radio technologies will solve much of that problem for fixed wireless operators. Perhaps new wholesale arrangements will develop. 


What might also be happening is that consumer appetite for “more affordable” internet access is substantial. Many households might be willing to trade “speed” for “lower price.” In other words, as with any product, value is a combination of features and price. Fixed wireless might show the existence of a market segment that cares about “reasonable speed for a reasonable price” more than “fast” levels of service. 


That is not the whole market, but it is potentially a big enough segment to shift billions of dollars of home broadband revenue and significant market share. 


Users Will Not Care about Web3 or Web 3.0

Is the semantic web, or web 3.0, the same thing as web3? Apparently not, some argue. But others will argue the terms ultimately will be used interchangeably, even if purists continue to insist the two concepts are quite different.  


Web 3.0 might be defined as a standards effort aimed at making machine-readable data universal. That is akin to saying that the “internet” is different from TCP/IP or Ethernet. 


Web3, on the other hand, refers to a decentralized internet based on blockchain. 


Others will argue the semantic web (web 3.0) focuses on efficiency and intelligence by reusing and linking data across websites, while web3 focuses on decentralization, security and end user ownership of data.


As a practical matter, users will not care very much, any more than they care about how data is transported, where it is stored, or how their phones or cars work. 


The nomenclature will eventually settle down. “Semantic” is not likely to gain widespread usage. For that matter, “web 2.0” never really became popular currency, either. And so it is possible that neither web3 nor web 3.0 ever really becomes a mainstream user term. 

source: mdpi.com, cointelegraph 


Far more likely, people will talk about these things the way they already talk about “phones,” “computers” or “cars”: they will simply say they “use the internet.” But as has been the case in the past, applications, features and capabilities will continue to develop for devices, apps and platforms. 


Most people will still not care how far towards a “next generation internet” we have gotten. They will simply be able to do and experience new things, in new ways, as we have found in the past. 


We have moved from a character-based internet to a visual internet; from “read only” to “read-write;” from content to commerce. Metaverse use cases will eventually develop. But it is possible that much of the shift to blockchain-based security will happen in the background. 


Regular people will not care much about decentralization or machine-readable data, if they care at all. People do not need to know how electricity works, how cars are made or what software powers their devices in order to use them every day. 


Monday, August 29, 2022

Telco Choice of TCP/IP for Next Generation Networks was Fateful

Technology decisions, it goes without saying, can dramatically change access provider, data center and application business models. Cloud computing creates the business opportunity for data centers and “software as a service.”


Fatefully, connectivity provider adoption of TCP/IP upended the “closed” business model and substituted an “open” model that also relegates access providers to “connectivity” roles. If telcos did not want to become “dumb pipes,” they should not have adopted TCP/IP as their “next generation network” platform. 


TCP/IP was the computing industry’s network model, where value was created on top of layered and disaggregated transport and access. Modern computing necessarily operates on an “over the top” basis, with hardware and operating systems disaggregated from the apps that use them. 


To be sure, there were strong arguments in favor of TCP/IP and Ethernet: connectors were low cost, compared to proprietary telco connectors. Data networks using TCP/IP and Ethernet were relatively simple. The ecosystem was well developed and disaggregated, presumably leading to faster and easier innovation. 


As all communications networks began to operate as “data” networks, that made sense. 


None of that necessarily requires retail pricing of data connections on a flat fee, unlimited usage model. 


As access service providers complain about capacity costs and seek new revenues from hyperscale app providers, other common solutions exist, even if hard to implement under highly-competitive conditions. 


Obviously, selling more customers unlimited usage plans, even at higher recurring prices, does not encourage customers to be careful about their consumption. 


By choosing TCP/IP as its next-generation network, the global “telecom” industry chose to operate data networks based on loosely-coupled layers. But it is not a given that flat fees with virtually unlimited usage are the mandatory pricing model. 


Arguably much of the ISP concern about business models grows directly from decisions made to operate retail access networks with a flat fee/virtually unlimited usage model. 


Flat Rate Pricing is the Foundation of ISP Access Capex

“All you can eat” retail pricing policies (flat rates for unlimited usage) are a major reason connectivity provider business models are so challenged. On the other hand, wholesale transit prices--which are metered--also keep dropping, on a cost per bit basis. 


That is a major pricing model change from the economics of the voice era, when most usage was on a metered basis: use more, pay more. 


Internet service providers might complain that they “cannot” charge by usage, for any number of reasons. Still, that is a business choice that brings with it the perpetual need to upgrade facilities while retail revenue remains relatively flat. 


So, in a real sense, internet access providers are faced with capital investment choices partly of their own making; partly imposed by competitive market conditions and partly driven by a major shift of entertainment video preferences by consumers. 


By 2020, web and file sharing accounted for only about 14 percent of global IP traffic, for example. Some 82 percent of total traffic was entertainment video traffic. 


source: Cisco, Vox, Recode 


The shift of video entertainment from “broadcast” (“multicast”) and “packaged media” delivery to unicast, network delivery might be largely beyond ISP control. But that shift alone has huge capital investment and pricing implications.


Unicast, two-way networks cost more than multicast or broadcast networks. A broadcast network can deliver one copy of something to millions of consumers all at the same time. Unicast essentially means delivering millions of separate copies at different times. 


If one copy of an item requires X bandwidth, then multicast delivery requires X bandwidth. Unicast delivery of that same object to one million viewers requires one million times X bandwidth. 
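A back-of-the-envelope sketch makes that arithmetic concrete; the 5-Mbps stream bitrate and one-million-viewer audience are assumptions for illustration only, not measured data.


```python
# Multicast versus unicast delivery of the same video stream.
# The bitrate and audience size are illustrative assumptions, not data.

stream_mbps = 5            # assumed bitrate of one stream, in Mbps
viewers = 1_000_000        # assumed audience size

multicast_load = stream_mbps            # one copy serves every viewer
unicast_load = stream_mbps * viewers    # one copy per viewer if all watch
                                        # simultaneously (worst case)

print(f"Multicast aggregate load: {multicast_load:,} Mbps")
print(f"Unicast aggregate load:   {unicast_load:,} Mbps "
      f"(about {unicast_load / 1_000_000:.0f} Tbps)")
```


In practice unicast viewing is spread over time, but total traffic delivered still scales with the number of viewers rather than with the number of titles.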


So the shift of video consumption to unicast drives huge changes in network investment. Perhaps nobody envisioned that the largely character-based early internet would evolve into the multimedia platform it now has become, with former multicast traffic increasingly being delivered in unicast formats. 


That alone would require ever-increasing access bandwidth as unicast video consumption (streaming and downloading) increases. 


In other words, flat rate pricing for virtually unlimited usage does not match the requirements of delivering entertainment video. 


It might be quite difficult for any single ISP to use pricing that matches consumption with cost. But flat rate pricing for virtually unlimited usage is the primary driver of ever-growing access provider capital investment.
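A minimal sketch, using entirely assumed figures for the monthly fee, usage and per-gigabyte delivery cost, illustrates the squeeze: the flat retail fee stays the same while consumption-driven cost grows every year.


```python
# Flat-rate squeeze: retail revenue per account is fixed while usage
# (and thus delivery cost) keeps growing. All figures are assumptions.

flat_fee = 60.00        # assumed monthly retail price, dollars
usage_gb = 500.0        # assumed starting monthly usage per account, GB
cost_per_gb = 0.02      # assumed delivery cost per GB, dollars
usage_growth = 1.30     # assumed 30 percent annual usage growth

for year in range(1, 6):
    delivery_cost = usage_gb * cost_per_gb
    margin = flat_fee - delivery_cost
    print(f"Year {year}: usage {usage_gb:7,.0f} GB, "
          f"cost ${delivery_cost:5.2f}, margin ${margin:5.2f}")
    usage_gb *= usage_growth
```


Under metered pricing, revenue would rise along with usage; under the flat-rate model, the main responses are higher capital spending, periodic price increases or usage caps.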


Thursday, August 25, 2022

Intel Gets Brookfield Investment for Chip Fabrication Expansion

Chip fabs are the latest form of digital infrastructure investment. Intel Corp and Canada's Brookfield Asset Management agreed to jointly fund up to $30 billion for Intel’s chip factories in Arizona.


As frequently is the case with other co-funding ventures in the digital infrastructure space, Intel will limit its own capital investment in return for giving up part of the future revenue from the project.


Brookfield will invest up to $15 billion for a 49-percent stake in the project. Intel will retain majority ownership and operating control of the two chip factories.


Directv-Dish Merger Fails

Directv’s termination of its deal to merge with EchoStar, apparently because EchoStar bondholders did not approve, means EchoStar continue...