Sunday, February 19, 2023

Why Outcomes Always Lag Major Technology Investments

We might as well get used to the idea that artificial intelligence, machine learning, AR, VR, metaverse and Web3 are not going to produce the expected advantages as fast as we invest in those technologies.

The reason is simply that organizations cannot change as fast as technology does. So there will be a lag between investment and perceived outcomes.

Martec's Law essentially argues that technology change happens faster than humans and organizations can change. That might explain why new technology sometimes takes decades to produce measurable change in organizational performance, or why a productivity gap exists.  

source: Chiefmartec 
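
To make the dynamic concrete, consider a toy model in which technology capability compounds while organizational absorption grows only slowly, which is how Martec's Law is usually drawn. The growth rates below are illustrative assumptions, not measurements; the point is simply that the gap widens over time.

```python
# Illustrative sketch of Martec's Law: technology changing exponentially
# while organizations change only slowly. The rates here are assumptions
# chosen for illustration, not measured values.
import math

for year in range(1, 21, 5):
    tech = math.exp(0.3 * year)       # assumed exponential technology change
    org = 1 + math.log(year + 1)      # assumed slow organizational absorption
    print(f"year {year:2d}: tech capability {tech:7.1f}, "
          f"org absorption {org:4.1f}, gap {tech - org:7.1f}")
```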

Since there is no way organizations can change fast enough to keep up with technology, the practical task is to decide which specific technologies to embrace. In some instances a major reset is possible, but typically only through fairly significant organizational change, such as spinning off parts of a business, or selling or acquiring assets.


source: Chiefmartec 

Some might argue that the Covid-19 pandemic caused an acceleration of technology adoption, though some also argue that demand was essentially “pulled forward” in time. In that sense, the pandemic was a “cataclysmic” event that caused a sudden burst of technology adoption.

source: Chiefmartec

The main point is that managerial discretion is involved. Since firms cannot deploy all the new technologies, choices have to be made about which to pursue. Even when the right choices are made, however, outcomes might take a while to surface. That likely is going to happen with AI investments, much as lags in measured productivity have followed other major technology investments in the past.

We might reasonably expect similar disappointment with other major trends including metaverse, AR, VR or Web3. Organizations cannot change as fast as the technology does.

Wednesday, February 15, 2023

IP and Cloud Have Business Model Implications

Sometimes we fail to appreciate just how much business models in the connectivity and computing businesses are related to, driven by or enabled by the ways we implement technology. The best example is the development of “over the top” or “at the edge” business models. 


When the global connectivity industry decided to adopt TCP/IP as its next-generation network, it also embraced a layered approach to computing and network functions, where functions of one sort are encapsulated within a layer. That means the internals of a layer can be changed without requiring changes in all the other layers and functions.
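
A minimal sketch of what that encapsulation buys, assuming nothing more than an abstract transport interface. The class names here are illustrative, not any real stack's API.

```python
# Minimal sketch of layered encapsulation: the application codes to an
# abstract transport interface, so transport internals can change
# without any change to the application. Names are illustrative.
from abc import ABC, abstractmethod

class Transport(ABC):
    @abstractmethod
    def send(self, payload: bytes) -> None: ...

class CopperTransport(Transport):
    def send(self, payload: bytes) -> None:
        print(f"sending {len(payload)} bytes over copper")

class FiberTransport(Transport):
    def send(self, payload: bytes) -> None:
        print(f"sending {len(payload)} bytes over fiber")

class MessagingApp:
    def __init__(self, transport: Transport):
        self.transport = transport   # the app never sees transport internals

    def send_message(self, text: str) -> None:
        self.transport.send(text.encode())

# Swapping the lower layer requires no change to the application layer.
MessagingApp(CopperTransport()).send_message("hello")
MessagingApp(FiberTransport()).send_message("hello")
```

The application layer never changes when the transport layer does, which is the whole point of layering.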


That has led to the reality that application layer functions are separated from the details of network transport or computing. In other words, Meta and Google do not need the permission of any internet service provider to connect their users and customers.


We might eventually come to see that the development of remote, cloud-based computing leads to similar connectivity business model changes. Practical examples already abound: e-commerce has disintermediated distributors, allowing buyers and sellers to conduct transactions directly.


Eventually, we might come to see owners of digital infra as buyers and sellers in a cloud ecosystem enabled by marketplaces that displace older retail formats. 


Where one telco or one data center might once have sold to customers within a specific geographic area, we might see more transactions mediated by a retail platform, rather than conducted directly between an infra supplier (computing cycles, storage, connectivity) and its customers or users.


In any number of retail areas, this already happens routinely. People can buy lodging, vehicle rentals, other forms of transportation, clothing, professional services, grocery and other consumer necessities or discretionary items using online marketplaces that aggregate buyers and sellers. 


So one wonders whether this also could be replicated in the connectivity or computing businesses at a wider level. And as with IP and layers, such developments would be linked to or enabled by changes in technology. 


Forty years ago, if asked to describe a mass market communications network, one would likely have talked about a network of class 4 voice switches at the core of the network, with class 5 switches for local distribution to customers. So the network was hierarchical.


That structure also was closed: ownership of such assets was restricted, by law, to a single provider in any geography. Also, all apps and devices used on such networks had to be approved by the sole operator in each area.


source: Revesoft 


The point is that there was a correspondence between technology form and business possibility. 


If asked about data networks, which would have been X.25 at that time, one would have described a series of core network switches (packet switching exchanges) that offload to local data circuit-terminating equipment (functionally equivalent to a router). Think of the PSE as the functional equivalent of the class 4 switches.


source: Revesoft 


Again, there was a correspondence between technology and business models. Data networks could not yet be created “at the edge of the network” by enterprises themselves, as ownership of the core network switches was necessary. Frame relay, which succeeded X.25, essentially followed the same model, as did ATM.


The adoption of IP changed all that. In the IP era, the network itself could be owned and operated by an enterprise at the edges, without the need to buy a turned-up service from a connectivity provider. That is a radically different model. 


At a device level one might say the backbone network now is a mesh network of routers that take the place of the former class 4 switches, with flatter rather than hierarchical relationships between the backbone network elements. 


source: Knoldus 
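
A toy way to see the difference: model both topologies as graphs and count hops. The node names and links below are invented for illustration.

```python
# Toy contrast between the old hierarchical topology (class 5 switches
# homing to class 4 tandems) and a flatter router mesh. Node names and
# links are illustrative only.
from collections import deque

hierarchy_links = [
    ("class4-A", "class4-B"),
    ("class4-A", "class5-1"), ("class4-A", "class5-2"),
    ("class4-B", "class5-3"), ("class4-B", "class5-4"),
]
mesh_links = [   # every router peers directly with every other
    ("router-1", "router-2"), ("router-1", "router-3"), ("router-1", "router-4"),
    ("router-2", "router-3"), ("router-2", "router-4"), ("router-3", "router-4"),
]

def adjacency(links):
    """Build an undirected adjacency list from a list of links."""
    graph = {}
    for a, b in links:
        graph.setdefault(a, []).append(b)
        graph.setdefault(b, []).append(a)
    return graph

def hops(graph, src, dst):
    """Breadth-first search returning the hop count between two nodes."""
    seen, queue = {src}, deque([(src, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == dst:
            return dist
        for nxt in graph[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))

# Edge-to-edge traffic climbs the hierarchy, but crosses the mesh directly.
print(hops(adjacency(hierarchy_links), "class5-1", "class5-3"))  # 3 hops
print(hops(adjacency(mesh_links), "router-1", "router-3"))       # 1 hop
```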


Internet access is required, but aside from that, entities can build wide area networks completely from the edges of the network. That of course affects WAN service provider revenues. 


These days, we tend to abstract almost everything: servers, switches, routers, optical transport and all the network command and control, applications software and data centers. So the issue is whether that degree of abstraction, plus the relative ease of online marketing, sales, ordering, fulfillment and settlement, creates some new business model opportunities.


source: SiteValley 


As has been the case for other industries and products, online marketplace sales that disintermediate distributors would seem to be an obvious new business model opportunity. To be sure, digital infra firms always have used bilateral wholesale deals, sales agents and other distributors.


What they have not generally done is contribute inventory to third-party platforms as a major sales channel. Logically, that should change.
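
What such a marketplace might look like, in the simplest possible terms: providers contribute listings, and the platform matches and ranks them for buyers. Everything here--names, fields, prices--is invented for illustration.

```python
# Hypothetical sketch of infra providers contributing inventory to a
# third-party marketplace that matches buyers to sellers. All names,
# fields and prices are invented for illustration.
from dataclasses import dataclass

@dataclass
class Listing:
    provider: str       # a telco, data center or cloud operator
    resource: str       # "transit-10g", "colo-rack", "compute-vm", ...
    region: str
    monthly_price: float

catalog = [
    Listing("telco-a", "transit-10g", "us-east", 900.0),
    Listing("dc-b", "colo-rack", "us-east", 1200.0),
    Listing("telco-c", "transit-10g", "us-east", 850.0),
]

def match(catalog, resource, region):
    """Return matching listings, cheapest first, the way a retail
    platform might rank sellers for a buyer's query."""
    hits = [l for l in catalog if l.resource == resource and l.region == region]
    return sorted(hits, key=lambda l: l.monthly_price)

for listing in match(catalog, "transit-10g", "us-east"):
    print(listing.provider, listing.monthly_price)
```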


Saturday, February 11, 2023

Comcast to Cover 86% to 88% of Locations with 10 Gbps Symmetrical Home Broadband by 2025

Comcast now plans to deploy symmetrical 10-Gbps capabilities to 50 million homes and businesses by 2025. The significance is that Comcast’s networks pass perhaps 57 million to 58 million home locations, and perhaps 61 million home and business locations.


So the 10-Gbps upgrade encompasses perhaps 86 percent to 88 percent of home locations passed.
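
The arithmetic, using the passings estimates above:

```python
# Arithmetic behind the coverage estimate, using the passings figures above.
upgraded = 50_000_000                 # homes and businesses slated for 10 Gbps
home_passings = (57_000_000, 58_000_000)

for passings in home_passings:
    print(f"{upgraded / passings:.1%} of {passings:,} home locations")
# ~87.7% of 57,000,000 and ~86.2% of 58,000,000 -- roughly 86% to 88%
```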


So Comcast is going to be a daunting foe for would-be fiber-to-home providers to compete with, even though the general thinking is that the first FTTH network in a market has clear advantages.


But that is looking at physical media, not the reality of network capabilities. If Comcast gets there first with symmetrical 10-Gbps services, it will be hard for FTTH providers to catch up, especially as Comcast adds more fiber-accessed locations to serve business users.


I don’t know about you, but I really do not care what the access media is, if I can get symmetrical 10-Gbps service.

Wednesday, February 8, 2023

Fiber Capex Contrasts at Lumen

The fourth quarter 2022 Lumen Technologies earnings call was in some ways a study in infrastructure contrasts and an indication that further restructuring could happen. 


Lumen is adding about six million intercity fiber miles of capacity by 2026. That supports the part of Lumen’s business built largely around the intercity capacity business in the United States, and global capacity in the northern hemisphere. 


Contrast that with what happened to the fiber-to-home program. “As we've said previously, we hit the pause button during the fourth quarter,” said Kate Johnson, Lumen CEO.  “Now, to be frank, it was more of a stop button than a pause.”


“A natural outcome of our assessment of Quantum is a more focused build target,” said Johnson. “We believe the overall Quantum enablement opportunity is eight million to 10 million locations.”


For Lumen, that suggests up to half the homes in its service territory represent the best chances to monetize fiber-to-home investments. Lumen has an estimated 21 million to 24 million residential and small business locations passed by its networks in 16 states.


The latest statements suggest Lumen believes between 38 percent and 43 percent of mass market locations are suitable for FTTH investment over the next half decade or so. 
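
The arithmetic behind that range, using the figures above:

```python
# Arithmetic behind the 38 percent to 43 percent estimate, using the
# build target and passings figures above.
targets = (8_000_000, 10_000_000)          # Quantum FTTH build target range
passings = (21_000_000, 24_000_000)        # mass market locations passed

print(f"{targets[0] / passings[0]:.1%}")   # 8M of 21M  ~= 38.1%
print(f"{targets[1] / passings[1]:.1%}")   # 10M of 24M ~= 41.7%
```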


The issue for Lumen, as was the case for the former US West--which had the least-dense footprint of all the former Baby Bells--is what to do about the rest of the customer base, assuming copper access is not a long-term solution.


Divesting rural assets already has been the answer, as Lumen sold off access assets in 20 states. That raises the theoretical possibility that Lumen sells still more of its rural assets over time, as about 60 percent of its local access locations are deemed insufficiently profitable to serve with FTTH facilities at the moment. 


Keep in mind that 79 percent of Lumen’s revenue is earned serving large and mid-sized business customers. Most of that revenue comes from the intercity network and local connections and services to customers in the larger urban markets. 


Much small business revenue is counted in mass markets, where, increasingly, revenue is anchored by fiber-based internet access (home broadband) at about $60 a month.

source: Lumen Technologies 


FTTH investments rarely offer a “no brainer” business case. In Lumen’s case, the issue will be what to do about the 60 percent of mass market locations that do not seem amenable to a fiber upgrade.


Tuesday, February 7, 2023

The Zero Touch, On Demand Telco, and Beyond


PCCW Global's Console Connect is in many ways hard to describe. You might think of it as an automated platform for transactions that today might involve two or more enterprises, two or more data centers, or application providers and the entities using their applications.

You might call it an example of a platform business model, or a two-sided business model. You might think of it as a way of allowing entities to order up resources--bandwidth, compute cycles or apps--on demand. 
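
One way to picture "resources on demand" is an order object that software, rather than a sales process, validates and activates. The sketch below is purely illustrative and is not Console Connect's actual API.

```python
# Purely illustrative sketch of "resources on demand": an order for
# bandwidth between two endpoints, provisioned by software rather than
# a sales process. This is not Console Connect's actual API.
from dataclasses import dataclass

@dataclass
class ConnectionOrder:
    a_end: str          # e.g. an enterprise port in one data center
    z_end: str          # e.g. a cloud on-ramp in another
    bandwidth_mbps: int
    term_hours: int     # short terms are what "on demand" enables

def provision(order: ConnectionOrder) -> str:
    """Pretend zero-touch provisioning: validate and activate an order."""
    if order.bandwidth_mbps <= 0:
        raise ValueError("bandwidth must be positive")
    return (f"activated {order.bandwidth_mbps} Mbps from {order.a_end} "
            f"to {order.z_end} for {order.term_hours} hours")

print(provision(ConnectionOrder("dc-hongkong-01", "dc-london-02", 500, 72)))
```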

You might call it "internet on demand" for enterprises. Some might note that Console Connect is an example of a use case for distributed ledger (blockchain). Some might think of it as "infrastructure as a service." 

It is all of those things. And more, actually. It is a distribution channel for local access providers, data centers and app providers. It is an example of network effects, or communities of interest, not just a sales channel for connectivity, application or data center value propositions. 

It is more than that. 

Monday, February 6, 2023

Will ChatGPT-enhanced Bing be Able to Supply Something Like Footnotes? It Might Matter

ChatGPT as used by Bing might be able to cite sources used to generate the text, a potentially important detail for those hoping to use the generative text function in contexts where sources matter. 


Perhaps the search bar also will be replaced by a chat box.


source: Medium 


Some believe Bing’s version of ChatGPT, possibly branded as the new Bing, also will be able to use data collected after 2021, which ChatGPT so far has not been able to use.


Apparently, users will be able to use either traditional search or the AI-assisted version side by side, toggling between them. This will be a useful feature for people testing the accuracy and usefulness of generated results. 


The new Bing reportedly will also use a version of footnotes, showing the sources it used to generate a response to a query. 
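
The reports do not say how that would work. One common approach is to retrieve candidate sources first, generate the answer from them, and emit numbered citations. The sketch below stubs out both retrieval and generation purely for illustration; it is not a description of Bing's method.

```python
# Toy sketch of footnoted generation: retrieve sources first, answer
# from them, and emit numbered citations. Both retrieval and "generation"
# here are stubs for illustration; this is not how Bing actually works.
corpus = {
    "https://example.com/a": "The first transatlantic cable ran in 1858.",
    "https://example.com/b": "Modern subsea cables carry terabits per second.",
}

def retrieve(query: str) -> dict:
    """Stub retrieval: return documents sharing a word with the query."""
    words = set(query.lower().split())
    return {url: text for url, text in corpus.items()
            if words & set(text.lower().split())}

def answer_with_footnotes(query: str) -> str:
    """Compose an answer from retrieved text, with numbered footnotes."""
    sources = retrieve(query)
    body = " ".join(f"{text} [{i}]"
                    for i, text in enumerate(sources.values(), start=1))
    notes = "\n".join(f"[{i}] {url}"
                      for i, url in enumerate(sources, start=1))
    return f"{body}\n{notes}"

print(answer_with_footnotes("subsea cables"))
```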


That feature will be necessary--or helpful--for people whose work, or use of search, requires identifying sources (journalists, researchers, academics, students in many cases, writers). 


If the reports are correct, ChatGPT just got better, and more useful.


Saturday, February 4, 2023

Google Gets Ready to Launch its Own Generative Text Engine

Google is getting ready to release its own competitor to ChatGPT. There are many possible ramifications. Small startups now will face firms that are far better capitalized, with more scale and developed ecosystems of developers, users, customers and business partners.


As always is the case, that means many smaller firms will be acquired. Some will simply disappear. Since all generative language engines analyze lots of text, Google should have advantages. It indexes an awful lot of text. The adage that artificial intelligence models benefit from huge datasets is apt. 


Other questions are harder to answer and assess. In principle, generative language engines should find common use as an alternative to search engines. So it is not surprising that Google and other search providers will try to marry generative text capabilities to their existing search platforms.


Just how far the capability might spread in various industry verticals likewise is hard to assess, sometimes because there are legal issues around using the generated text. In other cases, such as health care, the generated text can help point users to other text that can be used in care, without creating ethical or other liabilities.


We might find that is the pattern for many other industries: generated text might be useful for high-level backgrounding but not useful as a tactical guide to problem identification or solutions. 


On the other hand, in contained use cases, such as customer service, AI-generated text might be quite useful as a substitute for human action. Answering generic questions for which there are structured answers seems easy. Generative text becomes less reliable as the range of possible answers, in context, widens.
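
A toy illustration of why the structured case is the easy one: questions with curated answers succeed, and everything outside the catalog falls back to a human rather than a guess.

```python
# Toy illustration of the structured-answer case: questions with curated
# answers are easy; anything outside the catalog falls back to a human.
faq = {
    "what are your hours": "We are open 9am to 5pm, Monday through Friday.",
    "how do i reset my password": "Use the 'forgot password' link on the login page.",
}

def respond(question: str) -> str:
    key = question.lower().strip("?! .")
    # Reliable when the answer space is structured and known in advance...
    if key in faq:
        return faq[key]
    # ...and unreliable outside it, so hand off rather than guess.
    return "Let me connect you with a human agent."

print(respond("What are your hours?"))
print(respond("Should we migrate our billing stack to the cloud?"))
```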


If you have used engines such as ChatGPT, you already know the current state of the art is that high-level summaries can be quite useful. But detailed, in-context tactical information is another matter. 


The other observation is that artificial intelligence now is more rapidly entering mainstream use after many decades of gestation. Valued precisely for allowing conversational answers to queries, generative text engines should quickly prove useful for queries that have structured answers.


But questions that embed value assumptions will always have multiple answers, depending on context and point of view. In such cases we might find generative text of some value, but without context on the training models or the degree of dispute about “facts,” we might not be able to “trust” the generated text.


Will AI Fuel a Huge "Services into Products" Shift?

As content streaming has disrupted music and is disrupting video and television, so might AI disrupt industry leaders ranging from ...