Wednesday, July 14, 2021

Why All Forecasts are Sigmoid Curves

STL Partners’ forecast for Open Radio Access Network investments--whether one agrees with the projections or not--does illustrate one principle: adoption of successful new technologies or products tends to follow the S curve growth model.


The S curve has proven to be among the most significant analytical concepts I have encountered over the years. It describes product life cycles, suggests how business strategy changes depending on where on any single S curve a product happens to be, and has implications for innovation and start-up strategy as well. 


source: Semantic Scholar 


Some say S curves explain overall market development, customer adoption, product usage by individual customers, sales productivity, developer productivity and sometimes investor interest. The S curve often is used to describe adoption rates of new services and technologies, including the notion of non-linear change rates and inflection points in the adoption of consumer products and technologies.


In mathematics, the S curve is a sigmoid function. It is the basis for the Gompertz function, which can be used to predict new technology adoption, and is related to the Bass Model.
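
For concreteness, here is a minimal sketch of both curves; the saturation level, growth rates and midpoints are invented purely for illustration:

```python
import numpy as np

def logistic(t, K=100.0, r=0.8, t0=5.0):
    """Logistic sigmoid: a symmetric S curve saturating at K,
    with its inflection point at the midpoint t0 (adoption K/2)."""
    return K / (1.0 + np.exp(-r * (t - t0)))

def gompertz(t, K=100.0, b=5.0, c=0.6):
    """Gompertz curve: an asymmetric S curve whose inflection
    comes earlier, at adoption K/e (about 37 percent of K)."""
    return K * np.exp(-b * np.exp(-c * t))

t = np.linspace(0, 15, 16)
print(np.round(logistic(t), 1))
print(np.round(gompertz(t), 1))
```

The Bass Model likewise produces a cumulative S shape, but derives it from separate coefficients for innovators and imitators.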


I’ve seen the Gompertz function used to describe the adoption of internet access, fiber to the home and mobile phone usage. It is often used in economic modeling and management consulting as well.


Source: STL Partners


The following graph illustrates the normal S curve of consumer or business adoption of virtually any successful product, as well as the need to create the next generation of product before the legacy product reaches its peak and then begins its decline. 


The graph shows the maturation of older mobile generations (2G, 3G) in red, with adoption of 4G in blue. What one sees is that the maturing products are at the top of the S curve (maturation and decline), while 4G represents the lower part of the S curve, when a product is gaining traction. 


The curves show that 4G is created and commercialized before 3G reaches its peak and begins to decline, as the new product displaces demand for the old. 

source: GSA


Another key principle is that successive S curves are the pattern. A firm or an industry has to begin work on the next generation of products while existing products are still near peak levels. 


source: Strategic Thinker
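
A toy numerical sketch of that succession, with invented dates and rates: the incumbent generation's share can be modeled as its own S curve minus whatever the successor has already displaced.

```python
import numpy as np

def logistic(t, K=1.0, r=0.7, t0=0.0):
    """Fraction of the saturation level K reached at time t."""
    return K / (1.0 + np.exp(-r * (t - t0)))

years = np.arange(2004, 2025)
next_gen = logistic(years, t0=2016)            # successor (think 4G) ramping up
legacy = logistic(years, t0=2009) - next_gen   # incumbent (think 3G) rises, peaks, declines
legacy = np.clip(legacy, 0.0, None)            # a share cannot go negative

for year, old, new in zip(years, legacy, next_gen):
    print(year, f"{old:.2f}", f"{new:.2f}")
```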


It also can take decades before a successful innovation actually reaches commercialization. The next big thing will have first been talked about roughly 30 years ago, says technologist Greg Satell. IBM researcher Arthur Samuel coined the term machine learning in 1959, for example.


The S curve describes the way new technologies are adopted. It is related to the product life cycle. Many times, reaping the full benefits of a major new technology can take 20 to 30 years. Alexander Fleming discovered penicillin in 1928, but it did not arrive on the market until 1945, nearly 20 years later.


It can be argued that electricity did not have a measurable impact on the economy until the early 1920s, some 40 years after Edison’s first power plant.


Many would note that it wasn’t until the late 1990s, about 30 years after 1968, that computers had a measurable effect on the US economy.



source: Wikipedia


The point is that the next big thing will turn out to be an idea first broached decades ago, even if it has not been possible to commercialize that idea. 


The even-bigger idea is that all firms and industries must work to create the next generation of products before the existing products reach saturation. That is why work already has begun on 6G, even as 5G is just being commercialized. Generally, the next-generation mobile network is introduced every decade. 


source: Innospective


There are other useful predictions one can make when using S curves. Suppliers in new markets often want to know “when” an innovation will “cross the chasm” and be adopted by the mass market. The S curve helps there as well. 


Innovations reach an adoption inflection point at around 10 percent adoption. For those of you familiar with the notion of “crossing the chasm,” the inflection point happens when “early adopters” drive the market. 

source 
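
If adoption follows a logistic curve, the date it crosses that 10 percent threshold can be solved in closed form. A minimal sketch, with assumed, purely illustrative parameters:

```python
import math

def takeoff_year(t0, r, share=0.10):
    """Year a logistic adoption curve crosses `share` of saturation.

    Solving K / (1 + exp(-r * (t - t0))) = share * K for t gives
    t = t0 - ln(1/share - 1) / r, where t0 is the 50 percent
    midpoint and r is the annual growth rate.
    """
    return t0 - math.log(1.0 / share - 1.0) / r

# Assumed parameters: adoption midpoint in 2015, growth rate 0.5 per year.
print(takeoff_year(2015, 0.5))  # -> ~2010.6, about 4.4 years before the midpoint
```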


It is worth noting that not every innovation succeeds. Perhaps most innovations and products aimed at consumers fail, in which case there is no S curve, only a decline curve. 


source: Thoughtworks 


The consumer product adoption curve and the S curve also are related: the inflection point comes when early adopters already are buying, but before mass market adoption starts. 


source: Advisor Perspectives 


Also, keep in mind that S curves apply only to successful innovations; most new products simply fail, and in such cases there is no S curve. The “bathtub curve” was developed to illustrate failure rates of equipment, but it applies to new product adoption as well. Only successful products make it to “useful life” (the ascending part of the S curve) and then “wearout” (the maturing top of the S curve before decline occurs). 
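
A minimal numerical sketch of the bathtub shape, using the standard reliability decomposition (a decaying infant-mortality term, a flat useful-life floor and a rising wearout term; all parameters invented):

```python
import numpy as np

def bathtub(t, infant=0.5, decay=1.5, base=0.05, wear=1e-4, wear_rate=0.6):
    """Failure rate over time: early failures decay toward a flat
    'useful life' floor, then wearout failures rise at end of life."""
    return infant * np.exp(-decay * t) + base + wear * np.exp(wear_rate * t)

t = np.linspace(0, 10, 11)
print(np.round(bathtub(t), 3))  # high, then flat, then rising: the bathtub
```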


Tuesday, July 13, 2021

State of the Internet


Lots of stats. 

Monday, July 12, 2021

What Drives Telco Growth?

Connectivity provider revenue growth mostly comes from acquisitions and mergers, despite some thinking that organic growth is the key. 


As fast as T-Mobile has been growing in the U.S. market, its merger with Sprint has driven the biggest change in revenue and market share over the last decade. 


source: UBS 


source: Seeking Alpha 


As important as operational excellence might be, mobile operator executives, for example, believe mergers, acquisitions and alliances will drive most of the growth gains. 

source: KPMG


Historically, tier-one service providers have arguably obtained most of their growth from acquisitions, not organic growth. That makes sense in a business where organic growth is one to two percent per year. 


source: Deloitte 


Increasing scale is one way European telcos see revenue growth, notes McKinsey. Still, organic growth is propelled by changes in buyer demand (shift to internet access, for example) and operating performance. Shifts of market share are less likely to drive growth. 


Organic growth is tougher when markets are saturated, and most connectivity markets are reaching saturation.


The point is that focusing on the core connectivity business makes sense, as that drives the bulk of total revenues. The problem is that no matter how well a competent connectivity provider does, connectivity services alone will not drive much revenue growth, which happens at about the rate of inflation. 


Market share shifts sometimes are possible, but mergers to boost scale have been significant drivers of revenue growth. In some cases, they have provided most of the revenue growth.


60% to 74% of Business Tech Buyers Narrow Choices Before They Interact With Sales

More than 70 percent of the buyer’s journey takes place before a sales engagement, Spiceworks says. Just as significantly, 59 percent of all cloud computing decision makers said they do the majority of their technology purchase research online, without speaking to a salesperson. 


Most information technology decision makers (64 percent) prefer to do the majority of their tech purchase research online without speaking to a salesperson, says Spiceworks. Business decision makers are more willing to speak to a human: 59 percent would provide their name, email, and phone number to view interesting, gated content.


Small business IT professionals are even more committed to “no sales contact” during research. Nearly three quarters--74 percent--do their research without any contact with supplier salespeople. 


Decision makers in small businesses show a stronger preference for email (69 percent) compared to those in large businesses (57 percent). At the same time, decision makers in larger businesses show a greater preference than their counterparts in small businesses for social networks (24 percent vs. 13 percent), text messages (28 percent vs. 14 percent), and online banners (15 percent vs. 2 percent).

IT Buyers' Preferred Mediums for Engaging with Tech Vendors

source: Spiceworks


Sunday, July 11, 2021

Hawthorne and Pearson on Productivity

Some of us seriously doubt we can deduce anything at all about short-term changes in productivity for work at home in most cases of knowledge or office work. The reason is Pearson’s Law and the Hawthorne Effect. 


Pearson's Law states that “when performance is measured, performance improves. When performance is measured and reported back, the rate of improvement accelerates.” In other words, productivity metrics improve when people know they are being measured, and even more when people know the results are reported to managers. 


In other words, “what you measure will improve,” at least in the short term. It is impossible to know whether productivity--assuming you can measure it--actually will remain better over time, once the near-term tests subside. 


What we almost certainly are seeing, short term, is a performance effect, as Pearson’s Law suggests. 


We also know that performance improves when people are watched. That is known as the Hawthorne Effect.    


In other words, much of what we think we see in terms of productivity is a measurement effect. There is no way to know what might happen when the short-term measurement effect wears off. 


source: Market Business News

Pearson's Law--Not Productivity Improvements--Is What We Now See from "Work From Home"

Some of us seriously doubt we can deduce anything at all about short-term changes in productivity for work at home in most cases of knowledge or office work. The reason is Pearson’s Law.



Pearson's Law states that “when performance is measured, performance improves. When performance is measured and reported back, the rate of improvement accelerates.” In other words, productivity metrics improve when people know they are being measured, and even more when people know the results are reported to managers. 


In other words, “what you measure will improve,” at least in the short term. It is impossible to know whether productivity--assuming you can measure it--actually will remain better over time, once the near-term tests subside. 


What we almost certainly are seeing, short term, is a performance effect, as Pearson’s Law suggests. 


The exceptions include call center productivity, which is easier to quantify in terms of output. 


Many argue, and some studies maintain, that remote work at scale did boost productivity. One might argue we actually have no idea, most of the time. 


That workers say they are more productive is not to say they actually are more productive. 


Also, worker satisfaction is not the same thing as productivity. Happy workers can be less productive; unhappy workers can be more productive. This is an apples-to-oranges comparison, in all too many cases.  


With the caveat that subjective user reports are one thing and measurable results another, we likely must discount all self-reports, whether they suggest higher, the same or lower productivity. 


The other issue is the difficulty of measuring knowledge work or office work productivity. Call center activities are among the easiest to measure quantitatively, and there is evidence that remote call center workers are, indeed, more productive. Whatever the case, call center workers tend to finish their quotas faster when working at home. 


There is some evidence that work-from-home productivity actually is lower than in-office productivity. In principle--and assuming one can measure it--productivity increases as output is boosted using the same or fewer inputs. 


An initiative in Iceland, which has notably low productivity, suggests that service productivity by units of government does not suffer when working hours are reduced, at least over the short term. Among the issues--aside from whether we can actually measure productivity in the studied settings--is Pearson’s Law at work. 


To be sure, service sector productivity is devilishly hard to measure, if it can be measured at all. It is hard to measure intangibles. And there is some evidence that satisfaction with public sector services is lower than with private services, and substantially lower for many types of government services. 


Productivity is measured in terms of producer efficiency or effectiveness, not buyer or user perception of value. But it is hard to argue that the low perceived quality of government services is unrelated to “productivity.” 


source: McKinsey 


And what can be measured might not be very significant. Non-manufacturing productivity, for example, can be quite low, in comparison to manufacturing levels. 


And there are substantial differences between “services” delivered by private firms--such as airline travel or communications--and those delivered by government, such as education, or government itself. 

 

The study argues that reductions in work hours per week of up to 12.5 percent had no negative impact on productivity. Methodology always matters, though. 


The studies relied on group interviews--and therefore user self-reports--as well as some quantitative inputs, such as use of overtime. There is some evidence that productivity (output) remained the same as hours worked were reduced. 


For public service agencies, shorter working time “maintained or increased productivity and service provision,” the report argues. 


The quantitative evidence in the report about what was measured, and how, is perhaps ambiguous. The report says “shifts started slightly later and/or ended earlier.” To the extent that productivity (output) in any services context is affected directly by availability, the key would be the ability to maintain public-facing availability. The report suggests this happened. 


But the report says “offices with regular opening hours closed earlier.” Some might question whether this represents the “same” productivity. Likewise, “in a police station, hours for investigative officers were shortened every other week.” Again, these arguably are input measures, not output measures. 


So long as the defined output levels were maintained, the argument can be made that productivity did not drop, or might formally have increased (same output, fewer inputs). In principle, at least over the short term, it should be possible to maintain public-facing output while reducing working hours. Whether that is sustainable long term might be a different question. 
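
The arithmetic is worth making explicit. If productivity is output divided by hours worked, holding output constant while cutting hours by the 12.5 percent studied implies a measured productivity gain of about 14 percent:

```python
output = 100.0                     # output held constant, arbitrary units
hours_before = 40.0
hours_after = 40.0 * (1 - 0.125)   # the 12.5 percent reduction studied

gain = (output / hours_after) / (output / hours_before) - 1
print(f"{gain:.1%}")               # -> 14.3%
```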


The report says organizations shortened meetings, cut out unnecessary tasks, and reorganized shift arrangements to maintain expected service levels. Some measures studied were the number of open cases, the percentage of answered calls or the number of invoices entered into the accounting system. 


In other cases the test seemed to have no impact on matters such as traffic tickets issued, marriage and birth licenses processed, call waiting times or cases prosecuted, for example. Some will say that is precisely the point: output did not change as hours were reduced. 


Virtually all the qualitative reports are about employee benefits such as better work-life balance, though, not output metrics.


And Pearson’s Law still is at work.


Connectivity Networks are Becoming Computer Networks: What That Could Mean

“5G represents a paradigm shift, where the telecom industry is now taking substantial steps towards using the same building blocks as the IT industry,” says Ericsson. That is another way of saying telecom networks are becoming computer networks. 


And as networking is organized using a layered model, so too might all business processes be layered. 


source: Lifewire 


Think of the analogy to the ways all lawful applications run on IP networks: they use access supplied by third parties, with no required business relationship between the app providers and the access providers. 


To be sure, one entity might own both the transport network and the app. Google owns YouTube, Google search and Google Maps, which in part are transported over Google’s own global IP network. But common ownership is not required.


In the same way, telcos and cable TV companies own some lead apps, and also own access networks. But the relationship is not mandatory. They could own apps as well as networks, and those apps could be delivered over third-party networks as well as their own. 

source: Ashridge on Operating Models


The point is that business operations are supported as layers on top of transport network layers. But those business and transport functions are logically separated. Ownership also is logically separated. 


In the future, that might allow different ways of structuring connectivity provider operations. In a sense, the way Comcast operates its theme parks, content studios and programming networks separately from its access networks provides an example. 


Each of those businesses runs independently of the access networks, though all have common ownership. 


source: Illinois Department of Innovation and Technology  


All that might have profound implications for the ways tier-one connectivity providers run their businesses. Connectivity providers run networks to support their core revenue-generating applications: broadband access, voice, business networks and content. 


As a practical matter, the network-operating functions increasingly are logically distinct from the application functions, as a Wi-Fi network is distinct from the apps using it. Perhaps the layers are not quite as distinct as they would be at Google or Facebook, where the app creation and business functions are logically distinct from the ownership and operation of core networks. 


But the principles are the same: all modern computer networks are based on separation of functions: logical functions separated from physical; higher layers isolated from lower layers; applications separated from networks. 
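
A hypothetical sketch of that separation in code: the application is written against an abstract transport interface, so any network implementing the interface can carry it, and app ownership stays logically independent of network ownership. The class names are invented for illustration.

```python
from typing import Protocol

class Transport(Protocol):
    """Lower layer: any access network able to move bytes."""
    def send(self, payload: bytes) -> None: ...

class TelcoAccessNetwork:
    def send(self, payload: bytes) -> None:
        print(f"telco network carried {len(payload)} bytes")

class CableAccessNetwork:
    def send(self, payload: bytes) -> None:
        print(f"cable network carried {len(payload)} bytes")

class VideoApp:
    """Higher layer: a revenue-generating app bound to the Transport
    interface, not to any one owner's network."""
    def __init__(self, transport: Transport) -> None:
        self.transport = transport

    def stream(self) -> None:
        self.transport.send(b"video frames")

# The same app runs unchanged over either owner's network.
VideoApp(TelcoAccessNetwork()).stream()
VideoApp(CableAccessNetwork()).stream()
```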


The obvious implication is that, over time, connectivity operations will more closely mirror the way all other networks work: transport functions separated from application functions; network functions logically separated from application, use case and revenue models. 


Historically, connectivity providers have bundled their core app or apps with construction and use of the network. In the future, as computer networks, those relationships could change. 


Already, any broadband access network allows lawful apps to be run on the connectivity network, with no business relationship between app owner and network owner. In the future, that might be further developed.  


The perhaps-obvious line of development is to further isolate business operations from the network, as Google’s YouTube, search, messaging, maps, Android and other business units are separated from the operation of Google’s own network. 


source: CB Insights


Assume a future where whole businesses (Google Maps, search, Android, Nest, Chromebook; Verizon mobility, voice, internet access, enterprise and business operations) are run independently of the transport and access networks. 


“Networks” are a service provided to the businesses, not a direct revenue generator. That is precisely how current telco or cable operations are structured already. Revenue is generated by services and apps sold to customers. The network exists only to facilitate the creation and sale of those apps. 


In principle, there no longer is any reason why applications and services need to be--or should be--developed or created to run solely on “my” networks. The bigger opportunity is to own apps and services that run on anybody’s network. 


Few would consider it “better” to create internet of services apps, platforms or services that only work on a single access provider’s network. It clearly is “better” if the platform, apps and services run on any access network, anywhere. 


But that requires a change not only of mindset but of business strategy. Today, most effort is spent trying to create value for things done “on my network.” In the future, some might do better creating value for apps, services and platforms that work anywhere, on any network. 


That assumes the continued existence of multiple competitors able to pursue such strategies. If competition is not the future connectivity framework, few if any access and transport providers will be allowed to spend much energy developing platforms, services or apps that run anywhere, on any network.


Instead, effort will revert to pre-competitive, monopoly objectives: just create and operate a competent access network.

