Wednesday, July 14, 2021

Actually, AT&T Did Quite Well in Content and Video Subscription Businesses

Many will criticize telco failures to "innovate." Many will pan diversification efforts such as that made by AT&T into content ownership and entertainment video services. By one reckoning, AT&T actually did quite well.


It took only a handful of attempts before AT&T emerged as a significant provider of video content, video subscriptions and internet access. Verizon, likewise, did not need many tries to create a role for itself in content and video. 


On June 24, 1998, AT&T announced it would acquire Tele-Communications Inc. (TCI) for $48 billion, marking a reentry by AT&T into the local access business it had been barred from since 1984. 


Having spent about two years amassing a position in local access using resold Bell telephone company lines, AT&T wanted a facilities-based approach, and believed it could transform the largely one-way cable TV plant into a full telecom platform. 


That move was but one among many made by large U.S. telcos since 1994 to diversify into cable TV, digital TV, satellite TV and fixed wireless, mostly with an eye to gaining share in broadband services of several types. 


By some accounts, TCI was at the time the second-largest U.S. cable TV provider by subscriber count, trailing only Time Warner; as I recall, TCI was in fact the largest. The 33 million figure cited for TCI at the time of the AT&T acquisition is likely a count of homes passed or product units rather than subscriber accounts. 


For example, in 2004, six years after the AT&T deal, Time Warner Cable had just 10.6 million subscribers. In 2000, by some estimates, Time Warner had about 13 million subscribers; that is undoubtedly an enumeration of “product units” rather than “accounts,” as Time Warner did not reach the 13 million account figure until about 2013, according to the NCTA.


Since the early 1990s, major telcos had been discussing--and making--acquisitions of cable TV assets. In 1993 TCI came close to selling itself to Bell Atlantic, a forerunner of Verizon. Cox Cable in 1994 discussed merging with Southwestern Bell, though the deal was not consummated. 


US West made its first cable TV acquisitions in 1994 as well. In 1995 several major U.S. telcos made acquisitions of fixed wireless companies, hoping to leverage that platform to enter the video entertainment business. Bell Atlantic Corp. and NYNEX Corp. invested $100 million in CAI Wireless Systems.


Pacific Telesis paid $175 million for Cross Country Wireless Cable in Riverside, Calif., and another $160 million to $175 million for MMDS channels owned by Transworld Holdings and Videotron in California and other locations. 


By 1996 the telcos had backed away from the fixed wireless platforms. In fact, U.S. telcos have quite a history of making big, splashy moves into alternative access platforms, video entertainment and other ventures, only to reverse course within a few years. 


But AT&T in 1996 made a $137 million investment in satellite TV provider DirecTV. 


Microsoft made an investment in Comcast in 1997, as firms in the access and software industries began to position for digital services including internet access, digital TV and voice. In 1998 Microsoft co-founder Paul Allen acquired Charter Communications and Marcus Cable Partners. 


Those efforts, collectively, fall well within the “one success in 10” rule of thumb for any single firm, and close to it for the entire industry. More significantly, the revenue generated by those efforts comes well within the “one in 100” rule of innovative success for “blockbuster” impact. 




What most will miss is the difficulty of making successful change in any organization on a routine basis. As a rule of thumb, only about one in 10 efforts at change will succeed. Quite often, only about one in 100 successful innovations is truly consequential for organizational performance.


source: Organizing4Innovation 
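
To make that arithmetic concrete, here is a minimal sketch in Python, assuming the "one in 10" and "one in 100" rules of thumb as the per-attempt success probabilities (the figures are illustrative rules of thumb, not empirical data):

```python
# Chance of at least one success in n independent attempts,
# using the "one in 10" rule of thumb (p = 0.10 per attempt).
def p_at_least_one(n: int, p: float = 0.10) -> float:
    return 1 - (1 - p) ** n

for n in (1, 5, 10, 20):
    print(f"{n:>2} attempts: {p_at_least_one(n):.0%} chance of at least one success")

# A "blockbuster" at one in 100 is far harder to reach:
print(f"20 attempts at p = 0.01: {p_at_least_one(20, 0.01):.0%}")
```

Even at one in 10, a firm needs roughly 20 attempts to push its odds of a single success toward 90 percent, which is the point of tolerating failure.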


That means we must tolerate a high rate of failure before we can hope for successful change. And we must fail quite a lot before we encounter a successful innovation with the power to change a company's or a whole industry's fortunes.


Of all the innovations connectivity providers have attempted--and been criticized for--how many have had industry-altering implications? Not many. Fixed network voice, mobile phones, internet access and possibly entertainment video subscriptions have been transformative.


Deregulation, privatization and competition have been historically transformative. But one might argue that was something that "happened to" the connectivity business, not necessarily an innovation of the industry itself.


Yes, we have seen many generations of business data networking services and business phone systems. But few have revenue magnitudes great enough to change the fortunes of whole firms or the industry. In 150 years, only mobility and internet access have had clear industry-altering implications.


We all are familiar (even when we do not know it) with the sigmoid curve, otherwise known as the S curve, which describes the normal adoption curve for any successful product. We are less familiar with the idea that most innovations fail, whether new products, new technologies, new information technologies or business strategies. 


S curves apply only to successful innovations.


Most new products simply fail. In such cases there is no S curve. The “bathtub curve” was developed to illustrate failure rates of equipment, but it applies to new product adoption as well. Only successful products make it to “useful life” (the ascending part of the S curve) and then “wearout” (the maturing top of the S curve before decline occurs). 


source: Reliability Analytics
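
As a minimal sketch of that idea, assuming the common textbook construction of a bathtub hazard as the sum of a decreasing (infant mortality), constant (useful life) and rising (wearout) failure rate; all parameter values below are purely illustrative:

```python
def weibull_hazard(t: float, shape: float, scale: float) -> float:
    """Weibull hazard rate: h(t) = (shape/scale) * (t/scale)**(shape - 1)."""
    return (shape / scale) * (t / scale) ** (shape - 1)

def bathtub_hazard(t: float) -> float:
    infant = weibull_hazard(t, shape=0.5, scale=10.0)   # decreasing early failures
    useful = 0.01                                       # constant random-failure floor
    wearout = weibull_hazard(t, shape=5.0, scale=80.0)  # rising late-life failures
    return infant + useful + wearout

# High failure rates early and late, with a low flat middle: the bathtub.
for t in (1, 10, 40, 70, 100):
    print(f"t = {t:>3}: hazard {bathtub_hazard(t):.4f}")
```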


Though nobody “likes” to fail, there is good reason for the advice one often hears to “speed up the rate of failure.” The advice is quite practical. 


Only about one in 10 innovations actually succeeds. Those of you who follow enterprise information technology projects will recognize the pattern: most efforts at IT change fail, in the sense of not achieving their objectives. 

source: Organizing4Innovation 


“We tried that” often is the observation made when something new is proposed. What almost always is ignored is the high rate of failure for proposed innovations. About nine out of 10 innovations will probably fail. Most of us are not geared to handle that high rate of failure. 


Unwillingness to make mistakes almost ensures that an entity will fail in its efforts to grow, innovate or even survive. 


Those of you who follow startup success will recognize the pattern as well: of 10 funded companies, only one will really be a wild success. Most startups do not survive.

 

source: Techcrunch 


Connectivity providers are not uniquely free from the low success rate of most innovations. Innovation is hard. Most often efforts at innovation will fail. Even smaller efforts will fail nine times out of 10. An industry-altering innovation might happen only once in 100 attempts.


The more failures, the greater the chances of eventual success. Many would consider telco initiatives in content and video subscriptions to have "failed." It is more accurate to call them an innovative success, given the relative handful of attempts to lead that business.


AT&T continues to own 70 percent of its former Time Warner content assets. It continues to benefit from the cash flow of DirecTV (about 71 percent ownership) and its fixed network video services. It continues to drive cash flow from HBO Max.


And all that was achieved with far fewer than 10 attempts. By standard metrics of innovation, that clearly beats the odds.


Why All Forecasts are Sigmoid Curves

STL Partners’ forecast for Open Radio Access Network investments--whether one agrees with the projections or not--does illustrate one principle: adoption of successful new technologies or products tends to follow the S curve growth model.


The S curve has proven to be among the most significant analytical concepts I have encountered over the years. It describes product life cycles, suggests how business strategy changes depending on where a product sits on any single S curve, and has implications for innovation and startup strategy as well. 


source: Semantic Scholar 


Some say S curves explain overall market development, customer adoption, product usage by individual customers, sales productivity, developer productivity and sometimes investor interest. The S curve often is used to describe adoption rates of new services and technologies, including the notion of non-linear change rates and inflection points in the adoption of consumer products and technologies.


In mathematics, the S curve is a sigmoid function. The Gompertz function, one such sigmoid, can be used to predict new technology adoption, and is related to the Bass diffusion model.


I’ve seen the Gompertz function used to describe adoption of internet access, fiber to the home and mobile phone usage. It is often used in economic modeling and management consulting as well.


Source: STL Partners
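
For the mathematically inclined, here is a minimal sketch of the two functional forms, with illustrative parameters rather than values fitted to any real adoption data:

```python
import math

def logistic(t: float, saturation: float = 1.0, rate: float = 0.5, midpoint: float = 10.0) -> float:
    """Symmetric S curve: slow start, inflection at the midpoint, then saturation."""
    return saturation / (1 + math.exp(-rate * (t - midpoint)))

def gompertz(t: float, saturation: float = 1.0, b: float = 5.0, c: float = 0.3) -> float:
    """Gompertz curve: an asymmetric sigmoid often fitted to technology adoption."""
    return saturation * math.exp(-b * math.exp(-c * t))

for t in range(0, 25, 4):
    print(f"t = {t:>2}: logistic {logistic(t):.2f}   gompertz {gompertz(t):.2f}")
```

Both start slowly, accelerate, then flatten toward saturation; the Gompertz version rises asymmetrically, which is one reason it is often preferred for adoption forecasting.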


The following graph illustrates the normal S curve of consumer or business adoption of virtually any successful product, as well as the need to create the next generation of product before the legacy product reaches its peak and begins its decline. 


The graph shows the maturation of older mobile generations (2G, 3G) in red, with adoption of 4G in blue. The maturing products sit at the top of the S curve (maturation and decline), while 4G occupies the lower part of the S curve, where a product is gaining traction. 


The curves show that 4G is created and commercialized before 3G reaches its peak and declines, as the new product displaces demand for the old. 

source: GSA


Another key principle is that successive S curves are the pattern. A firm or an industry has to begin work on the next generation of products while existing products are still near peak levels. 


source: Strategic Thinker
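
A minimal sketch of that successive-curve pattern, modeling each generation's installed-base share as its own logistic adoption minus what the next generation has displaced; the midpoints are illustrative, not actual launch dates:

```python
import math

def logistic(t: float, rate: float = 0.8, midpoint: float = 0.0) -> float:
    return 1 / (1 + math.exp(-rate * (t - midpoint)))

# Staggered midpoints: each generation launches before the prior one peaks.
midpoints = {"2G": 4, "3G": 11, "4G": 18}  # illustrative, not real launch years

def share(t: float, gen: str) -> float:
    """A generation's installed-base share: its own adoption curve,
    minus whatever the next generation has already displaced."""
    gens = list(midpoints)
    i = gens.index(gen)
    own = logistic(t, midpoint=midpoints[gen])
    displaced = logistic(t, midpoint=midpoints[gens[i + 1]]) if i + 1 < len(gens) else 0.0
    return own - displaced

for t in range(0, 25, 4):
    print(f"t = {t:>2}: " + "  ".join(f"{g} {share(t, g):.2f}" for g in midpoints))
```

Printed over time, each generation rises, peaks and then declines as the next one overtakes it, which is exactly the overlapping pattern in the GSA graph above.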


It also can take decades before a successful innovation actually reaches commercialization. The next big thing will have first been talked about roughly 30 years earlier, says technologist Greg Satell. IBM’s Arthur Samuel coined the term “machine learning” in 1959, for example.


The S curve describes the way new technologies are adopted. It is related to the product life cycle. Many times, reaping the full benefits of a major new technology can take 20 to 30 years. Alexander Fleming discovered penicillin in 1928, but it did not arrive on the market until 1945, nearly 20 years later.


Electricity did not have a measurable impact on the economy until the early 1920s, some 40 years after Edison’s first power plant, it can be argued.


It was not until the late 1990s, about 30 years after 1968, that computers had a measurable effect on the U.S. economy, many would note.



source: Wikipedia


The point is that the next big thing will turn out to be an idea first broached decades ago, even if it has not been possible to commercialize that idea. 


The even-bigger idea is that all firms and industries must work to create the next generation of products before the existing products reach saturation. That is why work already has begun on 6G, even as 5G is just being commercialized. Generally, the next-generation mobile network is introduced every decade. 


source: Innospective


There are other useful predictions one can make when using S curves. Suppliers in new markets often want to know “when” an innovation will “cross the chasm” and be adopted by the mass market. The S curve helps there as well. 


Innovations reach an adoption inflection point at around 10 percent. For those of you familiar with the notion of “crossing the chasm,” the inflection point happens when “early adopters” drive the market. 

source 
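
Under a logistic model this timing can be solved in closed form: adoption reaches fraction p of saturation at t = midpoint + ln(p / (1 - p)) / rate. A minimal sketch, again with illustrative parameters:

```python
import math

def time_to_reach(p: float, rate: float = 0.5, midpoint: float = 10.0) -> float:
    """Invert the logistic: the time at which adoption hits fraction p of saturation."""
    return midpoint + math.log(p / (1 - p)) / rate

for p in (0.10, 0.16, 0.50, 0.84):
    print(f"{p:.0%} of saturation reached at t = {time_to_reach(p):.1f}")
# 10 percent arrives years before the 50 percent midpoint, which is why
# early-adopter signals show up long before the mass market does.
```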


It is worth noting that not every innovation succeeds. Perhaps most innovations and products aimed at consumers fail, in which case there is no S curve, only a decline curve. 


source: Thoughtworks 


The consumer product adoption curve also maps onto the S curve: early adopters buy at the point just before mass market adoption starts. 


source: Advisor Perspectives 


Also, keep in mind that S curves apply only to successful innovations. Most new products simply fail. In such cases there is no S curve. The “bathtub curve” was developed to illustrate failure rates of equipment, but it applies to new product adoption as well. Only successful products make it to “useful life” (the ascending part of the S curve) and then “wearout” (the maturing top of the S curve before decline occurs). 


Tuesday, July 13, 2021

State of the Internet


Lots of stats. 

Monday, July 12, 2021

What Drives Telco Growth?

Connectivity provider revenue growth mostly comes from acquisitions and mergers, despite some thinking that organic growth is the key. 


As fast as T-Mobile has been growing in the U.S. market, its merger with Sprint has driven the biggest change in revenue and market share over the last decade. 


source: UBS 


source: Seeking Alpha 


As important as operational excellence might be, mobile operator executives, for example, believe mergers, acquisitions and alliances will drive most of the growth gains. 

source: KPMG


Historically, tier-one service providers have arguably obtained most of their growth from acquisitions, not organic growth. That makes sense in a business where organic growth is one to two percent per year. 


source: Deloitte 
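
The underlying arithmetic is easy to sketch; the growth rates and deal size below are assumptions for illustration, not actual figures:

```python
# A decade of organic growth at telco-typical rates versus one scale acquisition.
base_revenue = 100.0                          # arbitrary units

organic_low = base_revenue * 1.015 ** 10      # 1.5% per year for 10 years: ~116
organic_high = base_revenue * 1.02 ** 10      # 2.0% per year for 10 years: ~122
merger = base_revenue + 60.0                  # buying a rival 60% of one's size: 160

print(f"10 years at 1.5%: {organic_low:.1f}")
print(f"10 years at 2.0%: {organic_high:.1f}")
print(f"single acquisition: {merger:.1f}")
```

A decade of compounding at one to two percent adds perhaps 16 to 22 percent to revenue; a single scale acquisition can add several times that in one step.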


Increasing scale is one way European telcos see revenue growth, notes McKinsey. Still, organic growth is propelled by changes in buyer demand (shift to internet access, for example) and operating performance.  Shifts of market share are less likely to drive growth. 


Organic growth is tougher when markets are saturated, and most connectivity markets are reaching saturation.


The point is that focusing on the core connectivity business makes sense, as that drives the bulk of total revenues. The problem is that no matter how well a competent connectivity provider does, connectivity services alone will not drive much revenue growth, which happens at about the rate of inflation. 


Market share shifts sometimes are possible, but mergers to boost scale have been significant drivers of revenue growth. In some cases, they have provided most of the revenue growth.


60% to 74% of Business Tech Buyers Narrow Choices Before They Interact With Sales

More than 70 percent of the buyer’s journey takes place before a sales engagement, Spiceworks says. Just as significantly, 59 percent of all cloud computing decision makers said they do the majority of their technology purchase research online, without speaking to a salesperson. 


Most information technology decision makers (64 percent) prefer to do the majority of their tech purchase research online without speaking to a salesperson, says Spiceworks. Business decision makers are more willing to speak to a human: 59 percent would provide their name, email and phone number to view interesting, gated content.


Small business IT professionals are even more committed to “no sales contact” during research. Nearly three quarters--74 percent--do their research without any contact with supplier salespeople. 


Decision makers in small businesses show a stronger preference for email (69 percent) compared to those in large businesses (57 percent). At the same time, decision makers in larger businesses show a greater preference than their counterparts in small businesses for social networks (24 percent vs. 13 percent), text messages (28 percent vs. 14 percent), and online banners (15 percent vs 2 percent).

IT Buyers' Preferred Mediums for Engaging with Tech Vendors

source: Spiceworks


Sunday, July 11, 2021

Hawthorne and Pearson on Productivity

Some of us seriously doubt we can deduce anything at all about short-term changes in productivity for work at home in most cases of knowledge or office work. The reasons are Pearson’s Law and the Hawthorne Effect. 


Pearson's Law states that “when performance is measured, performance improves. When performance is measured and reported back, the rate of improvement accelerates.” In other words, productivity metrics improve when people know they are being measured, and even more when people know the results are reported to managers. 


In other words, “what you measure will improve,” at least in the short term. It is impossible to know whether productivity--assuming you can measure it--actually will remain better over time, once the near term tests subside. 


What we almost certainly are seeing, short term, is a performance effect, as Pearson’s Law suggests. 


We also know that performance improves when people are watched. That is known as the Hawthorne Effect.    


In other words, much of what we think we see in terms of productivity is a measurement effect. There is no way to know what might happen when the short-term measurement effect wears off. 


source: Market Business News

Pearson's Law--Not Productivity Improvements--Is What We Now See from "Work From Home"

Some of us seriously doubt we can deduce anything at all about short-term changes in productivity for work at home in most cases of knowledge or office work. The reason is Pearson’s Law.



Pearson's Law states that “when performance is measured, performance improves. When performance is measured and reported back, the rate of improvement accelerates.” In other words, productivity metrics improve when people know they are being measured, and even more when people know the results are reported to managers. 


In other words, “what you measure will improve,” at least in the short term. It is impossible to know whether productivity--assuming you can measure it--actually will remain better over time, once the near term tests subside. 


What we almost certainly are seeing, short term, is a performance effect, as Pearson’s Law suggests. 


The exceptions include call center productivity, which is easier to quantify in terms of output. 


Many argue, and some studies maintain, that remote work at scale did boost productivity. One might counter that we actually have no idea, most of the time. 


That workers say they are more productive is not to say they actually are more productive. 


Also, worker satisfaction is not the same thing as productivity. Happy workers can be less productive; unhappy workers can be more productive. All too often, this is an apples-to-oranges comparison.  


With the caveat that subjective user reports are one thing and measurable results another, we likely must discount all self-reports, whether they suggest higher, the same or lower productivity. 


The other issue is the difficulty of measuring knowledge work or office work productivity. Call center activities are among the easiest to measure quantitatively, and there is evidence that remote call center workers are, indeed, more productive. Whatever their case quotas, call center workers tend to finish them faster when working at home. 


There is some evidence that work from home productivity actually is lower than in-office work. In principle--and assuming one can measure it--productivity increases as output is boosted using the same or fewer inputs. 


An initiative in Iceland, which has notably low productivity, suggests that service productivity by units of government does not suffer when working hours are reduced, at least over the short term. Among the issues--aside from whether we can actually measure productivity in the studied settings--is Pearson’s Law at work. 


To be sure, service sector productivity is devilishly hard to measure, if it can be measured at all. It is hard to measure intangibles. And there is some evidence that satisfaction with public sector services is lower than private services, and substantially lower for many types of government services. 


Productivity is measured in terms of producer efficiency or effectiveness, not buyer or user perception of value. But it is hard to argue that the low perceived quality of government services is unrelated to “productivity.” 


source: McKinsey 


And what can be measured might not be very significant. Non-manufacturing productivity, for example, can be quite low, in comparison to manufacturing levels. 


And there are substantial differences between “services” delivered by private firms--such as airline travel or communications-- and those delivered by government, such as education, or government itself. 

 

The study argues that reductions in work hours per week of up to 12.5 percent had no negative impact on productivity. Methodology always matters, though. 


The studies relied on group interviews--and therefore user self reports--as well as some quantitative inputs such as use of overtime. There is some evidence of how productivity (output) remained the same as hours worked were reduced. 


For public service agencies, shorter working time “maintained or increased productivity and service provision,” the report argues. 


There is perhaps ambiguous quantitative evidence in the report of what was measured or how it was measured. The report says “shifts started slightly later and/or ended earlier.” To the extent that productivity (output) in any services context is affected directly by availability, the key would be the ability to maintain public-facing availability. The report suggests this happened. 


But the report says “offices with regular opening hours closed earlier.” Some might question whether this represents the “same” productivity. Likewise, “in a police station, hours for investigative officers were shortened every other week.” Again, these arguably are input measures, not output measures. 


So long as the defined output levels were maintained, the argument can be made that productivity did not drop, or might formally have increased (same output, fewer inputs). In principle, at least over the short term, it should be possible to maintain public-facing output while reducing working hours. Whether that is sustainable long term might be a different question. 
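
The in-principle arithmetic is simple enough to sketch, using the study's roughly 12.5 percent hours reduction and assuming, purely for illustration, constant output:

```python
# Productivity = output / input. Hold output constant, cut input hours 12.5%.
output = 100.0                             # illustrative units of service output per week
hours_before = 40.0
hours_after = hours_before * (1 - 0.125)   # the ~12.5% reduction studied

prod_before = output / hours_before
prod_after = output / hours_after          # same output, fewer input hours

print(f"before: {prod_before:.2f} units/hour")
print(f"after:  {prod_after:.2f} units/hour ({prod_after / prod_before - 1:.1%} gain)")
# Same output over 35 rather than 40 hours: about a 14 percent measured gain.
```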


The report says organizations shortened meetings, cut out unnecessary tasks, and reorganized shift arrangements to maintain expected service levels. Some measures studied were the number of open cases, the percentage of answered calls or the number of invoices entered into the accounting system. 


In other cases the test seemed to have no impact on matters such as traffic tickets issued, marriage and birth licenses processed, call waiting times or cases prosecuted, for example. Some will say that is precisely the point: output did not change as hours were reduced. 


Virtually all the qualitative reports are about employee benefits such as better work-life balance, though, not output metrics.


And Pearson’s Law still is at work.

