Sunday, November 8, 2020

Irresistible Storylines That Always Are Wrong

Some storylines are irresistible. Slow U.S. 5G speeds provide an example. A classic storyline about U.S. telecommunications is “U.S. is behind.”


Author Steven Pressfield, in his book Nobody Wants to Read Your Sh*t, points out the elements of any story. These universal principles of storytelling include:

1) Every story must have a concept. It must put a unique and original spin, twist or framing device upon the material.

2) Every story must be about something. It must have a theme.

3) Every story must have a beginning, a middle, and an end. Act One, Act Two, Act Three.

4) Every story must have a hero.

5) Every story must have a villain.

6) Every story must start with an Inciting Incident, embedded within which is the story’s climax.

7) Every story must escalate through Act Two in terms of energy, stakes, complication and significance/meaning as it progresses.

8) Every story must build to a climax centered around a clash between the hero and the villain that pays off everything that came before and that pays it off on-theme.


That is a framework often used when writers talk about the state of U.S. telecommunications. U.S. 5G speeds are slow, compared to most other markets. There are reasons. U.S. service providers are relying on low-band spectrum for coverage, and that necessarily limits speeds. Most of the leading U.S. mobile operators, with the exception of T-Mobile, have little mid-band spectrum, which is the preferred band globally.


So U.S. mobile speeds are slow, and have been relatively slow, even for 4G services. 


That is a necessary evil at the moment, as little unencumbered mid-band spectrum is available in the U.S. market, though that will change as more mid-band spectrum is reallocated for mobile use. 


But the “U.S. is behind” storyline has been used often over the last several decades. Indeed, when it comes to plain old voice service, the “U.S. is falling behind” meme never went away.


In the past, it has been argued that the United States was behind, or falling behind, in mobile phone adoption, smartphone adoption, text messaging, broadband coverage, fiber to the home, broadband speed or broadband price.


In the case of mobile phone usage, smartphone usage, text message usage, broadband coverage or speed, as well as broadband prices, the “behind” storyline has proven incorrect, over time. 


Some even have argued the United States was falling behind in spectrum auctions. That clearly also has proven wrong. What such observations often miss is a highly dynamic environment, where apparent U.S. lags quickly are closed.


To be sure, adoption rates have sometimes lagged other regions. Some storylines are repeated so often they seem true, and lagging statistics often are “true,” early on. The story which never seems to be written is that there is a pattern here: early slowness is overcome; performance metrics eventually climb; availability, price and performance gaps are closed over time. 


The early storylines often are correct, as far as they go. That U.S. internet access is slow and expensive, or that internet service providers have not managed to make gigabit speeds available on a widespread basis, can be correct for a time. Those storylines rarely--if ever--hold up long term. U.S. gigabit coverage now is about 80 percent, for example. 


Other statements, such as the claim that U.S. internet access prices or mobile prices are high, are not made in context, or qualified and adjusted for currency, local prices and incomes or other relevant inputs, including the comparison methodology itself. 


Both U.S. fixed network internet prices and U.S. mobile costs have dropped since 2000, for example. 


The point is that the “U.S. is behind” storyline seems irresistible. That storyline has always proven incorrect over time, though. The historically accurate storyline is the “slow start.” Over time, U.S. metrics tend to rise to about 12th to 15th globally, but no higher, ever. 


The bottom line is that it is quite typical for U.S. performance for almost any important new infrastructure-related technology to lag other nations. It never matters, in the end. 


Eventually, the U.S. ranks somewhere between 10th and 20th on any given measure of technology adoption. That has been the pattern since the time of analog voice. 


We often forget that six percent of the U.S. landmass is where most people live. About 94 percent of the land mass is unpopulated or lightly populated. And rural areas present the greatest challenge for deployment of communications facilities, or use of apps that require such facilities.

Saturday, November 7, 2020

Combining Network Access and Apps Businesses a Growing Trend

Are new service provider models--combining connectivity and apps--emerging? Some point to the examples of Rakuten, the Japanese online e-tailer that also has entered the mobile service provider business, or Reliance Jio, which includes both the Reliance Jio mobile business and a collection of digital content, transactions and apps businesses. 


Others would point to moves by telcos and cable companies into content ownership.


“Infrastructure is intersecting with digital services such as you have seen with Rakuten and Jio,” says Steve Mollenkopf, Qualcomm CEO. 


Others might add moves by the likes of Google and Facebook into connectivity service provider businesses (satellite, fiber to the home, mobile service provider), infrastructure (Telecom Infra Project) or devices (e-readers, smart speakers, video streaming devices). 


In fact, what all those moves show is expansion across the internet value chain by app providers into connectivity services, infrastructure and device portions of the ecosystem. Connectivity providers have made some moves into new applications, primarily entertainment video, and some are hopeful about new roles in edge computing or the internet of things.


At least so far, one might well argue that it has proven easier for app providers to move into adjacencies than for connectivity providers to do so. 


It might be a fruitful question to ask why that is the case, as any move into adjacent value chain roles involves moving outside the area of core competency. Such moves often also involve mastery of functions higher or lower on the protocol stack, so there is a possible challenge in terms of moving up the stack or down the stack.


source: Vermont IT Group


Some might argue it is--all other things being equal--easier to move down the stack than up the stack. When moving down the stack, the entity making the move is the “end user” or “business process” provider. Put simply, the advantage is that the business process provider knows exactly what it requires from lower levels of the stack. 


Matters are different for an entity moving from lower in the stack to higher levels. Lower levels increasingly are “horizontal” in focus, designed to support literally any conceivable buyer, entity or business function. A connectivity network is designed to support any device or user with a need for internet protocol communications, or any device using a specific standard, such as 4G or 5G or Wi-Fi. 


That is a lowest common denominator approach, and makes sense. In contrast, a business process provider knows precisely what it requires from lower levels, as those levels support its specific business. In many cases, it is not so much features but costs that are of concern. 


Just as “same functionality, lower price” or “higher functionality, lower price” always is an easily understood value proposition, the value drivers for a business process provider moving down the stack are equally clear. The reason hyperscale app providers build and own their own subsea networks is that they get what they want at lower overall costs. 


In other words, the business process provider knows precisely what it requires. The companies lower in the stack “have to guess” at what potential buyers will want, and have to be prepared to support all potential buyers (lowest common denominator) or optimize for a few verticals. 


The lowest common denominator strategy offers the greatest potential scale, but also the least differentiation. That is one reason many believe network slicing--the ability to create custom virtual private networks with distinct performance characteristics--is important. 


Network slicing might solve this problem (lowest common denominator versus optimized features), as experienced by connectivity providers.


There are some other, perhaps more subtle advantages for business process providers moving down the stack. Ubiquitous internet access helps app providers since their ability to gain and keep a customer requires internet access. That makes hyperscale app providers big supporters of ubiquitous, high quality, affordable internet access.


There are fewer obvious synergies for entities trying to move up the stack. It is hard to displace dominant suppliers in any of the stacks, so it almost always makes sense to specialize or differentiate when moving into any new adjacency, and especially up the stack. 


But that also poses a problem of scale, as differentiation necessarily means aiming for a segment of the market. Essentially, that narrows the potential financial return. Consider the possible roles for connectivity providers as internet of things platforms. 


Most would likely agree that no single provider can be successful in every business vertical. So Verizon has attempted to be a platform provider in the automotive space, and pitches its ThingSpace as a platform for connecting IoT devices to Verizon’s network. Some might note that the “platform” is mostly subscriber identification modules providing the communication function on Verizon’s mobile network. Some will question whether that is what is meant by the term “platform.”


Still, possible moves up or down the stack seem a growing issue for some tier-one service providers, simply because revenue growth opportunities in the core business are reaching, or already have reached, saturation. That can be seen in the percentage of total revenue coming from outside the communications service core. 


source: GSMA 


As this chart suggests, tier-one service providers are betting on growth outside their legacy communications core, and many have made substantial progress. 


If it is true that infrastructure and apps/content businesses are becoming synergistic, we can expect to see more moves blending the two--connectivity and apps--in the future, under common ownership.


Thursday, November 5, 2020

New Proposed DirecTV Asset Sale Makes More Sense

The report that AT&T is in discussions with private-equity firms to sell a significant minority stake in its DirecTV, AT&T Now and U-Verse pay-TV businesses is not surprising. Nor is the way the potential deal is being structured. 


As described, the transaction would shift legacy assets off AT&T’s balance sheet, while allowing AT&T to retain “majority economic ownership of the businesses.” That is important for its cash flow implications. 


My own analysis of the original value of the DirecTV purchase was its impact on AT&T cash flow. That is the same key issue for any sale of the asset: what does it do to cash flow?


As originally rumored, AT&T was looking to sell most of its DirecTV holding. That seemed unlikely to me simply because, in doing so, AT&T would be parting with an important source of free cash flow, which might have been as much as 13 percent of total cash flow, or possibly more, by some accounts. 


In late 2019 DirecTV was said to be spinning off about $4 billion in annual cash flow, which seems low to me, but appears to be about right. In the first full year of ownership, DirecTV likely produced $12 billion of free cash flow. 


That matters as AT&T needs prodigious amounts of cash to support its dividend payouts and to reduce outstanding debt. In 2019, for example, DirecTV cash flow represented as much as 93 percent of total interest payments and a huge portion of revenue.


Cash flow generators that big are rare, indeed. Though controversial, it is hard to conceive of any other investment AT&T could have made in 2014 that could possibly have generated as much free cash flow as did DirecTV.


To be sure, linear video in general, and satellite linear in particular, have been hard hit since then. The asset value has atrophied, to be sure. But if cash flow is the objective, some of us cannot conceive of any other acquisition that would have been possible and not drawn antitrust objections. 


The new rumored deal would reportedly include 30 percent to 49 percent of the combined pay-TV distribution businesses, moving assets off AT&T’s balance sheet, but preserving much of the cash flow.


That at least answers the question some might have had about any exit from DirecTV in total.


Tuesday, November 3, 2020

Competition and New Technology Underpin Near-Zero Pricing Trend

It is a truism that competition and new technology, in combination, have fundamentally changed the global telecom business. We all intuitively understand that competition leads to lower prices, or that technology allows disintermediation of value chains, which removes cost. 

source: A.D. Little 


One of the few core assumptions I always have used in my analytical work concerning the connectivity business is that near-zero pricing is a foundational trend for all connectivity products, as it tends to be for computing products as well. Consider internet transit pricing, for example. 


Back in 2014, Cloudflare estimated the cost of wide area network bandwidth as being lowest in Europe, in large part because so much internet traffic used peering rather than transit. 


source: Cloudflare


Two years later, in 2016, costs had dropped. The Middle East had the lowest WAN costs, and costs in other regions had dropped significantly. Where Australia’s costs had been as much as 20 times higher than Europe’s, two years later they were six times higher. 

source: Cloudflare


None of you would be surprised if transit prices continued to fall. Transit to Sydney, for example, had declined to about $5 per Mbps, where back in 2014 prices had been about $100 per Mbps. 
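The scale of that decline can be sketched with a quick compound-rate calculation. The $100 and $5 per Mbps figures are from the text above; the roughly six-year interval and the implied annual rate are derived assumptions, not figures from the source:

```python
# Implied annual rate of decline in transit prices to Sydney,
# from roughly $100 per Mbps in 2014 to about $5 per Mbps six years later.
start_price = 100.0  # USD per Mbps, 2014
end_price = 5.0      # USD per Mbps, ~2020
years = 6

annual_change = (end_price / start_price) ** (1 / years) - 1
print(f"Implied annual price change: {annual_change:.1%}")  # roughly -39% per year
```

A steady decline of nearly 40 percent per year is exactly the sort of curve the "near-zero pricing" framing describes: prices never reach zero, but halve faster than every two years.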

source: TeleGeography


Both Netflix and Microsoft business models seem to have been built on an expectation of near-zero pricing for a core input: computing cost for Microsoft, bandwidth cost for Netflix. 


The most startling strategic assumption ever made by Bill Gates was his belief that horrendously expensive computing hardware would eventually become so low cost that he could build his own business on software for ubiquitous devices.


How startling was the assumption? Consider that, in constant dollar terms, the computing power of an Apple iPad 2 would have cost between US$100 million and $10 billion in 1975, when Microsoft was founded.


source: Hamilton Project


The point is that the assumption by Gates that computing operations would be so cheap was an astounding leap. But my guess is that Gates understood Moore’s Law in a way that the rest of us did not.


Reed Hastings, Netflix founder, apparently made a similar decision. For Bill Gates, the insight that free computing would be a reality meant he should build his business on software used by computers.


Reed Hastings came to the same conclusion as he looked at bandwidth trends in terms both of capacity and prices. At a time when dial-up modems were running at 56 kbps, Hastings extrapolated from Moore's Law to understand where bandwidth would be in the future, not where it was “right now.”


“We took out our spreadsheets and we figured we’d get 14 megabits per second to the home by 2012, which turns out is about what we will get,” says Reed Hastings, Netflix CEO. “If you drag it out to 2021, we will all have a gigabit to the home." So far, internet access speeds have increased at just about those rates.
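The Hastings-style spreadsheet exercise amounts to assuming home bandwidth compounds at a steady annual rate. A minimal sketch: the 14 Mbps (2012) and roughly 1 Gbps (2021) figures are from the quote above, while the ~60 percent annual growth rate is derived from those two points, not a figure from the post itself:

```python
# Derive the annual growth rate implied by the two quoted data points,
# then project forward from 2012 at that constant rate.
mbps_2012 = 14.0
mbps_2021 = 1000.0  # ~1 gigabit

rate = (mbps_2021 / mbps_2012) ** (1 / (2021 - 2012)) - 1
print(f"Implied annual bandwidth growth: {rate:.0%}")

for year in (2012, 2015, 2018, 2021):
    projected = mbps_2012 * (1 + rate) ** (year - 2012)
    print(f"{year}: ~{projected:,.0f} Mbps")
```

The point of the exercise is the one Hastings made: plan against where a compounding curve will be, not where it is "right now."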


The scary point is that prices in the telecom business seem to have a “near-zero” trend. That does not mean absolute zero, but simply prices so low users and customers do not have to think much about using the products. 


That, of course, has fundamental implications for owners of connectivity businesses. Near-zero pricing helps create demand for internet access services, even as substitutes emerge for core voice and messaging services. 


Near-zero pricing enables the construction and operation of the networks and creation of the apps and services delivered over the networks. Near-zero pricing also enables new business models that were impossible in the analog era.


How Much Can Telcos Cut Sales Costs?

Intangible products such as music, video, print content, banking transactions and even communications services are among those most easily sold “online” or “digitally,” displacing physical forms of distribution. 


Communications products also are intangible, so a logical question is how channels of distribution might change over time, with “sales” and “fulfillment” becoming more virtual and less physical. 

source: A.D. Little 


The issue is whether digital fulfillment allows connectivity providers to cut operating costs or capital investment.  


For consumer mobility services, the switch might be experienced as ordering a new phone online and then activating online, with no need to visit a physical retail outlet. Small business customers might find basic data and voice services could be ordered online as well. 


Eventually, even more complicated enterprise services might be sold without use of sales forces. That might seem fanciful, but consider the traditional value of a human enterprise sales force: expert knowledge of network offers and of business process requirements, and the ability to match needs with solutions. 


Even if we assume that every enterprise situation is custom, to a significant extent, there are patterns, which means rules can be created. 


And any rules-based process can be enhanced by use of artificial intelligence systems. That means, in principle, that the value of human experts should be capable of replication in an AI-enhanced sales and fulfillment process. 


source: A.D. Little 


If one assumes that connectivity providers must reduce operating and capital investment costs to maintain profit margins in slow-growth to no-growth markets, then sales and customer care costs are among the areas where the biggest opportunities for savings might be found. 


Monday, November 2, 2020

Fixed Network Business Models Now Based on "Dumb Pipe"

Intangible products such as music, video, print content, banking transactions and even communications services are among those most easily sold “online” or “digitally.” Another way of describing the change in channels of distribution is to note that, over time, “sales” and “fulfillment” became more virtual and less physical. 


So the issue is the extent to which connectivity services sold to consumers and small businesses might also become more “virtual” over time. For consumer mobility services, the switch might be experienced as ordering a new phone online and then activating online, with no need to visit a physical retail outlet. 


source: A.D. Little 


That retail virtualization is perhaps a mirror of the content and applications virtualization that already has reshaped the connectivity business. 


“Over the top” applications and services are more than a revenue model, a strategy and an asset ownership model. They reflect fundamental changes in how computing and communications networks are designed and operated. 


In a broad sense, OTT represents the normal way any computing network operates, and since all telecom networks now are computer networks, there are clear business model implications. 


Though it is so familiar we hardly notice it any more, communications network architecture, computing and software architecture also mirror a profound change in possible communications, media and content industry business models. 


The separation of access from apps, transport from other functions now is a fundamental reality of communications, software design and applications. The whole idea is to compartmentalize and separate computing or communications functions so development can happen elsewhere without disrupting everything else. 


The desired outcome is the ability to use any app on any device on any network, while making changes and upgrades rapidly within each layer or function. Abstraction is another way of describing the architecture. Devices do not require detailed knowledge of what happens inside the network black box (which is where the notion of “cloud” came from). 


Devices only need to know the required interface. That also explains the prevalence of application programming interfaces, which likewise allow the use of abstracted functions. 
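The layering principle described above can be illustrated with a toy sketch: the application depends only on an abstract transport interface, never on the particulars of the network beneath it. All of the names here are invented for illustration, not drawn from any real system:

```python
from abc import ABC, abstractmethod

class Transport(ABC):
    """What the application layer needs to know about the network; nothing more."""
    @abstractmethod
    def send(self, data: bytes) -> None: ...

class FiberLink(Transport):
    def send(self, data: bytes) -> None:
        print(f"fiber: {len(data)} bytes")

class FiveGLink(Transport):
    def send(self, data: bytes) -> None:
        print(f"5G: {len(data)} bytes")

def stream_video(link: Transport) -> None:
    # The app works over any network that honors the interface --
    # the "any app, any device, any network" outcome described above.
    link.send(b"video-chunk")

stream_video(FiberLink())  # fiber: 11 bytes
stream_video(FiveGLink())  # 5G: 11 bytes
```

Either network can be swapped in, or upgraded internally, without the application changing at all; that compartmentalization is what gives "over the top" providers their reach.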


What we often forget is that these technology conventions have business model implications. Simply stated, the business model (all the inputs and operations needed to supply a product to a customer for a profit) mirrors the architecture of software and networks.

source: Henry Chesbrough 


Which is to say business models now are built on abstracted ecosystems and value chains. The clearest illustration of that is the phrase “over the top,” which describes the ability of any third party application or service provider to reach any customer or user on any standard internet connection.


That “open” process contrasts sharply with the old “closed” analog telco model where the only apps or devices that could be used on the network were owned or permitted by the connectivity services provider. 


That is why the terms “over the top” and “dumb pipe” have developed. Where in the past telcos sold services that used a network (voice, messaging, video entertainment), now they also sell “data network access,” where the product the customer buys is, strictly speaking, a “dumb pipe” that enables access to applications. 


The irony is that, to the extent that dumb pipe internet access is the foundational service now sold to fixed network consumers, and a core product for mobile network customers, revenue streams now are built on the dumb pipe.


Keep in mind that all telecom networks now are computer networks. The value lies in enabling access to applications. Some of those apps are owned by the connectivity provider (public network voice, public network messaging, linear or OTT entertainment video, virtual private network services, private line, hosted voice and--in some cases--enterprise applications). 


But the dominant value of the dumb pipe internet access is access to all other third party applications based on delivery using the public internet. 


The great irony is that, as much as connectivity providers “hate” being dumb pipe providers, their business models now are based on it.


Friday, October 30, 2020

"Digital Transformation" Will be as Hard as Earlier Efforts at Change

New BCG research suggests that 70 percent of digital transformations fall short of their objectives. 


That would not surprise any of you familiar with the general success rate of major enterprise technology projects. From 2003 to 2012, only 6.4 percent of federal IT projects with $10 million or more in labor costs were successful, according to a study by Standish, noted by Brookings.

source: BCG 


IT project success rates range between 28 percent and 30 percent, Standish also notes. The World Bank has estimated that large-scale information and communication projects (each worth over U.S. $6 million) fail or partially fail at a rate of 71 percent. 


McKinsey says that big IT projects also often run over budget. Roughly half of all large IT projects—defined as those with initial price tags exceeding $15 million—run over budget. On average, large IT projects run 45 percent over budget and seven percent over time, while delivering 56 percent less value than predicted, McKinsey says. 


Significantly, 17 percent of IT projects go so badly that they can threaten the very existence of the company, according to McKinsey. 


The same sort of challenge exists whenever telecom firms try to move into adjacent roles within the internet or computing ecosystems. As with any proposed change, the odds of success drop as the number of required approvals or activities increases.


The rule of thumb is that 70 percent of organizational change programs fail, in part or completely. 


There is a reason for that experience. Assume you propose some change that requires just two approvals to proceed, with the odds of approval at 50 percent for each step. The odds of getting “yes” decisions in a two-step process are about 25 percent (.5x.5=.25). In other words, if only two approvals are required to make any change, and the odds of success are 50-50 for each stage, the odds of success are one in four. 


source: John Troller 


The odds of success get longer for any change process that actually requires multiple approvals. Assume there are five sets of approvals. Assume your odds of success are high--about 66 percent--at each stage. In that case, your odds of success are about one in eight for any change that requires five key approvals (.66x.66x.66x.66x.66≈.125). 
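The compounding at work here takes only a few lines to show. A minimal sketch, using the same per-stage probabilities as the text:

```python
# Overall odds of approval when n independent stages each succeed
# with probability p: the per-stage probabilities simply multiply.
def success_odds(p: float, n: int) -> float:
    return p ** n

print(f"Two stages at 50%:  {success_odds(0.5, 2):.0%}")   # 25%
print(f"Five stages at 66%: {success_odds(0.66, 5):.1%}")  # 12.5%, about one in eight
```

Even quite favorable per-stage odds erode fast as stages multiply, which is the core of the argument about why complex change programs fail so often.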


The same sorts of issues occur when any telecom firm tries to move out of its core function within the ecosystem and tries to compete in an adjacent area. 


Consultants at Bain and Company argue that the odds of success are perhaps 35 percent when moving to an immediate adjacency, but drop to about 15 percent when two steps from the present position are required and to perhaps eight percent when a move of three steps is required.

source: Bain and Company


The common thread here is that any big organizational change, whether an IT project or a move into new roles within the ecosystem, is quite risky, even if necessary. The odds of success are low, for any complex change, no matter how vital.


DIY and Licensed GenAI Patterns Will Continue

As always with software, firms are going to opt for a mix of "do it yourself" owned technology and licensed third party offerings....