Friday, February 26, 2021

The Counter-Intuitive Take on Wisdom of DirecTV Purchase

With the spin-off of AT&T’s linear video assets into a separate entity called DirecTV, owned 30 percent by TPG, there will be much commentary about the mistake AT&T made in buying DirecTV in the first place.  


That is a legitimate observation. An asset that depreciates from about $67 billion enterprise value (equity and debt) to perhaps $16 billion, in about five years, is hard to justify with the word “success.” 


In fact, you'd be hard pressed to find a single commentator arguing that the acquisition had any value at all.


But free cash flow is really important to AT&T, and the firm could not--for antitrust reasons--have done something more "sensible," such as buying additional mobile and fixed network assets. Facing a number of not-so-good alternatives, AT&T made a choice that almost certainly boosted free cash flow more than any other investment or acquisition antitrust authorities would have permitted.


On the other hand, nobody has been able to demonstrate that there was any other investment of $67 billion AT&T could have made, in 2015, that would have generated $4.5 billion in annual free cash flow immediately. The only possible alternative that might have done so, and passed antitrust review, was a series of smaller purchases of video content assets, had they been available. 


Content assets (studios or networks) tend to sell for enterprise value ratios ranging from 13 times to 23 times EBITDA. 


In other words, to buy $1 billion in free cash flow (using EBITDA) might require buying an asset costing somewhere between $13 billion and $23 billion. To acquire an asset generating $4.5 billion might have required purchasing assets costing between $58.5 billion and $103.5 billion.
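As a quick check of that multiple arithmetic (a sketch using the figures above, treating EBITDA as a proxy for free cash flow):

```python
# Back-of-envelope: cost to "buy" free cash flow at content-asset multiples.
# All inputs come from the figures above; EBITDA is used as a proxy for FCF.
target_fcf = 4.5                       # $ billions of annual free cash flow sought
low_multiple, high_multiple = 13, 23   # EV/EBITDA multiples for content assets

low_cost = target_fcf * low_multiple    # $58.5 billion
high_cost = target_fcf * high_multiple  # $103.5 billion
print(f"${low_cost:.1f}B to ${high_cost:.1f}B")  # $58.5B to $103.5B
```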


The point is that generating an incremental $4.5 billion in free cash flow is an expensive proposition, even ignoring the time it can take to create and build a free cash flow stream of such size. 


Free cash flow matters for AT&T because the firm requires huge amounts of cash to support its dividend and fund capital investments. 


To be sure, perhaps nobody actually expected free cash flow to be so threatened by attrition of the gross revenue stream. Some might argue that AT&T “should have” invested that same amount in its mobile or fixed networks businesses. 


Precisely what a mobile investment might have entailed is not so clear, as AT&T could not simply have purchased another mobile company, for antitrust reasons. The same argument holds for purchase of additional fixed network companies. Antitrust concerns made either of those choices impossible. 


If acquisitions were off the table, what about investments in the core networks? To be sure, AT&T might have invested in smaller cells, or more cells, to boost coverage or capacity. The problem is that I have seen no analysis that suggests how such investments--though increasing customer satisfaction or reducing churn--would generate $4.5 billion in additional free cash flow.


It might be argued that AT&T could have put all that cash ($67 billion) into fiber-to-home upgrades, for example. Even a cursory analysis shows that this could not have generated $4.5 billion immediately. It is doubtful that even at the end of five years, free cash flow would be anywhere close to $4.5 billion. 


AT&T passes perhaps 62 million housing units. In 2015, it was able to deliver video to perhaps 33 million of those locations, mostly using a fiber to neighborhood approach. 


Upgrading just those 33 million locations to FTTH would take many years. A general rule of thumb is that a complete rebuild of a metro network takes at least three years, assuming capital is available to do so. 


Even if AT&T were to attempt a rebuild of those 33 million locations, and assuming it could build three million units every year, it would still take more than a decade to finish the nationwide upgrade. The point is that incremental free cash flow would not approach anything like $4.5 billion, even after a decade. 


Of those 33 million locations, perhaps 20 percent were already customers. Since AT&T competes against a cable operator at virtually all those locations, the issue is how much video market share AT&T could wrest away from the incumbent cable operator, which typically has about 60 percent share (a telco might get 20 percent, while satellite gets about 20 percent). 


So the realistic upside from new video subscribers ranges from a few percent to possibly 10 percent. Assume AT&T could build FTTH for a cost of about $2,000 per location, and can build three million locations a year. At the end of one year, AT&T would have spent $6 billion. Assume subscription lift was 10 percent. That would add 300,000 video subscribers.


Assume annual revenue per account is about $80. That generates about $24 million in new revenue. Assume free cash flow is about 20 percent of gross revenue. That suggests incremental free cash flow of perhaps $4.8 million. 


Of course, FTTH also would allow AT&T to compete with cable for internet access accounts more effectively. Assume the local cable operator has 70 percent share, while AT&T has 30 percent or less. 


Assume average annual revenue is $540 per account. Assume AT&T could gain five percent market share within a year. The FTTH upgrade then adds 150,000 additional internet access accounts, or about $81 million in incremental gross revenue. Assume margins are about 40 percent, and that all that is free cash flow (generous, but this is an exercise). 


Incremental broadband revenue yields $32 million or so in incremental free cash flow. Add that to the video uplift and AT&T gains about $37 million in incremental free cash flow from the FTTH upgrades. 


Assume the results are consistent across the homes AT&T could continue to upgrade (33 million total). After five years, AT&T might be able to grow free cash flow by $185 million annually, after investing $30 billion. Hopefully I have done the math correctly. 
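Collected in one place, the back-of-envelope model above looks like this (every input is an assumption already stated in the text):

```python
# FTTH upgrade sketch: all inputs are the assumptions stated in the text.
locations_per_year = 3_000_000
cost_per_location = 2_000              # dollars per location passed
years = 5

# Video: 10 percent subscription lift, $80 annual revenue, 20 percent FCF margin
video_fcf = (locations_per_year * 0.10) * 80 * 0.20          # ~ $4.8 million

# Broadband: 5 percent share gain, $540 annual revenue, 40 percent margin
broadband_fcf = (locations_per_year * 0.05) * 540 * 0.40     # ~ $32.4 million

annual_fcf_per_cohort = video_fcf + broadband_fcf            # ~ $37 million
capex = locations_per_year * cost_per_location * years       # $30 billion
run_rate_fcf = annual_fcf_per_cohort * years                 # ~ $186 million

print(f"capex ${capex / 1e9:.0f}B, year-5 incremental FCF ${run_rate_fcf / 1e6:.0f}M")
```

The small difference from the $185 million figure above is rounding: $37.2 million per yearly cohort rather than an even $37 million.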


In buying DirecTV, AT&T spent $67 billion. But it began earning $4.5 billion in free cash flow immediately. 


In other words, diverting $30 billion of capital to FTTH upgrades produces a fraction of the free cash flow that DirecTV did. True, the asset declined much faster than expected. But if the point is free cash flow generation, avoiding the DirecTV purchase and investing in FTTH would have produced far-worse returns. 


Sure, losing so much equity valuation is painful and unwanted. But investing in FTTH as an alternative might have been far worse, as a generator of free cash flow, even if asset values would arguably have been higher. 


Sometimes a firm has no good choices. This might have been one of those instances. Keep in mind that DirecTV is still spinning off $4 billion in annual free cash flow.


Thursday, February 25, 2021

No Spectrum, No Business

It perhaps goes without saying that ownership of spectrum licenses is a fundamental requirement for operation, at scale, for a mobile service provider. Business models can be constructed on a wholesale basis, but operation at scale, with higher profit margins, is only possible on a facilities-based basis.


Though many will worry about the debt implications of heavy spending by Verizon and AT&T to acquire mid-band spectrum, it is something of an existential matter. Without the new spectrum--despite the capital spending implications--neither firm could hope to remain competitive in the 5G era.


For 5G, mid-band spectrum now is crucial for coverage, as well as boosting speed. In the U.S. market, that has been supported by new mid-band spectrum made possible by the Citizens Broadband Radio Service and C-band spectrum auctions.


Both auctions are important, but the C-band spectrum will prove to be foundational for U.S. 5G, in the same way that the AWS-3 auctions were foundational for U.S. 4G.


Verizon was the top buyer, while AT&T acquired the second-greatest amount of assets. In fact, it appears all the spectrum that is available soonest (the A block) was acquired by those two firms. 

source: Sasha Javid 

Cellco Partnership, doing business as Verizon Wireless, won nearly 62 percent of the A block licenses, with AT&T buying 28.5 percent of the A block licenses. The A block was favored because it can be put to commercial use the fastest, with the least amount of spectrum clearing and time delay to do so. 


T-Mobile, already possessing a trove of mid-band spectrum, won licenses covering 72 markets. 


Most observers expect the A block spectrum to be commercially available by the end of 2021, while the other blocks will take longer to clear. Still, with the A block representing at least 80 percent of total C-band spectrum auctioned, it is clear that most of the C-band spectrum will be available for use quickly. 


Verizon paid more than $45 billion, while AT&T committed $23.4 billion. It appears both AT&T and Verizon acquired licenses in the same 406 markets, essentially blanketing the entire continental United States. 


Perhaps surprisingly, Comcast and Charter Communications, the largest U.S. cable operators, did not bid at all. 


source: Sasha Javid 


To give you some idea of the huge amount of mid-band coverage spectrum Verizon and AT&T acquired, consider that Verizon gained an average of 160 MHz nationwide in the C-band auction, while AT&T got about 80 MHz covering about 95 percent of the land mass in the same auction. 


Prior to the C-band and 2020 Citizens Broadband Radio Service auctions, Verizon had less than 125 MHz of low-band and mid-band spectrum in total. AT&T had less than 150 MHz in total spectrum assets in the low-band and mid-band ranges. 


Verizon more than doubled its trove of low-band and mid-band spectrum in a single auction. AT&T increased its low-band and mid-band assets by about 50 percent. 




Monday, February 22, 2021

How Bundle Behavior Distorts Price Comparisons

One often sees comparisons of internet speeds or prices across countries. Such efforts always involve methodological choices that obscure or inflate differences. 

The researcher has to choose one type of plan that is common in all the evaluated countries; convert currencies; adjust for differences in general price levels to determine “cost,” not just nominal price; ignore any discounts or promotional plans customers actually might use; and ignore actual buying behavior (studying only the retail plans themselves). 


For example, a recent survey of Malta consumers found that 94 percent purchased product bundles. About 36 percent purchased a quadruple play package including fixed network voice, entertainment video, internet access and mobility services. Some 39 percent bought a triple play package including fixed network voice, internet access and entertainment video. 

source: Malta Communications Authority 


That illustrates the potential misunderstandings possible when comparing international prices, adjusted for purchasing power parity or not. What matters is the actual behavior of consumers, not posted retail prices. 


The simple fact is that very few customers in Malta actually buy a stand-alone internet access service. 


If, as in Malta, most people buy bundles, then the stand-alone price for any single service has little meaning, since few people actually buy those products. 


To determine what people actually pay for their services, when multiple products are purchased in a bundle, is guesswork based on allocated costs. One has to attribute costs to each product in the bundle, and each service provider might have different objectives, in that regard. 


It is rational to protect gross revenue for the product or products generating the most revenue. It is rational to protect profit margins for the product or products that generate the most profit.  The bundle itself is a primary way of doing so, especially when the bundle reduces customer churn. 


Bundle purchases also complicate, for such reasons, efforts to compare prices of products across the planet. It is misleading to compare stand-alone prices for a product when most customers do not pay the posted retail prices when promotional plans are in force and widely purchased.


Sunday, February 21, 2021

Ecosystem, Platform, Pipe: What's the Difference?

Ecosystem might be the term that is replacing platform as a broad description of business strategy for would-be market leaders. And it is likely to be even harder to achieve, for most firms, in most industries. 


One study found that attempted platform failure rates were about 80 percent, about the same failure rate we see in information technology projects, firm startups and most change initiatives. 


The difference goes beyond nomenclature, even if many use the terms essentially interchangeably. 


Ecosystems can be thought of as “networks of networks.” Think of the internet. Does it have a single active organizer? Is it under any single entity’s control? Closer to home, think of the global telecom network, which is a network of networks. 


Travel is an industry that might be thought of as an ecosystem, but it is not centrally managed. A platform, on the other hand, always has an organizer or orchestrator. 


Platforms can be created by a single entity, though not without the participation of many other ecosystem participants. Uber and Lyft can create platforms for ridesharing, but not lodging, which can be organized by firms such as Airbnb. 


Some might argue that the platform is the tool or foundation other products, supplied by third parties, build upon. In that taxonomy, the ecosystem would refer to the totality of products and relationships built on the platform. 


That makes less sense to me than a taxonomy based on whether there is an active orchestrator or aggregator. A biological ecosystem has no human or mechanical organizer. It develops, but is not “planned” or “managed.” A platform always is managed, with admission and operating protocols. 


Network effects are key, in either case. A network effect essentially means that value grows as the number of users grows. It is different from mere scaling. More scale leads to lower unit costs, but no increase in value. 

source: Linkedstarsblog 


A network effect increases value as the number of users increases. 
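As a rough illustration of why a network effect differs from mere scale: under the common Metcalfe-style heuristic (an approximation, not a formula from this article), potential connections grow as n(n-1)/2, much faster than the user count itself:

```python
# Metcalfe-style heuristic: potential pairwise connections in a network of n users.
def connections(n: int) -> int:
    return n * (n - 1) // 2

# A 10x increase in users yields roughly a 100x increase in potential connections,
# which is why network-effect value grows non-linearly while unit costs merely scale.
for n in (10, 100, 1000):
    print(n, connections(n))
```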


One might generalize and argue that ecosystems are about increased end user value, while platforms are more directly about firm value and revenue. The total value of the Apple iPhone ecosystem for end users exceeds the value reaped by any single supplier participant. 


That noted, even when most of the ecosystem revenue is earned by third party participants, the orchestrator of the platform still gains value, if not always huge amounts of direct revenue, as when the orchestrator profits by transaction fees, commissions, licenses, support services, advertising, referrals or some other revenue model. 


An ecosystem is unitary; a platform can exist as one of many. Amazon or Alibaba can create platforms for some retail products, but arguably not all, while existing within the broad “retail” ecosystem. 


An ecosystem is a community of interacting entities, some say. The members of the ecosystem can be organizations, businesses and individuals, all creating value for one another in some way; mostly by producing or consuming goods and services.


A platform is the way a particular community or ecosystem is organized to interact with one another and to create value. A platform typically is focused on bringing part of the “ecosystem” together and reducing friction for interactions to take place. 


source: Kilian Veer 


Uber, Lyft, Airbnb, Google, Apple, Amazon and Facebook might be considered platforms that organize some of the value created by the internet. But the internet ecosystem is too big to be anything but a collection of value created by various platforms. 


What is more certain is that a business model based on building an ecosystem or platform is not the same as building a traditional pipe business model where a firm creates and sells a product or products that have a specific role in a value chain. 


By “pipe” is meant not just “communications services” but any product created and sold to customers. Most products and most firms historically use a “pipe” business model, in that sense. 


Indeed, there is a key difference between selling in an ecosystem and organizing the ecosystem. Amazon and Alibaba create and organize their platforms and ecosystems, if you believe both are ecosystems. We can say with certainty that both are platforms.


It might be debatable in some quarters whether those firms--or others such as eBay--actually have created ecosystems. 


Every other firm buying or selling on or through the platform is part of the ecosystem. 


Every successful platform or ecosystem produces value in excess of what is contributed by any single firm. That is generally not true for a pipeline product, where value is fixed: it works, or does not; a specific problem is solved; a specific capability is gained. 


The value of the entire internet exceeds the specific value contributed by each point solution. 


The role of any business or entity within a value chain is determined when it asks “who are our customers?” A chip foundry has chip suppliers as customers; chip suppliers have device manufacturers as customers; device suppliers have consumer or business users as customers; applications have users (subscription models) or advertisers as customers. 


Platforms can take aggregator or orchestration roles within an ecosystem. Platforms can create the marketplace or transaction venue (aggregating suppliers and buyers) or orchestrate the activity of other participants within the ecosystem. 


Platform business models and ecosystems, broadly speaking, seem destined to play a greater role in digitally-enabled industries. But most firms will never become platforms.

Scale Means Something Different for Platforms

Scale matters for any business, but has outsized returns for platform businesses. Simply, a pipeline business, which produces and sells a product, gains unit cost advantages from scale. 


A platform, on the other hand, gains unit cost advantages from growing scale, but really profits from a non-linear increase in value for end users, which leads to more transactions, typically. To put it another way, a typical firm gains when it sells more of whatever it aims to sell. 

source: Linkedstarsblog 


A platform gains when its members and participants buy and sell more of whatever they desire to sell or buy. 


Platforms use a different business model from typical pipeline businesses, in other words. A pipeline business gains from selling more units. A platform gains by adding more participants, facilitating more transactions, embracing more solutions and satisfactions for users. 


Digital-based applications platform firms scale so fast in part because of the economics of producing software, which has close to zero marginal cost at scale. The cost of producing and distributing the next unit of a software product is quite low. 


Digital marketplaces and platforms also scale fast because they orchestrate the commercial use of assets owned by third parties. Amazon does not manufacture the products it sells. Uber does not own the fleets of autos that its driver partners use. Airbnb does not own the rooms that its users rent from participant suppliers. 


Marketplace platforms can scale as fast as they can add partners, which themselves organize the production of goods sold on the platform. Then the network effects kick in. The more product variety available, the more buyers have an incentive to use the platform. 


Also, the more buyers using the platform, the more valuable the platform becomes for sellers. The platform gains value as the number of nodes (buyers and sellers) grows and transaction volume and speed increase.  


At some point, the large number of buyers also creates advertising value, creating an advertising platform as a byproduct of the marketplace transaction volume and user base. 


Aggregation and orchestration also add value. Amazon and Alibaba create a viable sales channel for small firms that would not otherwise be able to reach potential buyers. The marketplaces amass a huge potential audience and prospect base--global or national rather than local--as well as the fulfillment mechanism.  


In a sense, huge marketplaces and platforms also aggregate skill, capital and other resources beyond the direct ability of any single firm to control. No single company can ever employ more than a small fraction of the talented, creative, smart or innovative people. In essence, a huge marketplace or platform makes all those resources available to be monetized. 


Some will argue the shift to orchestration or aggregation will be hard for cultural reasons. Others will simply point out that creating a platform requires the assent of many other entities. Participating firms must conclude that being part of any specific platform has business advantages. Participating buyers must conclude that the platform has value enough to make it a choice over other buying alternatives. 


Nor is creation of a platform something a small firm can entertain. It takes resources to build platform scale. Firms that succeed in becoming platforms might start small, but they do not stay small very long. 


To be sure, scale matters even for pipeline firms. We see the same general effect of scale when looking at profit and market share for any pipeline firm in any industry.


80/20 Rule for Telecom Revenue Sources

This is a good illustration of the Pareto theorem, which states that 80 percent of instances or outcomes in business or nature come from 20 percent of the cases or effort. The Pareto theorem is popularly known as the 80/20 rule.


The graph shows the percentage of total mobile service revenue generated by cell sites, if one apportions the mobile subscription fee to the sites that actually are used by any customer. 


Pareto accurately describes the actual use of mobile cell sites and radios, as well as the generation of revenue. 


Half of mobile revenue is driven by traffic on about 10 percent of sites. Fully 80 percent of revenue is driven by activity on just 30 percent of cell sites. 



source: Medium 
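For what it is worth, a pure Pareto (power-law) distribution fitted to the classic 80/20 split is only an idealization: the cell-site figures above are empirical, and such a fit would imply roughly 73 percent of revenue from the top 10 percent of sites rather than the 50 percent shown. A sketch:

```python
import math

# Fit a pure Pareto distribution to an "80 percent from 20 percent" split,
# then ask what share the top 10 percent would carry under that idealization.
def alpha_from_split(top_frac: float, share: float) -> float:
    # Lorenz curve of a Pareto distribution: share = top_frac ** (1 - 1/alpha)
    return 1.0 / (1.0 - math.log(share) / math.log(top_frac))

def top_share(top_frac: float, alpha: float) -> float:
    return top_frac ** (1.0 - 1.0 / alpha)

alpha = alpha_from_split(0.2, 0.8)       # ~1.16 for the classic 80/20 split
print(round(top_share(0.2, alpha), 2))   # 0.8, by construction
print(round(top_share(0.1, alpha), 2))   # ~0.73 under a pure power law
```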


The theorem also explains why 80 percent of revenue generated by challengers in the telecom business comes from about 20 percent of firms. 


In organizational terms, Pareto implies that 80 percent of results are driven by 20 percent of the actions or people. 


A perhaps-obvious question should arise: if 80 percent of results are generated by 30 percent of the instances, sites or actions, why bother with the other 70 percent? In part, the answer is network effect. A mobile operator whose network only covers 30 percent of the land mass where people actually live and work would not be able to compete with a supplier whose network covers nearly all the places people live and work. 


The traditional rule of thumb for a fixed network is that it makes money in urban areas, breaks even in suburban areas and loses money in rural areas. Profit is a Pareto distribution, but what mass market telco could survive if it refused to sell to rural or suburban customers?


What social, voice, messaging or other network would do as well if it connected just 30 percent of people you wanted to reach? In other words, a network often must connect “most” potential nodes to drive value. 


Universal service requirements for public telecom networks exist for that reason.


Pareto also exists because value for any single user depends on the number of other people or entities that specific person might ever wish to connect with. The actual set will be different for each person. But the network has to enable connections in unlimited fashion, so that any specific set can be created.


Saturday, February 20, 2021

How Fast Will Fixed Networks Be by 2050?

How fast will the headline speed be in most countries by 2050? Terabits per second is the logical conclusion, even if the present pace of speed increases is not sustained. Though the average or typical consumer does not buy the “fastest possible” tier of service, the growth of headline tier speed since the time of dial-up access has been remarkably consistent. 


And the growth trend--50 percent per year speed increases--known as Nielsen’s Law--has operated since the days of dial-up internet access. Even if the “typical” consumer buys speeds an order of magnitude less than the headline speed, that still suggests the typical consumer--at a time when the fastest-possible speed is 100 Gbps to 1,000 Gbps--still will be buying service operating at speeds not less than 1 Gbps to 10 Gbps. 


Though typical internet access speeds in Europe and other regions at the moment are not yet routinely in the 300-Mbps range, gigabit per second speeds eventually will be the norm, globally, as crazy as that might seem, by perhaps 2050. 


The reason is simply that the historical growth of retail internet bandwidth suggests that will happen. Over any decade period, internet speeds have grown 57 times. Since 2050 is three decades off, headline speeds of tens to hundreds of terabits per second are easy to predict. 

source: FuturistSpeaker 
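The compounding is easy to verify (the 2021 headline speed used here is an assumption for illustration only):

```python
# Nielsen's Law: headline internet speed grows about 50 percent per year.
growth = 1.5
decade_multiplier = growth ** 10      # ~57.7x per decade, the "57 times" above

# Hypothetical projection: assume a 1 Gbps headline speed in 2021
headline_2021_gbps = 1.0
headline_2050_gbps = headline_2021_gbps * growth ** (2050 - 2021)
print(f"~{decade_multiplier:.1f}x per decade; ~{headline_2050_gbps / 1000:.0f} Tbps by 2050")
```

Starting from a higher 2021 base, or assuming headline speeds already in the tens of gigabits, pushes the 2050 figure into the hundreds of terabits per second.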


Some will argue that Nielsen’s Law cannot continue indefinitely, as most would agree Moore’s Law cannot continue unchanged, either. Even with some significant tapering of the rate of progress, the point is that headline speeds in the hundreds of gigabits per second still are feasible by 2050. And if the typical buyer still prefers services an order of magnitude less fast, that still indicates typical speeds of 10 Gbps to 30 Gbps or so. 


Speeds of a gigabit per second might be the “economy” tier as early as 2030, when headline speed might be 100 Gbps and the typical consumer buys a 10-Gbps service. 


source: Nielsen Norman Group 


If consumers on every continent purchased service at equivalent rates, in 2050 one would expect Asia to represent nearly 60 percent of demand, Africa nearly 20 percent. Europe would represent seven percent of demand, South America nine percent, North America four percent. 


source: Chegg 


Most observers would guess Asia will do about that well, while Africa lags. Europe and North America likely will over-index, while South America might do about what the population alone would predict. 


Though the correlation might be less than one might expect, fiber to home deployment should correlate with terabit take rates in 2050. The wild card is 8G mobile access. As mobile speeds likewise continue to increase, most consumers might prefer wireless access to any fixed connection. 


In mobility as in the fixed network, the theoretical headline speed is not matched by mass market commercial experience. Still, the pattern has been that each next-generation mobile network features data speeds an order of magnitude higher than the prior generation. 

source: Voyager8 


Assume that in its last release, 5G offers a top speed of 20 Gbps. The last iteration of 6G should support 200 Gbps. The last upgrade of 7G should support 2 Tbps. The last version of 8G should run at a top speed of 20 Tbps.
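That one-order-of-magnitude-per-generation progression is trivial to tabulate (the 20 Gbps 5G starting point is the assumption above):

```python
# Peak speed per mobile generation, assuming 10x per generation from 5G's 20 Gbps.
peak_gbps = {"5G": 20}
for prev, nxt in [("5G", "6G"), ("6G", "7G"), ("7G", "8G")]:
    peak_gbps[nxt] = peak_gbps[prev] * 10

print(peak_gbps)  # {'5G': 20, '6G': 200, '7G': 2000, '8G': 20000}, i.e. 8G ~ 20 Tbps
```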


At that point, the whole rationale of fixed network access will have been challenged, in many use cases, by mobility, as early as 6G. By about that point, average mobile speeds might be so high that most users can easily substitute mobile for fixed access.


DirecTV-Dish Merger Fails

DirecTV’s termination of its deal to merge with EchoStar, apparently because EchoStar bondholders did not approve, means EchoStar continue...