Friday, December 30, 2022

FTTH Does Not Predict Gigabit Take Rates

The latest data on United Kingdom fiber-to-home coverage shows that FTTH availability is not the same thing as “homes that actually buy FTTH service.” For one thing, rival hybrid fiber coax cable networks appear to account for the majority of U.K. accounts buying service at gigabit-per-second rates. 


source: ThinkBroadband 


That is not to deny FTTH adoption rates will climb over time. But FTTH availability does not highly correlate with consumer demand. Nor does FTTH availability highly correlate with “speed tier purchased.”


In the U.S. market, for example, AT&T says that about 30 percent of customers in areas where FTTH is available buy a speed tier of 1 Gbps. The rest buy some other lower speed tier. 


The implication is that FTTH enables faster speeds, but customer demand does not highly correlate with uptake of service tiers at the highest advertised available rate. FTTH might be “necessary” for some internet service providers, but it is not “sufficient” to drive gigabit service tier take rates. 


Customers tend to buy service plans that offer neither the slowest speeds nor the fastest, but someplace in the middle that offers a value proposition that is “good enough quality for a reasonable price.”


In markets with competitors using their own facilities, take rates for the fastest tiers of service might always be limited, as competent competitors will get a significant share of what demand exists. 


In two-provider markets, that share could range from 40 percent to perhaps 50 percent. In markets with multiple providers operating at scale, it is conceivable that take rates could dip into the 20ish-percent range. 


That degree of market share is likely sustainable for firms with low operating cost structures. Others might find they are not profitable at levels below about 30 percent.


Monday, December 26, 2022

FTX is Like Enron Broadband, WorldCom: Fraud Amidst the Hype--and Ultimate Reality--of Big Next Things

Most commentators have likened the scandal over FTX to the Ponzi scheme run by Bernie Madoff. 


Some of us see more analogies to Enron Broadband and WorldCom. In both cases, promising, hyped technology-based businesses mixed with hyper-aggressive accounting and outright fraud, ultimately leading to bankruptcies whose contagion spread to the rest of the industries they touched. 


Enron’s collapse, along with that of WorldCom, led to jail terms for CEOs and exposed an inflated, oversupplied capacity market that took a good decade to work off. Doubtless, the FTX scandal has caused some spillover in the blockchain and cryptocurrency spheres that likewise will take time to work off, just as excess capacity roiled the transport business for years.  


But Enron, WorldCom and FTX also illustrate the excessive optimism that often accompanies big shifts of business and technology revenue opportunities. For Enron and WorldCom, the driver was the emergence of the internet. For FTX, it was cryptocurrency (not blockchain, per se). 


Back in 1999, broadband was among the hyped phrases that excited investors. Enron also traded on the promise of video streaming, which would fundamentally alter capacity demand. Enron was essentially right about that, if too early. 


Enron arguably was right about other things as well: edge data, interconnection points, content delivery networks and the massive change in global traffic that entertainment video would bring. 


But it was wrong about immediate demand for bandwidth trading and streaming revenues, as well as about partners’ ability to participate and the existence of trading platforms robust enough to handle such trades. 


Enron might also have missed the ability to use interconnection as a substitute for trading. These days, it is perhaps not so much capacity that is important as interconnection. And the domains that need to be interconnected are hyperscale app provider data centers and other data centers. 


As it turns out, the source of value is the interconnections, not the capacity as such, even if those two are related. 


WorldCom likewise grew on the back of a furious acquisition spree and ultimately fraudulent financial reporting, as demand simply did not exist for the supply being built. 


Still, Enron was perhaps several decades ahead of the curve in wanting to create capacity trading mechanisms similar to energy trading.


Enron Broadband hoped to create a true trading platform for capacity, a business model where it would make nearly all its money from fees generated by trading, not sales of capacity, as was and remains the connectivity provider model. 


As a business model, that remains an essential foundation for any connectivity business model that is built on “being a platform.” Though the term gets thrown around casually, the platform business model is not the same as the use of the term “platform” in computing. 


For computing ecosystem participants, a platform is simply hardware or software upon which other software can run. By that definition, virtually every internet service provider is a “platform” upon which applications run. 


That does not mean ISPs have platform business models. In a platform business model, revenue is earned by facilitating transactions. Think Amazon or any other e-commerce platform, which enables buyers and sellers to conduct transactions. 


It remains to be seen whether the trading platform operations Enron Broadband envisioned will emerge. To the extent a “platform business model” requires such an exchange, it will have to do so. 


The point is that big frauds in the connectivity business or in any other business have happened at times of fervor over a “big new thing” such as the internet, video streaming or cryptocurrency. 


One has to separate the fraud from the fact and the future.


Value Add, or Core-Plus, Will Get More Attention in Digital Infra

In virtually all phases of the computing, connectivity and software businesses, competition for any product or service eventually shifts to value add. The reason is simple: in highly competitive markets, value-added benefits are one way to create distinctiveness while counteracting the pressure to compete on price.


Value-add also is a strategy used by firms to boost valuations. And we are likely to see more efforts in that regard in the digital infrastructure business. Investors call that a “core-plus” strategy.


Even as digital infrastructure continues to gain a place in alternative asset portfolios built around infrastructure, the near-term climate is challenging. 


In some parts of the digital infra investing business, the emphasis already has shifted to value creation, driven by near-term headwinds that both pressure financial returns and limit exit opportunities. 


Multiple compression also is slowing deal volume, as buyers and sellers cannot agree on valuations. 


source: BCG


Private equity markets have gotten tougher, squeezed by higher interest rates and inflation. That should apply to digital infrastructure as well, translating into fewer deals, smaller deals, some distress sales and more consolidation within the industry. Fewer exits also will happen, in some part because the initial public offering window has closed, eliminating a possible exit path. 


And, as always, rising interest rates have an inverse relationship to asset prices. Just as the costs of financing have risen, asset values have plunged along with financial returns. 


So it’s a buyer’s market, once sellers have adjusted to multiple compression and buyers have prepared for volatility. 

 

source: PwC


As happens in other markets, a shift in asset multiples leads to disagreements over valuation that mean fewer deals. Some believe that, despite reduced deal flow, infrastructure assets under management could still grow by 2025, suggesting that a rough period is likely in store for 2023. As already has been the case, profits likely will be harder to come by in the meantime. 


Value-creation mechanisms should differentiate above-average and average returns, says PwC.


Sunday, December 25, 2022

30% Lower FTTH Costs Change Payback Models

Some parts of the U.S. digital infrastructure market will get a boost from new federal funds to support fiber access networks. Those subsidies might mean a reduction of perhaps 30 percent in the capital investment needed to build new access networks. 


Such subsidies are part of a wider movement by internet service providers to reduce the capital expense of building advanced fixed access networks, and efforts by governments to incentivize deployment.


Co-investment, by definition, spreads risk and cost for any single investor, even if it boosts customer adoption only by single digits in some cases. Fiber-to-home never has been an easy business case, and it is dramatically more challenging in any market with at least two rivals.


For that reason, many investors (operators and financial entities alike) believe the optimal business case is for an owner of the first fiber-to-home network in any area, using a wholesale model that encourages most or all of the other contenders to lease capacity rather than build their own infrastructure.


For large operators in urban markets as well as small operators in rural markets, much depends on cost containment, as revenue increases are tough in competitive markets. But capex is the first hurdle.


According to some small community or co-op internet service providers, the total cost to build fiber-to-home systems in rural Vermont is about $26,000 per mile, including drops and customer installs for six customers per plant mile. 


Assuming 12 potential customers per plant mile, that implies a take rate of 50 percent by the end of the third year of sales. Other studies suggest a per-mile cost closer to $56,000, with 22 potential customer locations per mile. 


That implies a per-location cost of about $2,545 and a per-customer cost of $5,091, with monthly revenues possibly in the $50 range, assuming take rates of 50 percent. 


If the free cash flow ratio is about 13 percent (after payroll, taxes and all other cash expenditures)--assuming a rural internet service provider has the same cash flow margin as a telco--and if capital can be obtained at five percent interest rates while cash flow increases three percent per year, there is never a payback on the investment.  
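Those assumptions can be checked with a simple simulation of a financed build. The sketch below is purely illustrative, using the figures cited above ($5,091 per customer, $50 a month in revenue, a 13 percent cash flow margin, five percent borrowing costs, three percent annual cash flow growth); the function and its structure are an assumption of this sketch, not a published model.

```python
# Illustrative payback simulation: a per-customer investment financed at
# five percent, repaid from free cash flow that grows three percent a year.

def payback_years(capex, annual_cash_flow, interest=0.05, growth=0.03,
                  max_years=100):
    """Return years until the financed balance is repaid, or None if never."""
    balance = capex
    cash_flow = annual_cash_flow
    for year in range(1, max_years + 1):
        balance = balance * (1 + interest) - cash_flow
        if balance <= 0:
            return year
        cash_flow *= 1 + growth
    return None

annual_revenue = 50 * 12            # $600 per customer per year
cash_flow = 0.13 * annual_revenue   # about $78 per year at a 13% margin

# Interest on $5,091 at five percent (~$255 a year) exceeds the ~$78 of
# annual cash flow, so the balance only grows: there is never a payback.
print(payback_years(5091, cash_flow))  # None
```

At that margin the debt service alone outruns cash flow, which is the sense in which "there is never a payback."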


Traditionally, that shortfall in rural areas has been covered by subsidies of one sort or another, plus rigid cost controls. 


If the cash flow ratio (not including operating expenses) is about 29 percent, payback takes at least 21 years. Subsidies that effectively reduce invested capital expense by 30 percent help. 


In the above case, assume the ISP has a subsidy of 30 percent, resulting in a per-location cost of $1,782 and a per-customer cost of $3,564. Then the payback period drops to 16 years. 


Keep in mind, payback means that the owner only has reached breakeven on the cost of the network. 


A tier-one ISP able to leverage scale in its buying costs would fare better. Assume per-location investment of about $800 and a per-customer cost of about $1,900, including drop cost and installation of $300 per location (others might note that installation costs can range up to $600 per location).


Assume recurring revenue of about $80 a month (roughly $960 a year), with the same 29 percent cash flow margin, borrowing costs of five percent and three percent annual revenue increases. Assume a 50-percent take rate. 


If annual cash flow then is about $278, the payback period (breakeven) is about six years. In practice, the payback period is probably a bit longer, as this analysis assumes the telco gets 50 percent take rates, while present take rates are in the 40-percent range. 


If one uses the 40-percent take rate, then per-location costs of $800 work out to about $2,000 for customer capex, plus $300 for installation, for a total of about $2,300 per customer. Then payback is about 7.5 years, based on cash flow. 
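If payback is defined more simply, as cumulative cash flow (ignoring debt service) equaling the per-customer investment, the year counts above can be reproduced in closed form. This is a sketch under that assumption; the helper function is illustrative, not a standard industry formula.

```python
import math

def simple_payback(capex, annual_cash_flow, growth=0.03):
    """Years for cumulative cash flow, growing at `growth`, to recover capex."""
    # Solve capex = cash_flow * ((1 + g)**n - 1) / g for n.
    return math.log(1 + capex * growth / annual_cash_flow) / math.log(1 + growth)

# Rural case: $5,091 per customer, $174 annual cash flow (29% of $600)
print(round(simple_payback(5091, 174), 1))  # ~21.3 years
print(round(simple_payback(3564, 174), 1))  # ~16.2 years with a 30% subsidy

# Tier-one case: $278 annual cash flow (29% of $960)
print(round(simple_payback(1900, 278), 1))  # ~6.3 years at a 50% take rate
print(round(simple_payback(2300, 278), 1))  # ~7.5 years at a 40% take rate
```

The closed form closely matches the 21-year, 16-year, six-year and 7.5-year figures cited in the text.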


A large telco also has other upside, though, serving business accounts that boost average revenue per account. A large telco with its own mobile operations also benefits from the ability to use the fiber network to support its mobile operations as well. 


Of course, that is cash flow, not profit. Cash flow is working capital, not profit. But in many cases free cash flow margin is about equivalent to profit margin. 


Still, the point is that any new subsidies that lower upfront capital expense by 30 percent are going to positively affect the payback model. And the payback model supports profit generation. 


If an ISP can get funds from the government to defray as much as 30 percent of capex, many projects that might not have been undertaken would be feasible. 


Friday, December 23, 2022

AT&T Gigapower Joint Venture Raises Questions

The Gigapower joint venture between AT&T and BlackRock is one more illustration of how the local connectivity business model is evolving. First of all, the venture will operate on a wholesale basis outside AT&T’s core fixed network footprint. AT&T will be an anchor tenant on the network.


The open access network initially will target about 1.5 million locations outside the 21-state AT&T fixed network footprint. AT&T might not traditionally have been a fan of wholesale local access, but the capital requirements to build out networks in 29 states where it has no existing fixed network operations are daunting.  


Cost sharing appears to be the way AT&T has concluded it must operate to expand its own retail operations in those 29 states, as a fixed network services provider. 


T-Mobile also is reportedly looking at some form of joint venture to start building its own fixed network capabilities. Cable One also looks to use joint ventures to fund new ISP footprint outside its current footprint. In the United Kingdom, Virgin Media O2 likewise has chosen to create a joint venture to build new facilities outside its current footprint.  


Other service providers are taking other steps to boost capacity and internet access revenues outside their core regions. Verizon is using fixed wireless for that purpose, as is T-Mobile, which historically has had zero fixed network assets able to provide customers with internet access. 


Many independent ISPs are building their own networks as well. The point is that huge amounts of capital are required to expand fiber-to-home networks and it no longer appears ISPs can do so by themselves. 


In the mobile segment of the business, though facilities-based competition has been the norm, there are some moves towards single-network patterns where wholesale access to a common platform is viewed as the only way, or the best way, to ensure rapid uptake of 5G and future mobile platforms. 


Difficult business models for facilities-based competition are part of that analysis. 


The growing joint venture movement in the fixed networks business also suggests a model change. At least where it comes to building out-of-region networks, full network ownership might not be viewed as the best strategy. But if the alternative is full wholesale, which might not be viewed so favorably, either, the alternative of owning some of the infrastructure might be viewed as a reasonable compromise. 


That hybrid approach--own some of what you need or sell--could be an important developing trend, compared to the alternative of “own 100 percent” of what you need or sell. Even if desired, competitive market dynamics might make that solution unobtainable. 


Business strategies that are more asset-light have been proposed and considered for some time. In some markets, structural separation creating a wholesale-only model sets the ground rules. In the mobile segment of the business, asset disposals have become common, as mobile operators conclude they can monetize some of their infrastructure without sacrificing competitiveness.  


The broad issue is how far this reevaluation of asset value can go. It is one thing to spin off tower assets. It is another to use joint ventures to expand into new geographies. It might be quite something else to conclude that the actual access network provides so little value that it can be procured using wholesale mechanisms, and that network ownership confers less competitive advantage than it once did. 


It is too early to say a tipping point, in that regard, has been reached. Out of region, capital requirements are large enough that partial ownership might be the only alternative. 


In region, leading access providers still prefer to own their core access infrastructure. But change is happening. How much change is possible is the next question.


Thursday, December 22, 2022

In the End, Value Matters More than Physical Media

We often assume that use of fiber-to-home networks is mostly a matter of supply: build the networks and demand should be obvious. In other words, build it and they will come. It is rarely that simple. Even where FTTH is available, take rates are lower than many of us would have predicted. 


In many European markets as well as the United States, take rates approach 40 percent of homes passed, and that only after a few years’ worth of marketing. Analysys Mason analysts estimated take rates at about 41 percent in Europe in 2020, for example.  


source: Analysys Mason 


AT&T has approached 40 percent take rates for its FTTH footprint in 2022. In some markets, including the United States, cable operators have 70 percent of the installed base and have, for two decades, gotten nearly all the net new account additions. FTTH purchases are lower because other viable options exist. 


The point is that customer demand also matters, not simply supply. Policymakers might well content themselves with policies that make FTTH available, or that make capacity available, irrespective of the platform. 


Rarely, if ever, are goals set for specific levels of adoption. That makes sense, if one assumes the government policy interest is precisely in ensuring availability of services, not forcing people to buy. It literally is up to internet service providers to make the case for value.  


source: Statista 


But physical media and platform statistics are only so valuable. What arguably matters more are actual value propositions, as only a small percentage of potential customers actually buy the fastest-available or most-expensive service packages. 


According to Openvault, even where gigabit-per-second service is widely available, only about 15 percent of U.S. households actually buy it. Most customers buy service at lower speeds. As headline speeds increase to multi-gigabit ranges, that will likely remain the case. At first, only single-digit percentages of customers will buy the fastest tier of service. 


Most customers will continue to buy service someplace in the middle: neither the slowest nor the fastest speeds. 


At some point, where it comes to optical access infrastructure, we will stop focusing on availability and we will pay more attention to the actual services customers buy over that platform, or any other competing platforms. 


In the end, physical media will not matter. We will care about demand for capacity at various levels, and the value and revenue that generates.


Fateful Choices, Unforeseen Outcomes

We often do not see clearly enough that when the global communications industry chose TCP/IP as its next-generation network architecture, it also chose a business model based on layers, permissionless app development and ownership, and an open ecosystem rather than a closed one. 


That was perhaps not intentional. The bigger drivers were the relative cost and ease of operating a network using IP rather than asynchronous transfer mode, its main rival as a solution. 


In doing so, connectivity providers decided all networks would be computing networks. But where is value created on a computing network? By users, at the edge, running applications deemed to provide utility, connecting buyers and sellers, trading partners, friends, relatives, colleagues. 


The network itself is a functional “dumb pipe,” supplying transport. That is essential, to be sure. But value lies in “enabling connections to be made.” Beyond that, the network is not essential for computing functions, application development or deployment. All that happens “at the edge.”


That’s just another way of saying people, companies, institutions or groups can create value, use cases and apps that use communication networks without the permission of the network owners. That is the complete opposite of the case before IP networks were the norm. 


It was inevitable, if largely unforeseen, that most applications and network-delivered products could be, and would be, created and owned by third parties not under the ownership and control of connectivity providers. 


These days, nearly all communications over public and private networks are connectionless, a far cry from the situation four decades ago, when most sessions were connection oriented. That is another way of saying the way we communicate now uses packet switching.


source: TechTarget


Connection-oriented protocols were characteristic of the telephone voice network. Connectionless is the way modern data networking and internet protocol work. 
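The distinction can be illustrated with ordinary sockets. This is a minimal, generic sketch (the addresses are placeholders): a connection-oriented TCP socket must establish a session before data flows, while a connectionless UDP socket simply stamps each datagram with a destination and sends it.

```python
import socket

# Connection-oriented (telephone-style): a session must be established
# first; data then flows over that dedicated connection.
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# tcp.connect(("192.0.2.1", 80))  # handshake sets up the session
# tcp.sendall(b"request")         # data rides the established path

# Connectionless (modern packet networks): no session setup; each
# datagram carries its own destination address and is routed independently.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.sendto(b"hello", ("127.0.0.1", 9999))  # fire-and-forget datagram

tcp.close()
udp.close()
```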


In that regard, and with passage of more than two decades, it might be worth noting that all global public communication networks operate on the same principles as computer networks: connectionless. 


Another way of saying this is to note that communications are essentially edge to edge, and not deterministic in the core transport network. We might also say such networks are essentially dumb, not “smart.”    


More than two decades ago, some debates were held on the merits of “smart” networks versus stupid networks. A smart or “intelligent” network was touted by those who believed it best supported a range of applications where predictability, and therefore quality assurance, was important. 


The “dumb” network was preferred by data professionals, since it was the way computer communications occurred. Back then, the general framing was whether either IP or asynchronous transfer mode was the better choice for wide area and public network communications. 


Local area networks might have used routers or switches to connect LAN segments. So, as applied to the WAN, telcos proposed adding an ATM transport function, partly for quality assurance, partly for capacity. At the time, ATM could support faster speeds than IP could (though obviously that changed). 


Data networking professionals asked “why?” Simplicity is more valuable than determinism, they argued. But all those debates were over technology choices. 

source: Shanker presentation 


The debate seemingly over technology was much more than that, though perhaps little understood at the time. We might argue, persuasively, that the global IP networks are precisely “stupid” or “dumb” networks. In the IP framework, the transport network routes packets. 


In that sense, smart network elements are essential, and used. But all applications are created in an independent manner, even when directly owned and supplied by a connectivity provider. That is simply the way software works these days. 


Connectivity providers can build features that are more deterministic, such as layering on multi-protocol label switching to add more predictable transport quality. But so can edge devices owned by third parties or enterprises directly. 


Virtual private networks can be created to enhance security. But those features can be created either by transport providers or edge hardware and software owned directly by  enterprises. Service businesses can create such networks and make them available to consumers as well. 


Those entities can include--but are not limited to--transport service providers. Even now, most VPNs are created and sold by third parties, not transport service providers. And much of that service provider activity comes from entities separate from the major transport providers. 


So layers, open, permissionless, connectionless, disaggregated, at the edge, over the top and dumb pipe are some of the common realities of the modern computing and connectivity functions and industries. The transport function ideally is transparent to all the other layers and functions. 


But that also has other implications. The value of “communications,” while remaining essential, arguably gets devalued, as sources of value move to the edge, and away from transport layers of the full solutions stack. 


More than two decades after a ferocious attack on the notion that the best network is a “dumb pipe,” isn’t that precisely what we now have, for the most part? 


Granted, one can still argue that service provider apps such as voice and text messaging or video entertainment or home security are applications that still require some amount of “intelligent” control in the “core” of the network.


But most of the bandwidth now carried on any network consists of internet or other Internet Protocol traffic, by definition edge-to-edge routed load. 


Even two decades ago, we might have all learned faster if the debate had been about connectionless versus connection-oriented networking. We might have gotten a better grip on what a “layered” architecture would mean for business models. 


We might have seen the “over the top” business model coming. We might have foreseen what “permissionless” ability to create products and reach users and customers directly would do for, and to, business models across the ecosystem. 


We might have gotten glimpses of new models for value creation and hence monetization. But what seems quite obvious in retrospect was anything but clear at the beginning.


Will Generative AI Follow Development Path of the Internet?

In many ways, the development of the internet provides a model for understanding how artificial intelligence will develop and create value. ...