Monday, December 26, 2022

Value Add, or Core-Plus, Will Get More Attention in Digital Infra

In virtually all phases of the computing, connectivity and software businesses, competition for any product or service eventually shifts to value add. The reason is simple: in highly competitive markets, value-added benefits are one way to create distinctiveness while counteracting the pressure to compete on price.


Value-add also is a strategy used by firms to boost valuations. And we are likely to see more efforts in that regard in the digital infrastructure business. Investors call that a "Core-plus" strategy.


Even as digital infrastructure continues to gain a place in alternative asset portfolios, the near-term climate is challenging. 


In some parts of the digital infra investing business, the emphasis already has shifted to value creation, driven by near-term headwinds that both pressure financial returns and limit exit opportunities. 


Multiple compression also is slowing deal volume, as buyers and sellers cannot agree on valuations. 


source: BCG


Private equity markets have gotten tougher, squeezed by higher interest rates and inflation. That should apply to digital infrastructure as well, translating into fewer deals, smaller deals, some distress sales and more consolidation within the industry. Fewer exits also will happen, in part because the initial public offering window has closed, eliminating a possible exit path. 


And, as always, rising interest rates have an inverse relationship with asset prices. As the cost of financing has risen, asset values have fallen, along with financial returns. 


So it’s a buyer’s market, once sellers have adjusted to multiple compression and buyers have prepared for volatility. 

 

source: PwC


As happens in other markets, a shift in asset multiples leads to disagreements over valuation, which means fewer deals. Some believe that, despite reduced deal flow, assets under management for infrastructure could still grow by 2025, suggesting that a rough period is likely in store for 2023. In the meantime, as already has been the case, profits likely will be harder to come by. 


Value-creation mechanisms should differentiate above-average and average returns, says PwC.


Sunday, December 25, 2022

30% Lower FTTH Costs Change Payback Models

Some parts of the U.S. digital infrastructure market will get a boost from new federal funds to support fiber access networks. Those subsidies might mean a reduction in the capital investment required to build new access networks of perhaps 30 percent. 


Such subsidies are part of a wider movement by internet service providers to reduce the capital expense of building advanced fixed access networks, and efforts by governments to incentivize deployment.


Co-investment, by definition, spreads risk and cost for any single investor, even if it boosts customer adoption only by single digits in some cases. Fiber-to-home never has been an easy business case, and it is dramatically more challenging in any market with at least two rivals.


For that reason, many investors (operators and financial entities alike) believe the optimal business case belongs to the owner of the first fiber-to-home network in any area, using a wholesale model that encourages most or all of the other contenders to lease capacity rather than build their own infrastructure.


For large operators in urban markets as well as small operators in rural markets, much depends on cost containment, as revenue increases are tough in competitive markets. But capex is the first hurdle.


According to some small community or co-op internet service providers, the total cost to build fiber-to-home systems in rural Vermont is about $26,000 per mile, including drops and customer installs for six customers per plant mile. 


Assuming 12 potential customers per plant mile, that implies a take rate of 50 percent by the end of the third year of sales. Other studies suggest a per-mile cost closer to $56,000, with 22 potential customer locations per mile. 


That implies a per-location cost of about $2545, and a per-customer cost of $5091, with monthly revenues possibly in the $50 range. Assume take rates of 50 percent. 
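As a quick check, the implied per-location and per-customer figures follow directly from the per-mile cost, density and take-rate assumptions above. A trivial sketch, using the article's own numbers:

```python
cost_per_mile = 56_000        # higher per-mile estimate cited above
locations_per_mile = 22       # potential customer locations per mile
take_rate = 0.50              # assumed share of locations that buy

cost_per_location = cost_per_mile / locations_per_mile
cost_per_customer = cost_per_location / take_rate

print(round(cost_per_location))   # 2545
print(round(cost_per_customer))   # 5091
```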


If the free cash flow ratio is about 13 percent (after payroll, taxes and all other cash expenditures), and one assumes a rural internet service provider has the same cash flow margin as a telco, that capital can be obtained at five percent interest rates and that cash flow increases three percent per year, there is never a payback on the investment.  


Traditionally, that shortfall in rural areas has been covered by subsidies of one sort or another, plus rigid cost controls. 


If the cash flow ratio (not including operating expenses) is about 29 percent, payback is at least 21 years. Subsidies that effectively reduce invested capital expense by 30 percent help. 


In the above case, assume the ISP has a subsidy of 30 percent, resulting in per-location cost of $1782 and per-customer cost of $3564. Then the payback period drops to 16 years. 


Keep in mind, payback means that the owner only has reached breakeven on the cost of the network. 


A tier-one ISP able to leverage scale in its buying costs would fare better. Assume per-location investment of about $800 and per-customer cost of about $1,900, including drop and installation costs of $300 per location (others might note those costs can range up to $600 per location).


Assume annual recurring revenue of about $960 (about $80 a month), with the same 29 percent cash flow margin, borrowing costs of five percent and three percent annual revenue increases. Assume a 50-percent take rate. 


If cash flow then is $278, the payback period (breakeven) is about six years. In practice, the payback period is probably a bit longer, as this analysis assumes the telco gets 50 percent take rates, while present take rates are in the 40-percent range. 


If one uses the 40-percent take rate, then per-location costs of $800 work out to about $2000 for customer capex, plus $300 for installation, for a total of about $2300 per customer. Then payback is about 7.5 years, based on cash flow. 
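The payback figures above are consistent with a simple model: cumulative, undiscounted cash flow, growing three percent per year, accumulated until it covers the per-customer capital investment. The article does not spell out its exact model, so the following is an inference from the numbers, not a definitive reconstruction:

```python
import math

def payback_years(capex_per_customer, annual_cash_flow, growth=0.03):
    """Fractional years until cumulative cash flow, growing at `growth`
    per year (no discounting), equals per-customer capital investment."""
    return (math.log(1 + capex_per_customer * growth / annual_cash_flow)
            / math.log(1 + growth))

# Rural case: $50 a month in revenue, 29 percent cash flow margin.
rural_cf = 50 * 12 * 0.29                 # $174 per year
print(payback_years(5091, rural_cf))      # ~21.3 years without subsidy
print(payback_years(3564, rural_cf))      # ~16.2 years with 30% subsidy

# Tier-one case: $960 a year in revenue, same margin, 40 percent take.
tier1_cf = 960 * 0.29                     # about $278 per year
print(payback_years(2300, tier1_cf))      # ~7.5 years
```

The same formula reproduces the rural 21-year and 16-year figures and the tier-one 7.5-year figure, which suggests the article's payback arithmetic ignores discounting and simply compounds cash flow growth.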


A large telco also has other upside, though, serving business accounts that boost average revenue per account. A large telco with its own mobile operations additionally benefits from the ability to use the fiber network to support those mobile operations. 


Of course, that is cash flow, not profit. Cash flow is working capital, not profit. But in many cases free cash flow margin is about equivalent to profit margin. 


Still, the point is that any new subsidies that lower upfront capital expense by 30 percent are going to positively affect the payback model. And the payback model supports profit generation. 


If an ISP can get funds from the government to defray as much as 30 percent of capex, many projects that might not have been undertaken would be feasible. 


Friday, December 23, 2022

AT&T Gigapower Joint Venture Raises Questions

The Gigapower joint venture between AT&T and BlackRock is one more illustration of how the local connectivity business model is evolving. First of all, the venture will operate on a wholesale basis outside AT&T’s core fixed network footprint. AT&T will be an anchor tenant on the network.


The open access network initially will target about 1.5 million locations outside the 21-state AT&T fixed network footprint. AT&T might not traditionally have been a fan of wholesale local access, but the capital requirements to build out networks in 29 states where it has no existing fixed network operations are daunting.  


Cost sharing appears to be the way AT&T has concluded it must operate to expand its retail fixed network operations in those 29 states. 


T-Mobile also is reportedly looking at some form of joint venture to start building its own fixed network capabilities. Cable One also looks to use joint ventures to fund ISP expansion outside its current footprint. In the United Kingdom, Virgin Media O2 likewise has chosen to create a joint venture to build new facilities outside its current footprint.  


Other service providers are taking other steps to boost capacity and internet access revenues outside their core regions. Verizon is using fixed wireless for that purpose, as is T-Mobile, which historically has had zero fixed network assets able to provide customers with internet access. 


Many independent ISPs are building their own networks as well. The point is that huge amounts of capital are required to expand fiber-to-home networks and it no longer appears ISPs can do so by themselves. 


In the mobile segment of the business, though facilities-based competition has been the norm, there are some moves towards single-network patterns where wholesale access to a common platform is viewed as the only way, or the best way, to ensure rapid uptake of 5G and future mobile platforms. 


Difficult business models for facilities-based competition are part of that analysis. 


The growing joint venture movement in the fixed networks business also suggests a model change. At least where it comes to building out-of-region networks, full network ownership might not be viewed as the best strategy. But if the alternative is full wholesale, which might not be viewed so favorably, either, the alternative of owning some of the infrastructure might be viewed as a reasonable compromise. 


That hybrid approach--own some of what you need or sell--could be an important developing trend, compared to the alternative of owning 100 percent of what you need or sell. Even if full ownership were desired, competitive market dynamics might make it unattainable. 


Business strategies that are more asset light have been proposed and considered for some time. In some markets, structural separation creating a wholesale-only model sets the ground rules. In the mobile segment of the business, asset disposals have become common, as mobile operators conclude they can monetize some of their infrastructure without sacrificing competitiveness.  


The broad issue is how far this reevaluation of asset value can go. It is one thing to spin off tower assets. It is another to use joint ventures to expand into new geographies. It might be quite something else to conclude that the actual access network provides so little value that it can be procured using wholesale mechanisms, and that network ownership confers less competitive advantage than it once did. 


It is too early to say a tipping point, in that regard, has been reached. Out of region, capital requirements are large enough that partial ownership might be the only alternative. 


In region, leading access providers still prefer to own their core access infrastructure. But change is happening. How much change is possible is the next question.


Thursday, December 22, 2022

In the End, Value Matters More than Physical Media

We often assume that adoption of fiber-to-home networks is mostly a matter of supply: build the networks and demand should be obvious. In other words, build it and they will come. It is rarely that simple. Even where FTTH is available, take rates are lower than many of us would have predicted. 


In many European markets, as well as the United States, take rates approach 40 percent of homes passed, and that only after a few years' worth of marketing. Analysys Mason analysts estimated take rates at about 41 percent in Europe in 2020, for example.  


source: Analysys Mason 


AT&T has approached 40 percent take rates for its FTTH footprint in 2022. In some markets, including the United States, cable operators have 70 percent of the installed base and have, for two decades, gotten nearly all the net new account additions. FTTH purchases are lower because other viable options exist. 


The point is that customer demand also matters, not simply supply. Policymakers might well content themselves with policies that make FTTH available, or that make capacity available, irrespective of the platform. 


Rarely, if ever, are goals set for specific levels of adoption. That makes sense, if one assumes the government policy interest is precisely in ensuring availability of services, not forcing people to buy. It is up to internet service providers to make the case for value.  


source: Statista 


But physical media and platform statistics are only so valuable. What arguably matters more are actual value propositions, as only a small percentage of potential customers actually buy the fastest-available or most-expensive service packages. 


According to Openvault, even where gigabit-per-second service is widely available, only about 15 percent of U.S. households actually buy it. Most customers buy service at lower speeds. As headline speeds increase to multi-gigabit ranges, that will likely remain the case. At first, only single-digit percentages of customers will buy the fastest tier of service. 


Most customers will continue to buy service somewhere in the middle: neither the slowest nor the fastest speeds. 


At some point, when it comes to optical access infrastructure, we will stop focusing on availability and pay more attention to the actual services customers buy over that platform, or any competing platform. 


In the end, physical media will not matter. We will care about demand for capacity at various levels, and the value and revenue that generates.


Fateful Choices, Unforeseen Outcomes

We often do not see clearly enough that when the global communications industry chose TCP/IP as its next-generation network architecture, it also chose a business model based on layers, permissionless app development and ownership, and an open ecosystem rather than a closed one. 


That was perhaps not intentional. The bigger drivers were the relative cost and ease of operating a network using IP, rather than asynchronous transfer mode, its main rival as a solution. 


In doing so, connectivity providers decided all networks would be computing networks. But where is value created on a computing network? By users, at the edge, running applications deemed to provide utility, connecting buyers and sellers, trading partners, friends, relatives, colleagues. 


The network itself is a functional “dumb pipe,” supplying transport. That is essential, to be sure. But value lies in “enabling connections to be made.” Beyond that, the network is not essential for computing functions, application development or deployment. All that happens “at the edge.”


That’s just another way of saying people, companies, institutions or groups can create value, use cases and apps that use communication networks without the permission of the network owners. That is the complete opposite of the case before IP networks were the norm. 


It was inevitable, if largely unforeseen, that most applications and network-delivered products could be, and would be, created and owned by third parties not under the ownership and control of connectivity providers. 


These days, nearly all communications over public and private networks are connectionless, a far cry from the situation four decades ago, when most sessions were connection oriented. That is another way of saying the way we communicate now uses packet switching.


source: TechTarget


Connection-oriented protocols were characteristic of the telephone voice network. Connectionless is the way modern data networking and internet protocol work. 
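The distinction is visible directly in socket programming. In a minimal Python sketch (loopback only, purely for illustration), a UDP sender simply addresses each datagram, with no session established first, whereas a TCP sender would have to connect() before any data could flow:

```python
import socket

# Connectionless (UDP): each datagram carries its own destination
# address; nothing like a call setup happens before sending.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))      # let the OS pick a free port
receiver.settimeout(2.0)
addr = receiver.getsockname()

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"hello", addr)        # no connect() handshake required

data, _ = receiver.recvfrom(1024)
print(data)                          # b'hello'

sender.close()
receiver.close()
```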


In that regard, and with passage of more than two decades, it might be worth noting that all global public communication networks operate on the same principles as computer networks: connectionless. 


Another way of saying this is to note that communications are essentially edge to edge, and not deterministic in the core transport network. We might also say such networks are essentially dumb, not “smart.”    


More than two decades ago, some debates were held on the merits of “smart” networks versus “stupid” networks. A smart or “intelligent” network was touted by those who believed it best supported a range of applications where predictability, and therefore quality assurance, was important. 


The “dumb” network was preferred by data professionals, since it was the way computer communications occurred. Back then, the general framing was whether either IP or asynchronous transfer mode was the better choice for wide area and public network communications. 


Local area networks might have used routers or switches to connect LAN segments. So, as applied to the WAN, telcos proposed adding an ATM transport function, partly for quality assurance, partly for capacity. At the time, ATM could support faster speeds than IP could (though obviously that changed). 


Data networking professionals asked “why?” Simplicity is more valuable than determinism, they argued. But all those debates were over technology choices. 

source: Shanker presentation 


The debate seemingly over technology was much more than that, though perhaps little understood at the time. We might argue, persuasively, that the global IP networks are precisely “stupid” or “dumb” networks. In the IP framework, the transport network routes packets. 


In that sense, smart network elements are essential, and used. But all applications are created in an independent manner, even when directly owned and supplied by a connectivity provider. That is simply the way software works these days. 


Connectivity providers can build features that are more deterministic, such as layering on multi-protocol label switching to add more predictable transport quality. But so can edge devices owned by third parties or enterprises directly. 


Virtual private networks can be created to enhance security. But those features can be created either by transport providers or by edge hardware and software owned directly by enterprises. Service businesses can create such networks and make them available to consumers as well. 


Those entities can include--but are not limited to--transport service providers. Even now, most VPNs are created and sold by third parties, not transport service providers. And much of that service provider activity comes from entities separate from the major transport providers. 


So layers, open, permissionless, connectionless, disaggregated, at the edge, over the top and dumb pipe are some of the common realities of the modern computing and connectivity functions and industries. The transport function ideally is transparent to all the other layers and functions. 


But that also has other implications. The value of “communications,” while remaining essential, arguably gets devalued, as sources of value move to the edge and away from the transport layers of the full solution stack. 


More than two decades after a ferocious attack on the notion that the best network is a “dumb pipe,” isn’t that precisely what we now have, for the most part? 


Granted, one can still argue that service provider apps such as voice and text messaging or video entertainment or home security are applications that still require some amount of “intelligent” control in the “core” of the network.


But most of the bandwidth now carried on any network consists of internet or other Internet Protocol traffic, by definition edge-to-edge routed load. 


Even two decades ago, we might have all learned faster if the debate had been about connectionless versus connection-oriented networking. We might have gotten a better grip on what a “layered” architecture would mean for business models. 


We might have seen the “over the top” business model coming. We might have foreseen what “permissionless” ability to create products and reach users and customers directly would do for, and to, business models across the ecosystem. 


We might have gotten glimpses of new models for value creation and hence monetization. But what seems quite obvious in retrospect was anything but clear at the beginning.


Monday, December 19, 2022

Altnets Dispute Openreach Discounts

Openreach discounts for new fiber access contracts, which essentially offer wholesale customers lower prices, are viewed as a competitive threat by facilities-based competitors of Openreach. The new proposed tariffs, for example, only offer discounts in areas where there is competition from rival facilities-based fiber access providers. 


Where Openreach is the sole provider of optical fiber access, the program and discounts do not apply. Some might say that is a typical response by a dominant provider to maintain or gain market share. 


Competitors might see it as a way to drive rivals out of business using price mechanisms. At some point, that might be viewed as predatory behavior by regulators, as Openreach supports perhaps 600 retail ISPs. Up to this point, Virgin Media O2 has been the main facilities-based rival to Openreach, with perhaps 20 percent of the installed base. Up to 75 percent of retail home broadband connections use Openreach. 


source: Ofcom  


Other small facilities-based ISPs have the most to lose. The big problem with building a rival access network is stranded assets. If any provider in a competitive market manages to get 20 percent share, that also means 80 percent of the locations generate no revenue. As a general rule of thumb, 30 percent share is likely a lower boundary for sustainability in a competitive, facilities-based access market. 
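The stranded-asset arithmetic is straightforward: every location passed must be built, but only subscribing locations produce revenue, so the per-passing investment is carried entirely by the take rate. A sketch using a hypothetical $800-per-location build cost (an assumed figure, not from the article):

```python
# Hypothetical per-location-passed build cost (assumption, for illustration).
capex_per_location_passed = 800.0

def capex_per_customer(take_rate):
    # Locations that never subscribe are stranded assets, so paying
    # customers effectively carry the whole per-passing investment.
    return capex_per_location_passed / take_rate

print(capex_per_customer(0.20))   # 4000.0 per customer at 20% share
print(capex_per_customer(0.30))   # about 2666.67 at 30% share
```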


Openreach could make that a difficult target to reach in markets where Virgin Media O2 also operates. 


Saturday, December 17, 2022

Marginal Cost Pricing and "Near Zero Pricing" are Correlated

Digital content and related businesses, such as data transport and access, often face profitability issues because of marginal cost pricing, in a broad sense. Marginal cost pricing is the practice of setting the price of a product equal to the extra cost of producing one more unit of output.


Of course, digital goods are prime examples. What is the additional cost of delivering one more song, one more text message, one more email, one more web page, one more megabyte, one more voice conversation? What is the marginal cost of one more compute cycle, one more gigabyte of storage, one more transaction? 


Note that entities often use marginal cost pricing during recessions or in highly-competitive markets where price matters. Marginal cost pricing also happens in zero-sum markets, where a unit sold by one supplier must come at the expense of some other supplier.


In essence, marginal cost pricing is the underlying theory behind the offering of discounts. Once a production line is started, once a network or product is built, there often is little additional cost to sell the next unit. If marginal cost is $1, and retail price is $2, any sale above $1 represents a net profit gain. 


Of course, price wars often result, changing expected market prices in a downward direction. 


But marginal cost pricing has a flaw: it only recovers the cost of producing the next unit. It does not aid in the recovery of sunk and capital costs. Sustainable entities must recoup their full costs, including capital and other sunk costs, not simply their cost to produce and sell one more unit. 


So the core problem with pricing at marginal cost (the cost to produce the next unit), or close to it, is that the actual recovery of sunk costs does not happen. Sometimes we are tempted to think the problem is commoditization, or low perceived value, and that also can be an issue.
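A toy illustration of that flaw, with hypothetical numbers (in integer cents, and not taken from the article): pricing at marginal cost leaves zero contribution toward sunk costs, while even a small markup, spread over enough units, can recover them.

```python
# Hypothetical digital-goods economics, in integer cents.
SUNK_COST = 100_000_000      # $1 million network build / development
MARGINAL_COST = 1            # 1 cent to deliver one more unit
UNITS = 10_000_000           # units sold

def contribution(price_cents):
    # Cash left over, across all units, after covering marginal cost;
    # this is what is available to recover sunk costs.
    return (price_cents - MARGINAL_COST) * UNITS

# Pricing at marginal cost recovers nothing beyond the next unit's cost.
print(contribution(MARGINAL_COST))        # 0

# An 11-cent markup recovers the sunk investment with room to spare.
print(contribution(12) - SUNK_COST)       # 10000000 (cents of surplus)
```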


One arguably sees this problem in wide area data transport and internet transit pricing, for example. 


Software suppliers have an advantage compared to producers of physical products, as the marginal cost to replicate one more instance is quite low, compared to the cost of adding another production line or facility, or of building additional access networks or devices. A company looking to maximize its profits will produce “up to the point where marginal cost equals marginal revenue.” In a business with economies of scale, increasing scale tends to reduce marginal costs. Digital businesses, in particular, have marginal costs quite close to zero.


source: Praxtime


The practical result is a drop in retail pricing, such as music streaming average revenue per account, mobile service average revenue per user, the cost of phone calls, sending text, email or multimedia messages, the cost of videoconferencing or price of products sold on exchanges and marketplaces. 


Of course, there are other drivers of cost, and therefore pricing. Marketing, advertising, power costs, transportation and logistics, personnel and other overhead do matter as well. But most of those are essentially sunk costs. Many of those costs do not change as one incremental unit is produced and sold. 


Which is why some argue the price of digital goods tends toward zero, considering only production costs. Most of the other price drivers for digital goods might not be related directly to production cost, however. Competition, brand investments and bargaining power can be big influences as well. 


Still, marginal cost pricing is a major reason why digital goods prices can face huge pricing pressure.


Directv-Dish Merger Fails

Directv’s termination of its deal to merge with EchoStar, apparently because EchoStar bondholders did not approve, means EchoStar continue...