Sunday, December 4, 2022

Why Low Market Share Can Doom an Access Provider or Data Center

There is a reason access providers with double the share of their closest competitor lead their markets. 


Profit margin is almost always related to market share or installed base, at least in part because scale advantages can be obtained. Most of us would intuitively suspect that higher share is correlated with higher profits. 


That is true in the connectivity and data center markets as well. 

source: Harvard Business Review 


But researchers also argue that market share confers market power, making leaders less susceptible to price predation from competitors. There is also an argument that the firms with the largest shares outperform because they have better management talent. PIMS (Profit Impact of Market Strategies) researchers might argue that better management leads to outperformance; others might argue that outperformance attracts better managers, or at least those perceived to be “better.”


Without a doubt, firms with larger market shares are able to vertically integrate to a greater degree. Apple, Google, Meta and AWS can create their own chipsets, build their own servers, run their own logistics networks. 

source: Slideserve 


The largest firms also have bargaining power over their suppliers. They may also be more efficient with marketing processes and spending: firms with large share can use mass media more effectively than firms with small share.


Firms with larger share can afford to build specialized sales forces for particular product lines or customers, where smaller firms are less able to do so. Firms with larger share also arguably benefit from brand awareness and preferences that lessen the need to advertise or market as heavily as lesser-known and smaller brands with less share. 


Firms with higher share arguably also are able to develop products with multiple positionings in the market, including premium products with higher sales prices and profit margins. 


source: Contextnet 


That noted, the association between higher share and higher profit is stronger in industries selling products purchased infrequently. The relationship between market share and profit is weaker for firms and industries selling frequently purchased, lower-value, lower-priced products, where buying alternate brands poses little risk. 


The relationships tend to hold whether firms are spending to gain share, are mostly focused on keeping share, or are harvesting products late in their product life cycles. 

source: Harvard Business Review 


The adage that “nobody gets fired for buying IBM”--or Cisco, or any other “safe” product in any industry--is an example of that phenomenon for high-value, expensive and more mission-critical products. 


For grocery shoppers, house brands provide an example of what probably drives the weaker relationship between share and profit for regularly purchased items. Many such products are actual or near commodities, where brand value helps but does not necessarily ensure high profit margins. 


On the other hand, in industries with few buyers--such as national defense products--profit margin can be more compressed than in industries with highly-fragmented buyer bases. 


Studies such as the Profit Impact of Market Strategies (PIMS) project have been looking at this for many decades. PIMS is a comprehensive, long-term study of the performance of strategic business units in thousands of companies across all major industries. 


The PIMS project began at General Electric in the mid-1960s. It was continued at Harvard University in the early 1970s, then was taken over by the Strategic Planning Institute (SPI) in 1975. 


Over time, markets tend to consolidate, and they tend to consolidate because market share is related fairly directly to profitability. 


One rule of thumb some of us use is that the profits earned by a contestant with 40-percent market share are at least double those of a provider with 20-percent share.


And profits earned by a contestant with 20-percent share are at least double the profits of a contestant with 10-percent market share.


This chart shows that for connectivity service providers, market share and profit margin are related. Ignoring market entry issues, the firms with higher share have higher profit margin. Firms with the lowest share have the lowest margins. 

source: Techeconomy  


In facilities-based access markets, there is a reason a rule of thumb is that a contestant must achieve market share of no less than 20 percent to survive. Access is a capital-intensive business with high break-even requirements. 


At 20 percent share, a network is earning revenue from only one in five locations passed. Other competitors are getting the rest. At 40 percent share, a supplier has paying customers at four out of 10 locations passed by the network. 


That allows the high fixed costs to be borne by a vastly larger number of customers. That, in turn, means significantly lower infrastructure cost per customer.
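
To make the arithmetic concrete, here is a minimal sketch of how take rate changes infrastructure cost per paying customer. The per-location cost figure and names are hypothetical; only the proportions matter.

```python
# Minimal sketch: how take rate (share of homes passed that become paying
# customers) changes infrastructure cost per customer. The per-location
# cost figure is an assumption for illustration.

COST_PER_HOME_PASSED = 800.0  # assumed network capex per location passed


def capex_per_customer(take_rate: float) -> float:
    """Spread the fixed cost of passing a location over paying customers."""
    return COST_PER_HOME_PASSED / take_rate


for share in (0.10, 0.20, 0.40):
    print(f"{share:.0%} take rate -> ${capex_per_customer(share):,.0f} per customer")

# 10% take rate -> $8,000 per customer
# 20% take rate -> $4,000 per customer
# 40% take rate -> $2,000 per customer
```

Doubling take rate from 20 percent to 40 percent halves the fixed cost each customer must carry, which is consistent with the rule of thumb that profits roughly double as share doubles.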


Saturday, December 3, 2022

Interconnection Regulation on Cusp of Major Change?

“Who” should be covered by common carrier regulation relating to network interconnection has gotten murkier in the internet era, as have many other older concepts. We used to leave “data services” largely unregulated. Telcos were highly regulated, though less so these days than in the past. 


Debates about “sending party pays” policies for interconnection are an example. In the past, only retail-facing telcos were subject to clear interconnection obligations; internet domains have interconnected on a voluntary basis. 


But “sending party pays” rules extend those obligations to new parties: a few hyperscale content or app providers. That moves beyond public common carrier interconnection and towards rules for internet domains with no obligations to serve the public. 


Such arguments also implicitly raise issues about which regulatory regime ought to hold: common carrier or data networks; internet or “telecom.”


All arguments about universal service and now access network infrastructure upgrades necessarily entail rules about who should pay. To an extent, the answer could be “shareholders.” More often the answer is “customers” but sometimes the answer is “business partners.” 


The point is that universal service or network upgrades typically entail some contribution by all customers of communications networks; payments by service providers (whether they pay or simply collect is an issue); or financial support from taxpayers or shareholders. In some cases, debt holders can wind up paying, especially if a firm goes into voluntary bankruptcy. 


Basically, the argument that a handful of hyperscale app providers should make payments to access providers is a version of the “make business partners pay” argument, even if, in practice, there are no formal business relationships between app providers and ISPs. 


ISPs do have such direct relationships with other internet domains and connectivity providers. Adding a few hyperscale app providers to the interconnection framework essentially treats those hyperscalers as though they were “carriers.” Historically, such agreements were between “telcos” and other public communications service providers. 


Perhaps the difference now is that both ISPs and content domains interconnect--directly or indirectly--with the internet fabric. So one way of framing the “sending party pays” discussion about internet traffic is to view the issue as a new form of debate over interconnection. 


source: Wayback Machine 


The idea is not unprecedented. Internet domains have used both settlement-free peering and transit fees when setting up business relationships. Peering is easy to justify when traffic flows are roughly equivalent. 


It is harder to accomplish when traffic flows are unequal. And that is among the problems with proposals to charge a few hyperscale app providers for unequal traffic exchange volumes.


The bigger question is whether rules for common carriers should be extended to data service providers. In the past, the answer has unequivocally been "no." But that seems to be changing.


Will Social Media Free Speech Be Brought Within the "First Amendment" Orbit?

The shift from analog to digital for all forms of content and communications raises questions we have not had to think about before, namely the misfit between traditional regulatory and legal norms and internet-based “everything.”


The tensions are broad. While most people do not have to think about the difference between “common carrier” or “utility” regulation and realms of life that should not be so regulated, most people seem aware of thorny issues related to media and social media.


Some issues relate to civility, but that necessarily has implications for content moderation. And content moderation requires some imposition of “values” that have potential “freedom of thought” or “freedom of speech” consequences, even if the stated view is simply to control spam, rude and obnoxious or threatening speech. 


source: freespeechhistory.com 


Without content moderation, spam can overrun sites, for example. Without enforcement of civility rules, sites become dangerous places where bullying happens, not just rudeness. 


Worse, even where freedom of speech is guaranteed by law, such laws only protect private actors from government action. The U.S. First Amendment to the Constitution only bars action by government entities. The protections were never meant to constrain what private entities might publish or say.  


So it is not so clear--or easy--to apply the desire for free thought and speech in a practical way to private actors who cannot be compelled to do so. Some might note that a comprehensive theory of free speech protections as applied to government has not been developed. 


A growing concern in some quarters is how freedom of expression is protected not from government action but from the actions of platforms. Indeed, some call for greater restriction of free speech on platforms, in the name of curbing so-called hate speech. Others say the restrictions are not applied equally to all speech, and result in the suppression of some political ideas. 


If we assume that the purpose of the First Amendment is to protect freedom of expression in a democratic society, then new media formats and new platforms can raise new issues. And, as is common, the matter is complicated. 


The First Amendment has generally been interpreted to protect the rights of “speakers.” But the owners of new platforms (social media, in particular) say their users are the “speakers,” not the platforms. Even if jurists wished to extend some First Amendment protections beyond “government” entities, courts would have to decide who the “speaker” is, in order to protect the speaker’s rights. 


In other words, are the users of a platform the speakers, the platform itself, or some combination? Worse, is it the speakers or the audience whose “free thought” rights are to be respected?


Traditionally, citizens are to be protected from government restriction of free speech. 


But the places where “speech” occurs also matter. Public forums--such as public parks and sidewalks--have always been viewed as places where citizens have the right of free speech. 


Nonpublic forums are places where the right of free speech can be limited. Examples are airport terminals, a public school’s internal mail system or polling places. 


In between are limited public forums, where similar restrictions on speech are lawful, especially when applied to classes of speakers. However, the government is still prohibited from engaging in viewpoint discrimination, assuming the class is allowed. 


The government may, for example, limit access to public school meeting rooms to school-related activities. The government may not, however, exclude speakers from a religious group simply because they intend to express religious views, so long as they are in a permitted class of users. 


Those protections have been limited to state action: it is government entities (local, state or federal) that are enjoined from infringing the right of free speech. Protections have not been deemed applicable to private entities.


There has generally been, in other words, no First Amendment right of free speech enforceable against private firms or persons, with some exceptions. 


Common carriers--such as telcos--must allow communications between any users who are willing to pay the tariffs. Telcos cannot censor what those users say. Such regulation--including public accommodation, water and electrical utilities or railroads--is not generally regarded as a direct “free speech” issue, but an issue of commerce.


A common carrier is a person or company that transports goods or people for a fee, the principle being non-discrimination. A common carrier must provide its service to anyone willing to pay its fee, unless it has legitimate grounds for refusal.


If state governments decide to create laws protecting free speech from social media or other private firms, that would at the very least raise an issue: Can the federal government, acting under the guise of the First Amendment, move to restrict state action extending the zone of free speech to include dominant private platforms? 


That might involve a novel regulation of social media platforms as common carriers of a sort. As voice service providers are not, as a rule, allowed to censor what their customers and users may say, so platforms might be barred from such censorship as well. 


That would clearly establish the principle that it is the users--not the platform--who have “free speech” protections as they relate to posted content. Platforms would not surrender their political rights as entities. 


That would plow new ground, but First Amendment law has always evolved in an ad hoc way. It would be a contentious argument, to be sure. In the case of social media platforms, we would have to decide who the speaker is, to determine whose rights are to be protected. 


Alternatively, some cases have essentially concluded that it is the audience--the listeners--whose rights are to be protected. That has happened mostly with radio and TV broadcasting, and to some extent with cable TV regulation. But that is arguably not the general principle. 


Generally, courts have decided it is “speakers” whose rights are to be protected. There are caveats. It has generally been the owners of assets whose rights are protected, in a practical sense: printing press owners, early on; then magazine or newspaper publishers; then radio broadcasters; TV broadcasters; then cable companies and networks.


Social media has not yet been addressed. But issues seem to be mounting. And that generally leads to court cases, which leads to Supreme Court action, which might set new precedents.


Thursday, December 1, 2022

Fibonacci, Pareto and the Connectivity Business

Some mathematical ratios recur so often that they appear throughout nature and business. Fibonacci provides an example. “The Fibonacci sequence is a famous group of numbers beginning with 0 and 1 in which each number is the sum of the two before it. It begins 0, 1, 1, 2, 3, 5, 8, 13, 21 and continues infinitely,” Smithsonian magazine says.


Fibonacci sequences drive the golden ratio, which applies to everything from mollusk shells, sunflower florets and rose petals to the shape of the galaxy. In financial markets, Fibonacci ratios are used by technical traders.


“If you divide the female bees by the male bees in any given hive, you will get a number near 1.618,” notes Investopedia.  “The golden ratio also appears in the arts and rectangles whose dimensions are based on the golden ratio appear at the Parthenon in Athens and the Great Pyramid in Giza.” 


Others note Fibonacci sequences also apply to human anatomy.

source: Smithsonian 
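
As a quick illustration of the math, a short sketch that generates the sequence and prints the ratio of consecutive terms, which converges on the golden ratio of roughly 1.618:

```python
# Sketch: generate Fibonacci numbers and watch the ratio of consecutive
# terms converge on the golden ratio, roughly 1.6180339887.

def fibonacci(n: int) -> list[int]:
    seq = [0, 1]
    while len(seq) < n:
        seq.append(seq[-1] + seq[-2])
    return seq

terms = fibonacci(20)
print(terms[:9])  # [0, 1, 1, 2, 3, 5, 8, 13, 21]

for a, b in zip(terms[2:], terms[3:]):
    print(f"{b}/{a} = {b / a:.7f}")  # printed ratios approach 1.6180340
```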


The Pareto theorem also occurs often in life and business. Most of us are familiar with the 80/20 rule, which suggests that roughly 80 percent of value or outcomes is generated by about 20 percent of actions. Formally, that is the Pareto theorem.


We also tend to see Pareto distributions in global connectivity provider revenue, though the pattern is clearer when looking at net profit rather than gross revenue. As a rule, profits are driven by business accounts rather than consumer accounts; urban areas rather than rural areas; dense parts of cities more than suburbs; and some product lines rather than others.  

source: Techeconomy 


The traditional rule for fixed networks is that service providers make money in urban areas, break even in the suburbs and lose money in rural areas. That arguably remains true for mobile networks as well. 


If usage is a measure of implied profit, then mobile operators might earn as much as half their “revenue” from about 10 percent of sites. Perhaps a total of 30 percent of all cell sites handle 80 percent of traffic, and hence, revenue. 


Another way to think about it is to consider any single user’s traffic. For any single user, perhaps half of all usage occurs in just one macrocell. About 80 percent of usage happens in the top few cells, while the remaining 20 percent is spread across 28 additional cells. Again, we see a Pareto-style distribution: a handful of cells handles 80 percent of any single user’s traffic. 


source: T-Mobile
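
A minimal sketch of that per-user distribution, using illustrative shares approximating the pattern described above rather than measured data, shows how quickly the cumulative total reaches 80 percent:

```python
# Illustrative sketch of the per-user usage pattern described above:
# half of usage in one macrocell, most of the rest in a few more cells,
# and the final 20 percent spread thinly across 28 additional cells.
# The shares are assumptions for illustration, not measured data.

usage_share = [0.50, 0.20, 0.10] + [0.20 / 28] * 28  # 31 cells, sums to 1.0

cumulative = 0.0
for rank, share in enumerate(sorted(usage_share, reverse=True), start=1):
    cumulative += share
    if cumulative >= 0.80 - 1e-9:  # tolerance for floating-point rounding
        print(f"Top {rank} cells carry {cumulative:.0%} of this user's traffic")
        break

# Top 3 cells carry 80% of this user's traffic
```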


One possible application of Pareto thinking is the choice between owning cell sites and leasing capacity. A competitive supplier--such as a cable operator--might conclude it is best to own the sites where half to 80 percent of usage happens. That might include home and work site coverage, which are fixed usage locations. 


That is especially true if a cable operator can use its own existing network to support such cell sites. For the 20 percent of usage that happens when people are out and about, it makes sense simply to buy wholesale capacity. 


All service providers essentially try to do this when segmenting their customer bases. If most of the profit comes from one or just a few customer segments, it makes sense to focus on those segments. The segmentation can be by geography, customer type, customer volume, product line, or demographic or psychographic profile. 


The point is simply that mathematical patterns exist in the business. 


"Access" Remains 80% of Total Connectivity Network Cost

Terrestrial connectivity networks typically require as much as 80 percent of total invested capital in the access portion of the network, not the long-haul or other facilities. For mobile operators and internet service providers, nearly all the capital cost lies in the access network (exclusive of spectrum investments, which would skew the ratio even further toward access as the cost driver). 


The same holds true for operating costs, where perhaps 85 percent of total operating costs lie in the access network. 


Some focus only on WAN costs and interconnection, especially when arguing that “ISP prices are too high,” but the real costs for any ISP lie in the access network. Cost per gigabyte of transferred data might be low, but that is a relatively insignificant cost of doing business for ISPs operating in denser markets. 


As always, interconnection and other transport costs are a higher cost item for ISPs in rural areas, with low subscriber density and distance from the nearest internet onramp (interconnection) point. 


That also appears to be the case for energy consumption, as most of the active devices operate in the access plant. A 4G or 5G network, for example, might consume 73 percent of total energy in the radio access network. 


The wide area core network might consume about 13 percent of total energy, while data centers and servers might consume nine percent of total energy. 

source: ENEA 


The broader observation is that access is the expensive part of a connectivity network, whether looking at capital investment or operating expense.


Tuesday, November 29, 2022

When Price Comparison Shopping Rules, So Do Hidden Fees

Though sellers have good reasons for using hidden fees, customers generally dislike them, since the total price is greater than the top-line advertised price. Though annoying for buyers, there are reasons for such tactics. 


In competitive markets where price comparison engines operate, price obviously is among the key differentiators. Where search engines rank products by price, it always makes sense to advertise the lowest possible retail price, if competitors do so. 


Mandating all-inclusive pricing by government edict is one way of improving transparency while not putting any single supplier at a disadvantage. 


In the absence of such mandates, “below the line” price add-ons are simply a requirement for sales processes highly dependent on automated search mechanisms. Buyers may not like it, but so long as price comparison engines get used, lead price matters. Advertised prices have always mattered, but arguably matter more now that online price comparisons are such an important part of consumer buying behavior. 


In fact, such additional fees have represented as much as a quarter of total costs for some bundled-service packages in recent years (video plus internet access, for example). Equipment rental charges or other “cost recovery” fees often are included in total recurring prices. 


source: Consumer Reports 
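
If fees amount to a quarter of the total bill, a quick sketch with hypothetical numbers shows how much the advertised price can understate what customers actually pay:

```python
# Hypothetical illustration: if below-the-line fees make up a quarter of
# the total bill, the advertised price understates what customers pay.
# Both figures are invented for illustration.

advertised = 75.00           # top-line advertised monthly price
fee_share_of_total = 0.25    # fees as a fraction of the total bill

total_bill = advertised / (1 - fee_share_of_total)
fees = total_bill - advertised
print(f"Advertised ${advertised:.2f} -> total bill ${total_bill:.2f} (${fees:.2f} in fees)")

# Advertised $75.00 -> total bill $100.00 ($25.00 in fees)
```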


One might think it irrational for firms to willingly compete on such a basis, but there is an upside to balance the downside of price pressure: greater reach. Allowing price comparison engines to use retail pricing data does lead to price competition. But it also gives any single retailer greater prospect reach. As always, top competitors want to be available at retail wherever their top rivals also are present.   


Of course, service providers often levy fees of various types, below the line, for precisely the same reason. We might disagree with the practice, or the reasons for the fees, but suppliers are responding to the implications of price increases in an era when price discovery is easier than ever. Add-on fees are an “easy” way to maintain top-line posted prices while actually increasing them. 


When inflation exists, prices must be raised, since input prices climb. Consumers rarely claim to “like” their video providers or the many additional fees they see on their bills. But “broadcast TV” fees are a reflection of the rising prices video distributors must pay station owners for the right to carry their programming. 


The same logic applies to sports programming costs that keep rising annually as well. Since 1983, for example, the cost to carry National Football League programming has increased at an eight percent compound annual growth rate. Subscription TV suppliers are distributors: they do not generally own the content they bundle for their customers. So when those costs rise, they must be passed on to customers, just as energy and transportation, storage or marketing costs must ultimately be passed on to customers. 


Few--if any--businesses could survive eight-percent annual cost increases for an essential input to their retail products without adjusting prices.
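
A worked example makes the point, assuming the roughly 8 percent compound annual growth rate cited above over the 1983-2022 span, with the starting cost normalized to 1:

```python
# Worked example: the cumulative effect of an 8 percent compound annual
# growth rate on an input cost, normalized to 1 at the start.

def compound_growth(rate: float, years: int) -> float:
    """Multiple by which a cost grows at a constant annual rate."""
    return (1 + rate) ** years

years = 2022 - 1983  # 39 years
print(f"{years} years at 8% CAGR -> {compound_growth(0.08, years):.1f}x the 1983 cost")

# 39 years at 8% CAGR -> 20.1x the 1983 cost
```

In other words, a cost input growing at that rate roughly doubles every nine years, and ends up about 20 times its starting level over the period.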


Sunday, November 27, 2022

Digital Infra is Anything but Supercore

Digital infrastructure investing, like alternative asset investing generally, now has moved to embrace core plus concepts. The nomenclature comes from the traditional real estate investment concepts used by buyers of alternative assets. 


Some might say “core” assets are akin to Class A office buildings, which are the most prestigious in a city. Class B buildings might be those which offer less prestige value than Class A, and might be located outside a downtown core area. Class C buildings tend to be aimed at more industrial or service-oriented businesses and smaller businesses. 

source: Bullpen


It might be more accurate to characterize the different classes of assets by risk and return profiles. There is less a notion of “prestige,” as in office space, and more the fact that digital infra is a newer class of alternative or real estate assets, albeit with generally higher exposure to economic cycles and less business moat protection. 


As the range of infrastructure assets has grown, some now see more gradations of risk, adding “supercore” as a category of regulated assets, to distinguish them from other assets that are not formally rate or price regulated. 


By definition, digital infra assets would not be considered supercore, as data center, cell tower, fiber network and other “telecom” network assets are unregulated in terms of pricing and do not offer guaranteed rates of return. 


In a private equity or other institutional investment context, “core” infrastructure has meant gas or electricity utilities that have business moats, offer predictable cash flow and are relatively resistant to economic fluctuations. These businesses also tend to be rate regulated, with low capital appreciation, longer asset holding cycles and lower yield. 


That profile fits requirements of large pension funds and other institutional investors including sovereign wealth funds, for example. 


“Core plus” adds new classes of assets such as airports, seaports and roads that have more exposure to economic cycles but higher capital gain potential, balanced by potentially less predictable cash flow. 


Data centers, specialized fiber networks, edge computing facilities, fiber-to-home networks, towers and hosting facilities are considered “value add” assets. Capital gain potential is higher, asset holding periods might be shorter and yield is expected to be lower. 


“Opportunistic” assets might be digital infra in developing markets or distressed assets. 

source: Mercer


Yet others will use a related taxonomy: “super core” (core); core (what others might call core plus); and core plus (what others might call value add). 


As always, people will disagree about the boundaries between asset classes or the placement of specific assets within a class. Some argue that “core” infrastructure includes assets which are primarily income-producing. That view groups toll roads, bridges and hospitals in the same grouping with gas and electrical utilities. 


Also, some now view “core infrastructure” as including mobile towers, cloud storage and data centers. In that typology, some would say core plus/value add is a different category. 


The classification scheme matters to the extent that it guides investment thinking about what is “core” and what is “core plus” or “value add.” 


The obvious areas of disagreement are that some assets viewed as “core plus” by some will be considered “core” by others. In a digital infra context, that means deciding whether data centers, tower networks and fiber access networks are “core” or “core plus.” 


A similar definitional issue is whether those sorts of assets are “core plus” or “value add.” Perhaps nobody is going to confuse those categories with “opportunistic” investments, which involve some level of asset distress or uncertainty. 


Digital infra, as viewed by many, has become an “essential” sort of infrastructure akin to roads, airports, seaports, electricity and natural gas supply. If so, then cell towers, fiber networks and data centers are “core” assets. So “core plus” and “value add” are the adjacencies to be explored, aside from the occasional “opportunistic” play.  


As a practical matter, digital infra investors tend to view those assets as core. Historically, infrastructure investors have looked for investments that:

  • are real, capex-intensive assets 

  • are essential services 

  • offer steady and stable returns

  • are protected from economic cycles 

  • provide cash yields

  • have barriers to entry

  • are typically within energy, telecom or transport.


Most observers would likely agree that “core plus” means any additional new asset classes or assets with less utility-like characteristics. That might mean assets with shorter-term contracts, no rate regulation and some exposure to economic cycles, though generally regarded as “essential” facilities and functions. 


With the emergence of “supercore” asset baskets that consist exclusively of assets with regulated pricing, it seems logical to include data centers, tower networks, edge computing facilities or fiber networks as “core.”


Will Video Content Industry Survive AI?

Virtually nobody in business ever wants to say that an industry or firm transition from an older business model to a newer model is doomed t...