
Wednesday, November 22, 2023

Is "Fair Share" Really Necessary?

Ignoring for a moment the arguments about network interconnection principles and existing policies that have internet domains compensating each other for unequal traffic flows, do telcos really “need” so-called “fair share” payments by a few hyperscale app providers?


Few would dispute the challenge of monetizing continual investments in capacity on either mobile or fixed networks; in competitive markets, payback is hard. Even so, the industry’s own data suggests there is no urgent business model problem.


Industry sources might argue that profit margins and revenue growth rates for mobile and fixed network telcos are lower than the averages across all other industries.


According to GSMA Intelligence, the average net profit margin for telcos globally was 14.1 percent in 2022, lower than the average net profit margin for all industries, which was 16.9 percent. Likewise, GSMA Intelligence says the average revenue growth rate for telcos globally was 2.2 percent in 2022, lower than the average revenue growth rate for all industries, which was 4.2 percent.


Critics might simply point out that the telecom service provider business has always been a slow-growth, utility-like industry, so low growth rates are neither new nor surprising. Nor are lower-than-“average” profit margins a surprise: each industry has its own growth profile.


And capital-intensive industries, whether generally considered utilities or not, generally have lower profit margins. 


Industry             Revenue Growth Rate (2022)   Capital Intensity
Telecommunications   2.2%                         High
Electrical Power     3.4%                         Very High
Natural Gas          3.8%                         High
Wastewater           2.9%                         Medium
Airlines             2.1%                         High
Railroads            1.9%                         Very High
Shipping             2.5%                         Very High


So yes, connectivity service provider revenue growth rates are low.  But so are growth rates for other capital-intensive industries. Generally speaking, industries with less capital intensity also tend to grow faster. 


Industry                 Revenue Growth Rate (2022)
Technology               6.5%
Healthcare               5.2%
Financial Services       4.8%
Consumer Discretionary   4.3%
Consumer Staples         3.9%
Industrials              3.6%
Energy                   3.4%
Utilities                3.2%
Materials                3.0%
Real Estate              2.8%
Telecommunications       2.2%


Also, with the caveat that growth rates and profit margins can vary substantially between suppliers in different segments of the market, profit margins are not unusually low for service providers in any region. 


Slow revenue growth, as noted previously, has been--and remains--characteristic of telecom services, as is generally true for many other capital-intensive industries. 


Region          Telco Net Profit Margin   Revenue Growth Rate
North America   12.2%                     1.8%
Europe          13.5%                     1.9%
Asia            15.6%                     2.5%
Latin America   12.8%                     2.1%
Africa          9.3%                      1.7%


Tuesday, October 31, 2023

How Will Most Firms Recover AI Usage Costs?

Since most users of cloud-based software-as-a-service offerings already are accustomed to pricing based substantially on usage, generative AI might represent few new pricing issues for those suppliers: if their customers use more, they will pay more.


For cloud computing suppliers, there are at the moment perhaps fewer issues about how to charge for usage than about how to create the high-performance compute fabric that AI in general, and large language models in particular, require.


Entirely different issues confront most other firms that must figure out how to price AI capabilities incorporated into their products. For most firms, AI is a new cost that must be recovered somehow in retail prices charged to customers.


Uncertainty about levels of usage is one variable. But there is no uncertainty that product costs will scale with usage when AI features are built on “cloud computing as a service”: cloud compute is consumed every time an inference operation is invoked.


How to recover the costs of paying for cloud compute therefore is a new question to be answered.


For most firms that will want to use large language models (generative AI), the big issue is how to recover the cost of LLM features used by their customers. So far, the most-common models are (a simple sketch comparing these approaches follows the list):

  • AI is a feature of a higher-priced version of the existing product (higher-cost plan versus standard)

  • AI is a value-added feature with an extra flat-fee cost

  • AI is a feature of an existing product for which there is no direct incremental charge to the user (such as a search or social media or messaging user), but might eventually represent a higher cost to customers (advertisers or marketers buying advertising, for example)

  • AI is a no-charge feature of an existing product, but with usage limits (freemium)

  • AI is a new product with charges that are largely usage-based (GenAI “as a service” offered by infrastructure-as-a-service providers). 
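To make the structural differences concrete, here is a minimal Python sketch of how a customer’s monthly bill might be computed under several of these approaches. All prices, quotas and function names are hypothetical placeholders chosen for illustration, not any vendor’s actual rates.

```python
# Hypothetical monthly-bill calculations for the AI pricing models above.
# All prices and quotas are illustrative placeholders, not real vendor rates.

def premium_tier_bill(base_price: float, ai_tier_uplift: float, has_ai: bool) -> float:
    """AI as a feature of a higher-priced plan: flat uplift, usage-independent."""
    return base_price + (ai_tier_uplift if has_ai else 0.0)

def flat_addon_bill(base_price: float, addon_price: float, addon_enabled: bool) -> float:
    """AI as a value-added feature sold for an extra flat fee."""
    return base_price + (addon_price if addon_enabled else 0.0)

def freemium_bill(base_price: float, queries_used: int, free_quota: int,
                  overage_price_per_query: float) -> float:
    """AI included at no charge up to a usage cap, with paid overage."""
    overage = max(0, queries_used - free_quota)
    return base_price + overage * overage_price_per_query

def usage_based_bill(queries_used: int, price_per_query: float) -> float:
    """AI 'as a service': the bill tracks usage directly."""
    return queries_used * price_per_query

if __name__ == "__main__":
    # The same customer workload (500 AI queries in a month) under each model:
    print(premium_tier_bill(base_price=20.0, ai_tier_uplift=10.0, has_ai=True))    # 30.0
    print(flat_addon_bill(base_price=20.0, addon_price=15.0, addon_enabled=True))  # 35.0
    print(freemium_bill(base_price=20.0, queries_used=500, free_quota=100,
                        overage_price_per_query=0.02))                             # 28.0
    print(usage_based_bill(queries_used=500, price_per_query=0.03))                # 15.0
```

The point of the comparison: only the freemium and usage-based variants let the supplier’s revenue track the customer’s consumption.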


And some software firms might use a few of those models. For example, Microsoft charges for its AI-assisted copilots, including those in Office and GitHub, with prices ranging from $5 to $40 per user per month. 


But some copilots are included with certain enterprise subscriptions, while a number of Microsoft's consumer AI services remain free for now.


Other software product suppliers also must grapple with how to recover costs of supporting AI features used by their customers.


Box includes AI features for business customers subscribed to its Enterprise-Plus tier and above. Each user will have access to 20 queries per month, with 2,000 additional queries available on a company level. Additional usage will require further payment.


Adobe is including “generative credits” with its various Creative Cloud, Express and Firefly services. Starting in November 2023, Adobe will offer additional credits through a subscription plan, with plans starting at $4.99 per month for 100 credits.
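As a rough illustration of how a quota scheme like Box’s might be enforced, the sketch below draws down a per-user monthly allowance first and then a shared company pool; only the 20-query and 2,000-query figures come from the description above, and the function itself is a hypothetical sketch, not Box’s implementation. It also works out the implied per-credit price of Adobe’s $4.99-for-100-credits plan.

```python
# Illustrative enforcement of a per-user quota backed by a shared company pool,
# loosely modeled on the Box scheme described above (20 queries per user per
# month, plus 2,000 extra queries shared company-wide). Hypothetical sketch only.

PER_USER_QUOTA = 20
COMPANY_POOL = 2000

def try_consume_query(user_usage: dict, user_id: str, pool_remaining: int) -> tuple[bool, int]:
    """Return (allowed, updated_pool). Per-user quota is drawn first, then the pool."""
    used = user_usage.get(user_id, 0)
    if used < PER_USER_QUOTA:
        user_usage[user_id] = used + 1
        return True, pool_remaining
    if pool_remaining > 0:
        return True, pool_remaining - 1
    return False, pool_remaining  # further usage requires additional payment

# Adobe-style credits: $4.99 buys 100 generative credits, so roughly $0.05 each.
price_per_credit = 4.99 / 100
print(f"${price_per_credit:.3f} per generative credit")  # $0.050
```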


“Usage” seems to be the area of greatest danger for retailers, who must make key assumptions about the value of AI embedded in core products, and about the cost-recovery mechanism, at a point when suppliers are not yet sure how heavily their customers will use the AI features.


The key danger will be underestimating usage, unless usage is part of the customer AI pricing formula. 


In a market where retail customers use their own hardware, that would not be an issue. 


But in a market reliant on cloud computing, where retail customers use the supplier’s cloud computing resources, usage really does matter, whenever the supplier is in turn paying a cloud services vendor for compute. 
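A quick break-even calculation shows why. Assume, purely for illustration, a flat $10 monthly AI fee and a cloud cost of two-tenths of a cent per inference; the margin flips negative once a customer’s usage passes the break-even volume.

```python
# Hypothetical break-even arithmetic for a supplier that charges a flat monthly
# AI fee but pays its cloud provider per inference. Every number here is an
# assumption for illustration, not a real price.

flat_monthly_fee = 10.00          # what the customer pays for the AI feature
cloud_cost_per_inference = 0.002  # what the supplier pays per inference call

breakeven_inferences = flat_monthly_fee / cloud_cost_per_inference
print(breakeven_inferences)  # 5000.0 inferences per month per customer

# Margin at two usage levels: light users are profitable, heavy users are not.
for monthly_inferences in (1_000, 20_000):
    margin = flat_monthly_fee - monthly_inferences * cloud_cost_per_inference
    print(monthly_inferences, round(margin, 2))  # 1000 -> 8.0, 20000 -> -30.0
```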


A few hyperscale cloud computing firms (Microsoft, Google, Facebook) will be somewhat protected, as they can use their own infrastructure. But most enterprises will have to pay retail rates for computing services, so volume does matter. 


Although “compute as a service” suppliers are going to face customer pushback as AI compute loads and charges mount, at least they tend to be protected, since most of their services are usage based: customers who use more, pay more.


Businesses that buy “compute as a service” will have to take usage into account. 


Some of those “customer usage and customer pricing” issues might be reminiscent of issues connectivity providers faced in the past, as pricing of their core products shifted away from usage-based models.


Though both flat-fee and usage-based pricing were common in the era when voice was the dominant product, flat fee has been the bigger trend for internet access, interconnection and transport. Within some limits, internet access, for example, tends to be “flat fee” based.


That poses key issues as the volume of usage climbs, but revenue does not. One can see this in network capital investment, for example, where network architects must assume perpetual 20-percent (or higher) increases in usage every year. 
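A quick compounding calculation, using arbitrary index values, shows how fast that gap opens: at 20-percent annual growth, delivered traffic reaches roughly 2.5 times its starting level after five years, while flat-rate revenue stands still.

```python
# Compounding illustration: traffic growing 20% per year against flat revenue.
# The starting traffic and revenue figures are arbitrary index values.

traffic = 100.0   # index of delivered traffic, year 0
revenue = 100.0   # index of revenue, held flat
growth = 0.20

for year in range(1, 6):
    traffic *= 1 + growth
    print(year, round(traffic, 1), revenue)
# After 5 years traffic is ~2.5x the starting level (100 -> ~248.8),
# while flat-rate revenue is unchanged.
```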


In some ways, suppliers that embed AI into their products are going to face similar problems. Though cloud computing suppliers will still largely be able to employ usage mechanisms, many retailers of other products are as-yet unclear about how much usage will eventually happen.


That, in turn, means they are as-yet unsure about long-term cost recovery mechanisms and retail AI pricing. 


Flat-fee pricing will be the simplest solution for the moment, and likely the least-objectionable method from a customer standpoint. Whether that continues to work so well in the future is the issue, if AI inference operations grow in volume as much as some might suspect. 


It will be difficult for most firms to sustain low flat-fee rates as volume escalates. The exceptions are those handful of firms that own their cloud compute infrastructure (Microsoft, Google, Facebook, Amazon, for example). 


Of course, some of those sorts of firms will be able to justify “no fee to use” as well, since they have commerce or advertising revenues supporting many of their core products. That is a luxury few firms will experience.


AI usage is going to be a big issue for most firms. So is the issue of how to recover costs related to supplying that usage.

Saturday, October 28, 2023

Net Neutrality and "Fair Share" are Flatly Incompatible and Contradictory

One might argue that neither network neutrality nor “fair share” payments by a few hyperscale app providers make sense. 


Net neutrality is the principle that internet service providers (ISPs) should treat all data on the internet equally, regardless of the source, destination, or type of content. Without net neutrality, proponents argue, ISPs could block or slow down access to certain websites or services, or charge consumers higher fees for accessing certain content.


“Fair share” is the concept that a few popular app providers should pay telcos a fee for using their infrastructure. You see the contradiction. “Fair share,” by definition, treats bits differently and allows ISPs to charge fees to some sources. 


Note that the principles are mutually exclusive: treating all bits the same--irrespective of source--means no “fair share” payments are allowed. 


Beyond that, even when net neutrality rules are in place, ISPs are allowed to groom traffic during times of congestion: net neutrality does not prevent traffic shaping or congestion control, practices that by definition do not treat all bits equally.


Also, keep in mind that ISPs and internet domains already compensate each other for asymmetrical traffic flows, in the form of interconnection payments.
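As a simplified, purely hypothetical sketch of how such a settlement might be structured, the calculation below charges a fee only when inbound traffic exceeds an agreed multiple of outbound traffic across an interconnect; the 2:1 threshold and per-terabyte rate are invented for illustration and do not describe any actual peering agreement.

```python
# Purely hypothetical sketch of a traffic-ratio-based interconnection settlement.
# The 2:1 threshold and the per-terabyte rate are invented for illustration only.

def settlement_fee(inbound_tb: float, outbound_tb: float,
                   ratio_threshold: float = 2.0,
                   rate_per_tb: float = 0.50) -> float:
    """Charge only for inbound traffic beyond the agreed inbound:outbound ratio."""
    allowed_inbound = outbound_tb * ratio_threshold
    excess_tb = max(0.0, inbound_tb - allowed_inbound)
    return excess_tb * rate_per_tb

# A domain sending 10,000 TB into a network that sends back 2,000 TB:
print(settlement_fee(inbound_tb=10_000, outbound_tb=2_000))  # 3000.0
```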


Critics might note that internet domains--including the targeted hyperscale firms--already pay such fees for traffic asymmetry, even ignoring the fact that it is ISP customers themselves who are asking the hyperscalers to send data to them. 


Hyperscale App Provider   ISP                 Interconnection Payment
Netflix                   Comcast             $1 billion
Netflix                   Verizon             $750 million
Amazon Web Services       Comcast             $1.2 billion
Amazon Web Services       Verizon             $900 million
Microsoft Azure           Comcast             $1 billion
Microsoft Azure           Verizon             $750 million
Google Cloud              Comcast             $800 million
Google Cloud              Verizon             $600 million
Microsoft Azure           AT&T                $75 million per year
Alphabet                  Charter             $100 million per year
Amazon                    AT&T                $150 million per year
Microsoft                 Charter             $75 million per year
Google                    AT&T                $125 million per year
Meta                      Charter             $50 million per year
Meta                      AT&T                $75 million per year
Alphabet                  China Telecom       $150 million per year
Amazon                    NTT                 $125 million per year
Microsoft                 Deutsche Telekom    $100 million per year
Google                    Telefónica          $75 million per year
Meta                      Singtel             $50 million per year
Meta                      Orange              $75 million per year


Some might argue that “fair share” essentially is an effort to recreate the old closed network, where every app on the network had to be approved by the access service provider.

AT&T Intros "Turbo" QoS Features for Mobile Customers

AT&T has introduced quality of service features for its 5G service, intended to offer a more-consistent access experience for gaming, s...