Tuesday, October 24, 2023

If Connectivity is Not the Answer, What Is?

Though internet of things connectivity and fixed broadband services will be important revenue drivers for the large enterprise and small and medium-sized enterprise (SME) segments over the next five years, information products (software, apps, computing) "will be the main driver of business revenue growth" for telcos serving enterprise and SME customers, according to analysts at Analysys Mason. "Connectivity alone is unlikely to be enough to sustain a B2B operator."


One might expect prices to increase in line with inflation, but that has not happened. Though inflation rates peaked at between 12 percent and 15 percent in many Western European countries and between 7 percent and 9 percent in North America in 2022, average revenue per user has in many cases declined for business mobile services and business fixed data services, Analysys Mason argues.


Part of the reason is long-term contracts signed with enterprise customers, which limit the ability to raise prices. 


The advice to “increase value” is to be expected. Precisely how that might be accomplished is always the issue. It might be noted that some revenue increases have been captured by others in the value chain, not connectivity providers. And some revenue sources, such as “private cloud,” might include large contributions from “connectivity” rather than “cloud computing” operations and services. 


Product | Providers | Revenue contribution (percent of total firm revenue)
Private cloud computing | Verizon, AT&T, Deutsche Telekom | 5-10%
Data center services | AT&T, Equinix, Digital Realty | 3-5%
Unified communications and collaboration (UCC) solutions | Verizon, AT&T, Cisco | 2-3%
Managed security services | Telefónica Tech, Verizon, AT&T | 1-2%


If, as the Analysys Mason analysts argue, "connectivity alone" is unlikely to be enough to sustain a B2B operator, the issue is how much value add can be created and, if that is not sufficient, how connectivity providers can add other lines of business.


That has historically proven to be an extreme challenge, with a few exceptions. In fact, one might point out that mobile service, which now drives 60 percent to 80 percent of global connectivity revenues, was itself the product of a search for new revenue sources.


Aside from mobility services, internet access and home broadband have proven to be the most significant revenue drivers in the consumer space, along with video subscriptions.


Successful information technology initiatives have been very rare, if they can be said to exist at all. Mobile operators failed to become major mobile app stores. Most telcos have not had robust success operating data centers or becoming cloud computing suppliers.


Mobile operators have hopes of creating a role in edge computing. A few mobile operators have been able to build substantial mobile money businesses. 


Decade | Initiatives | Examples
1980s | Pagers, cellular service, data communications | AT&T, BellSouth, MCI
1990s | Internet access, long-distance service | America Online, Prodigy, AT&T WorldNet
2000s | Broadband internet, VoIP, video on demand | Verizon, Comcast, Time Warner Cable
2010s | Cloud computing, data centers, mobile apps | Amazon Web Services, Microsoft Azure, Google Cloud Platform
2020s | 5G, Internet of Things | AT&T, Verizon, T-Mobile


It probably is one thing to recommend "creating more value." It is quite another to suggest connectivity providers can do so by entering new lines of business, even if new lines of business have mattered enormously within connectivity itself, with mobility and internet access the prime examples of transformative products.


Mobile money and entertainment video have helped, especially in consumer customer segments. But some would say it remains to be seen how well telcos might fare as suppliers of other information technology products to business customers. 


Saturday, October 21, 2023

AI is Overhyped, But Nobody Wants to Bet Against It

Is AI overhyped? Certainly. But that is a different matter from expectations of huge value. It is only suggestive, but adding up the 2023 estimated value of several key types of technology and comparing it to what might happen by 2030 puts the revenue upside noted in the studies shown below in the neighborhood of $10 trillion annually; a rough tally follows the table.


Technology | Estimated revenue in 2023 (USD) | Estimated potential annual impact of AI by 2030 (USD) | Study (name, date, venue)
Personal computing | $1.5 trillion | $5.3 trillion | The Economic Impact of Artificial Intelligence: Evidence from Patent Data (2021, Science)
Cloud computing | $500 billion | $1.2 trillion | The AI Economy: Opportunity, Risk, and Renewal (2021, McKinsey Global Institute)
Internet e-commerce | $5.5 trillion | $10 trillion | The Future of Commerce: How the Rise of AI is Changing the Way We Shop (2022, Forrester Research)
Search | $200 billion | $300 billion | The Impact of AI on the Global Search Market (2023, Gartner)
Social media | $150 billion | $250 billion | The AI Social Media Revolution (2023, eMarketer)
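
For the arithmetic behind that rough $10 trillion figure, here is a minimal Python tally of the table values, restated in USD trillions; it simply sums the two columns and takes the difference.

```python
# Rough tally of the table above, with values restated in USD trillions.
# The figures are those quoted in the cited studies; the arithmetic only
# illustrates how a roughly $10 trillion annual upside emerges.
revenue_2023 = {
    "personal computing": 1.5,
    "cloud computing": 0.5,
    "internet e-commerce": 5.5,
    "search": 0.2,
    "social media": 0.15,
}
impact_2030 = {
    "personal computing": 5.3,
    "cloud computing": 1.2,
    "internet e-commerce": 10.0,
    "search": 0.3,
    "social media": 0.25,
}

total_2023 = sum(revenue_2023.values())   # about 7.85
total_2030 = sum(impact_2030.values())    # about 17.05
print(f"2023 total: ${total_2023:.2f} trillion")
print(f"2030 total: ${total_2030:.2f} trillion")
print(f"Implied upside: ${total_2030 - total_2023:.2f} trillion per year")
# Implied upside: about $9.2 trillion per year, in the neighborhood of $10 trillion.
```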


Apparently, nobody yet has suggested that artificial intelligence will have negative productivity effects, or that it will fail to produce innovation, new firms and new products, or to enhance the value of existing industries, firms and products.


It's Hard to Quantify LLM Costs

Virtually everyone believes artificial intelligence is going to drive important changes in employment, work processes, applications, use cases, processing operations, power consumption and data center requirements, though precisely how much change will occur, and when, remains unclear.


More practically, firms and entities are having to estimate how much it will cost to create generative AI models and then draw inferences from those models. 


The answer, inevitably, is that "it depends": on what one wishes to accomplish; which engines and which compute platform are used; how much data is scraped; how much customization of a generic model is required; the number of users of the model; the complexity of the tasks the model supports; the amount of data needed to train the model; and the cost of computing resources.
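
To make those dependencies concrete, here is a purely illustrative sketch of how such factors might be rolled into a rough annual figure. Every rate and quantity below is a hypothetical placeholder, not a quoted price.

```python
# Purely illustrative: roll the major "it depends" cost drivers into rough
# annual buckets. All rates and volumes are hypothetical placeholders.

def estimate_genai_cost(
    gpu_count: int,                   # GPUs used for training or customization
    gpu_hours_each: float,            # hours each GPU runs
    gpu_hourly_rate: float,           # assumed $ per GPU-hour
    users: int,                       # number of users of the model
    tokens_per_user_per_month: float,
    inference_rate_per_1k: float,     # assumed $ per 1,000 tokens of inference
    months: int = 12,
) -> dict:
    """Combine training/customization and inference costs into rough annual totals."""
    training = gpu_count * gpu_hours_each * gpu_hourly_rate
    inference = users * tokens_per_user_per_month * months / 1_000 * inference_rate_per_1k
    return {"training": training, "inference": inference, "total": training + inference}

# Placeholder numbers only, to show the shape of the calculation:
print(estimate_genai_cost(
    gpu_count=256, gpu_hours_each=720, gpu_hourly_rate=2.50,
    users=5_000, tokens_per_user_per_month=200_000, inference_rate_per_1k=0.01,
))
```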


Context length, which determines the amount of information the LLM can consider when formulating an output, also affects pricing. If a generative AI model had a context length of 10 tokens, for example, it would consider the 10 previous tokens when generating the next one.


The context length of GPT-4 is 8,192 tokens for the 8K variant and 32,768 tokens for the 32K variant. This means GPT-4 can consider up to 8,192 or 32,768 tokens of preceding context when generating the next token, depending on the variant.
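
A minimal sketch of what a fixed context window means in practice; the token list here is a stand-in, not output from a real tokenizer.

```python
# Conceptual sketch: a fixed context window limits what the model "sees".
# Strings here stand in for real tokenizer output.

CONTEXT_LIMIT = 8_192  # tokens for the GPT-4 8K variant (32,768 for the 32K variant)

def trim_to_context(tokens: list[str], limit: int = CONTEXT_LIMIT) -> list[str]:
    """Keep only the most recent `limit` tokens; anything older is invisible to the model."""
    return tokens[-limit:]

conversation = ["tok"] * 10_000        # pretend a long conversation is 10,000 tokens
visible = trim_to_context(conversation)
print(len(visible))                     # 8192: the oldest ~1,800 tokens fall out of context
```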


The cost for using the GPT-4 8K context model API is about $0.03 per 1,000 tokens for input and $0.06 per 1,000 tokens for output. 


Using the 32K context model, the cost is $0.06 per 1,000 tokens for input and $0.12 per 1,000 tokens for output.  
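
At those rates, the cost of a single call can be computed from the input and output token counts. A small sketch follows; the token counts in the example are arbitrary.

```python
# Cost of a single GPT-4 API call at the per-1,000-token rates quoted above.
PRICES = {  # $ per 1,000 tokens: (input, output)
    "gpt-4-8k":  (0.03, 0.06),
    "gpt-4-32k": (0.06, 0.12),
}

def call_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    in_rate, out_rate = PRICES[model]
    return input_tokens / 1_000 * in_rate + output_tokens / 1_000 * out_rate

# A prompt of 2,000 tokens producing a 1,000-token answer:
print(f"${call_cost('gpt-4-8k', 2_000, 1_000):.2f}")   # $0.12
print(f"${call_cost('gpt-4-32k', 2_000, 1_000):.2f}")  # $0.24
```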


Application | GenAI cost | Study
Marketing | $10,000 - $100,000 | Gartner
Sales | $100,000 - $1 million | Forrester
Customer Service | $1 million - $10 million | McKinsey
Product Development | $10 million - $100 million | PwC


And the cost of building a model, offered as a platform, is not the same as the cost for an entity to use that model when it is offered as a subscription, a pay-per-use service or a bundled feature.


Certainly, everyone expects model building, training and customization costs to come down over time. But the costs appear to be significant, whether enterprises choose to build using their in-house resources or use a cloud computing “as a service” provider. 


Business size | Cost of building a generative AI model on premises | Cost of building a generative AI model in the cloud
Fortune 500 | $10 million - $100 million | $5 million - $50 million
Mid-market | $1 million - $10 million | $500,000 - $5 million
Small business | $100,000 - $1 million | $50,000 - $100,000


The costs of building generic models will likely, over time, mostly be the province of LLM platform suppliers, as few entities will have the financial resources to build and train proprietary models. 


Cost estimate | Key assumptions | Study name | Date of publication | Publishing venue
$10M-$100M | 100B parameters, trained on 100TB of text data, using 1,000 GPUs for 1 month | | 2022 | OpenAI
$1B-$10B | 1T parameters, trained on 1T TB of text data, using 10,000 GPUs for 1 year | | 2023 | Google AI
$10B-$100B | 10T parameters, trained on 10T TB of text data, using 100,000 GPUs for 10 years | | 2024 | Microsoft AI
$10 million | 175B parameter model, trained on 100TB of text data, using 1,024 GPUs for 1 month | "The Cost of Training a Large Language Model" by Brown et al. | 2020 | arXiv
$100 million | 1 trillion parameter model, trained on 100PB of text data, using 10,240 GPUs for 1 month | "Scaling Laws for Neural Language Models" by Chen et al. | 2020 | arXiv
$1 billion | 10 trillion parameter model, trained on 10EB of text data, using 100,240 GPUs for 1 month | "The Cost of Training a Large Language Model" by Webber | 2023 | Forbes
$1 billion | 100 trillion parameters, 1 million GPUs | "The Cost of Large Language Models: A Scaling Law Analysis" | 2022 | Nature


For most entities, the relevant cost question will be “how much will it cost to use an existing platform,” including the cost of adapting (customizing) a generic model for a particular enterprise or entity. 


For example, the cost of generating inferences when using "as a service" providers is based on the number of tokens. A generative AI token is a unit of text or code that a generative AI model uses to generate new text or code. Tokens can be as small as a single character or as large as a word or phrase.


As a simplified rule, the number of tokens can be likened to the number of words in a generated response.
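
As a rough illustration, a commonly cited rule of thumb for English text is about 0.75 words (roughly four characters) per token. The small helper below applies that heuristic; treat the result as an estimate, not a measurement from a real tokenizer.

```python
# Rough word-to-token conversion for English text, using the commonly cited
# heuristic of ~0.75 words per token. Real tokenizers vary by model and language.

WORDS_PER_TOKEN = 0.75  # rule-of-thumb assumption, not a measured value

def words_to_tokens(word_count: int) -> int:
    return round(word_count / WORDS_PER_TOKEN)

def tokens_to_words(token_count: int) -> int:
    return round(token_count * WORDS_PER_TOKEN)

print(words_to_tokens(750))    # about 1,000 tokens
print(tokens_to_words(1_000))  # about 750 words
```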


OpenAI offers a variety of generative AI models as a service through its API. Licensing costs range from $0.00025 to $0.006 per 1,000 tokens for inference.


Google AI Platform offers a variety of generative AI models as a service through its Vertex AI platform. Licensing costs range from $0.005 to $0.02 per 1,000 tokens for inference.


Microsoft Azure offers a variety of generative AI models as a service through its Azure Cognitive Services platform. Licensing costs range from $0.005 to $0.02 per 1,000 tokens for inference.
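
Using those quoted per-1,000-token ranges, a quick sketch shows what a given monthly volume might cost across providers; the 10-million-token workload below is an arbitrary assumption for illustration.

```python
# Compare a workload's monthly inference bill across the per-1,000-token price
# ranges quoted above. The monthly token volume is an assumed figure.

PRICE_RANGES = {  # $ per 1,000 tokens: (low, high), as quoted above
    "OpenAI API":       (0.00025, 0.006),
    "Google Vertex AI": (0.005,   0.02),
    "Microsoft Azure":  (0.005,   0.02),
}

MONTHLY_TOKENS = 10_000_000  # assumed workload

for provider, (low, high) in PRICE_RANGES.items():
    lo_cost = MONTHLY_TOKENS / 1_000 * low
    hi_cost = MONTHLY_TOKENS / 1_000 * high
    print(f"{provider}: ${lo_cost:,.2f} - ${hi_cost:,.2f} per month")
```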


Cost estimate | Key assumptions | Study name | Date of publication | Publishing venue
$0.006 per 1,000 tokens | Inference on a single GPU | "Pricing Large Language Models as a Service" | 2022 | arXiv
$0.02 per 1,000 tokens | Inference on multiple GPUs | "The Economics of Large Language Models" | 2023 | Medium
$0.05 per 1,000 tokens | Inference on a TPU | "Comparing the Cost of Different Hardware Platforms for Large Language Models" | 2023 | arXiv
$0.02 per 1,000 tokens | GPT-3.5 model | "The Economics of Large Language Models" | 2023 | Medium
$0.10 per 1,000 tokens | GPT-4 (8K) model | "The Cost of Large Language Models: A Scaling Law Analysis" | 2022 | Nature
$0.40 per 1,000 tokens | GPT-4 (32K) model | "The Cost of Large Language Models: A Scaling Law Analysis" | 2022 | Nature


The point is that the cost of deploying generative AI for any particular business function is highly variable at the moment.


Will Video Content Industry Survive AI?

Virtually nobody in business ever wants to say that an industry or firm transition from an older business model to a newer model is doomed t...