Sunday, June 9, 2024

Will All PCs Eventually be AI PCs?

It’s arguably too early to be confident about market share forecasts for AI PCs. Will they eventually become the standard PC, as smartphones now are the standard phones purchased by consumers? Or will AI PCs remain specialist tools, to a greater or lesser degree?


The issue is less the “ability to use AI” and more the value of local processing. 


One might liken the market prospects to the split between consumer-grade or work-grade general-purpose PCs and the “workstations” used by some, but not most, buyers. 


High-end workstations are used for 3D rendering, video editing, and complex simulations. They prioritize raw local processing power, high-performance graphics cards and large memory capacities. 


AI PCs will feature specialized hardware components such as Tensor Processing Units or Neural Processing Units alongside traditional CPUs and GPUs to accelerate AI computations. But that does not necessarily speak to “why” such machines would add value over PCs that lack those elements. 


At least for the moment, forecasters see a gradual shift of buying patterns. 


Year | AI PC Market Share (%) | General-Purpose PC Market Share (%) | Sources
2024 (estimated) | 2-5 | 95-98 | Canalys, Grand View Research
2025 | 5-10 | 90-95 | Canalys
2026 | 8-15 | 85-92 | Gartner
2027 | 12-20 | 80-88 | Gartner
2030 | 20-30 | 70-80 | IDC


AI PCs might be useful for developers or users working on AI training, inference and other AI-dependent workloads. For most consumers, that might include image processing, speech-to-text, language translation or gaming. 


Some professionals who are AI developers, researchers or data scientists might have work-related reasons that make AI PCs a good choice, if local processing adds value, compared to remote processing. 


It is not clear how much of the video editing or 3D rendering market might be affected. “Professional” use cases might not be supported, but casual and user-generated content might be. 


There arguably is more debate about the PC market than the smartphone market, though. AI already makes general sense for image processing on smartphones as well as speech-to-text. But additional use cases requiring on-board processing will have to be developed. 


The argument for local AI processing on PCs is more complex. AI could personalize software functions, optimize battery usage, or enhance security measures. But it is not certain those tasks must be handled by on-board processors.


AI-specific hardware could significantly improve device performance for tasks such as photo and video editing, gaming, or augmented reality applications, to the extent those features are deemed useful on PCs. 


Battery life might be a constraint for smartphones and laptops, though. And, for most users, additional AI device cost will have to be balanced against “new” and valued use cases. It will take some time for those use cases to develop. 


At least in principle, one might envision a new category of AI PCs halfway between workstations and general-purpose PCs. One might also envision local AI processing eventually migrating to most PCs as a standard feature. 


Saturday, June 8, 2024

Are Large Language Models Investor "Picks and Shovels" or Not?

Aside from all else, artificial intelligence is an investment theme. A study by Morgan Stanley, for example, argues that AI's materiality to investment theses has increased significantly, affecting at least 446 stocks, worth $15 trillion, in 2024. 


That is logical enough, given the importance of AI enabling technology including semiconductors, servers, cloud computing, data centers and energy sources for data centers, as well as AI implications for the functionality of enterprise and consumer software. 


In 2023 and 2024, much of that financial impact has centered on generative AI, eclipsing machine learning and natural language processing, for example, which already are used by many business and consumer applications. 


Facial recognition, for example, uses ML algorithms to unlock user smartphones, while digital voice assistants such as Siri and Alexa use AI, NLP and ML to understand commands and carry out a range of tasks. 


AI algorithms are used in e-commerce to make personalized shopping recommendations; in clinical trials to improve drug discovery and efficiency and elsewhere across an array of industries to automate a host of back-office tasks.


As often is the case, suppliers of “picks and shovels” were among the early winners. Though we might quibble about which firms are in that category, many would say suppliers of graphics processor units, acceleration chips and memory, as well as cloud computing providers and even electrical power companies, are among the firms and industries supplying “picks and shovels.”


There arguably is greater disagreement about whether large language models are enablers, and therefore in the “picks and shovels” category, or instead belong among the many other categories of AI beneficiaries. 


Category: AI Enablers (Picks & Shovels)

Industries/Firms:

  • Chipmakers (Nvidia, AMD, Intel)

  • Cloud computing providers (Amazon Web Services, Microsoft Azure, Google Cloud Platform)

  • Data labeling companies (Labelbox, Scale AI)

  • OpenAI (research & development)

  • Electrical utilities

Description: Foundational infrastructure and tools needed to train and develop AI models in general. They don't necessarily focus on specific applications of AI.

Category: Generative AI Beneficiaries

Industries/Firms:

  • Content creation (Adobe, Unity, Unreal Engine)

  • Drug discovery (Insilico Medicine, Generative Bio)

  • Materials science

  • Marketing and advertising (Anyword, Copy.ai)

  • Social media

  • Search

  • Business services

  • Law

  • Transportation

  • Retailing

  • Information technology

Description: Industries and firms that leverage generative AI for specific applications. For example, generative AI can create new marketing copy, design elements, or discover new materials.


Large language models are where much of the disagreement arises. Large language models can be viewed both as beneficiaries of AI and, to a lesser extent, as enablers (picks and shovels). GPUs, memory, power, data centers and cloud computing enable AI to run. 


In that sense, AI platforms and apps are beneficiaries, not picks and shovels. LLMs are a product of advanced AI techniques such as deep learning and natural language processing, but are applications in their own right. And AI applications are not generally considered to be picks and shovels. 


Still, many could argue that LLMs are enablers to the extent they support the creation of software features and applications. LLMs will power search, social media, many forms of app personalization, smartphone image processing, speech-to-text functions, text summarization, notetaking and so forth. 


So LLMs are where the distinction between AI “picks and shovels” enablers and AI beneficiaries blurs. 


The picture is further muddled because some beneficiaries of AI also are developers of LLMs that generate revenue directly. Think of Microsoft’s Copilot, Google’s Gemini or other GenAI apps sold as subscriptions. GenAI is available both as a feature of Microsoft and Alphabet (Google) products and as a subscription-based application. 


In some instances the LLM is an enabler or feature; in other cases it is an app. For some firms, GenAI and LLMs are simultaneously enablers (picks and shovels) and beneficiaries, supplying capabilities and features across most products and processes. 


That matters for equity investors as well as all sorts of firms, industries and products.


Friday, June 7, 2024

Consumer "Internet Downtime" is Hard to Assess

It is nearly impossible to measure the amount of uptime or downtime any single consumer experiences with internet-based experiences in a given year. Still, we frequently see estimates of app, network or device availability that separately look fairly reasonable, usually expressed in minutes per year that a particular app is unavailable, for example. 


Those figures include planned downtime for major software upgrades or maintenance. And, generally speaking, most of us, most of the time, might agree that our internet-based app experiences are robust. We normally expect them to be there, and to work.


Value Chain Segment | Uptime (%) | Downtime (Minutes per Year)
Network outages (average ISP) | 99.95 | 262.8
Application downtime (average) | 99.90 | 525.6
Device issues (estimated) | 99.50 | 2,628
Total value chain availability (in series) | ≈99.35 | ≈3,400

Published estimates of total consumer internet downtime vary:

  • Akamai State of the Internet/Connectivity Report: 2,880 to 5,760 minutes (2-4 days)

  • ThousandEyes: 4,380 to 7,200 minutes (3-5 days)


There are a few caveats. Downtime for planned outages might be greater than we assume, because we are sleeping when those planned outages happen. Also, some apps, networks or devices might be down, but not in use by any single user at any single time, so the “outages” are not experienced. 


In other words, if an outage happens to an app or service I am not using, I do not notice it. A thousand apps I do not use can have lots of outages; I'd never notice. Conversely, I am quite apt to notice an outage of my most-favorite and most-used apps. 
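A minimal sketch of that intuition: the downtime a user can be expected to notice is each app's annual downtime weighted by how much the user actually uses it. App names, downtime figures and usage fractions here are hypothetical.

```python
# Sketch: an outage only "counts" for a user if it overlaps with use of
# that app. Expected noticed downtime weights each app's annual downtime
# by the fraction of time the user spends in it. All figures hypothetical.

apps = [
    # (name, annual downtime in minutes, fraction of time user spends in it)
    ("messaging", 500, 0.05),
    ("email",     300, 0.02),
    ("niche_app", 2000, 0.0),   # never used: its outages go unnoticed
]

def expected_noticed_downtime(app_list):
    """Expected minutes/year of outage the user actually encounters,
    assuming outages are uncorrelated with the usage pattern."""
    return sum(down * usage for _, down, usage in app_list)

noticed = expected_noticed_downtime(apps)
print(f"Expected noticed downtime: {noticed:.0f} minutes/year")
```

Note that the heavily-used apps dominate the result, which is the point: a thousand unused apps contribute nothing to experienced downtime.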


Also, end user experience is not simply a matter of app availability, but of all the other components in series, including one’s devices, the internet access and transport networks, data center servers and local power, for example. 


Outages of some magnitude from all of those sources must be combined to derive a full picture of internet experience availability, across all experiences any single user has in a year. 
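The series calculation described above can be sketched in a few lines: end-to-end availability is the product of the per-component availabilities, and the residual converts to downtime minutes. The component figures are illustrative assumptions, not measurements.

```python
# Sketch: end-to-end availability of an internet experience is the
# product of the availabilities of every component in series (device,
# access network, application, local power). Figures are illustrative.

MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

component_availability = {
    "device": 0.995,
    "access_network": 0.9995,
    "application": 0.999,
    "local_power": 0.998,
}

def end_to_end_availability(components):
    """Multiply per-component availabilities (series system)."""
    total = 1.0
    for a in components.values():
        total *= a
    return total

total = end_to_end_availability(component_availability)
downtime_minutes = (1 - total) * MINUTES_PER_YEAR

print(f"End-to-end availability: {total:.4%}")
print(f"Expected downtime: {downtime_minutes:.0f} minutes/year")
```

The design point: even when each component separately looks excellent, multiplying several "good" availabilities yields noticeably more annual downtime than any single component suggests.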


During a recent planned 48-hour local power outage, I could not use the internet in any AC-powered context. The apps, networks and devices might have been available, but local AC power was not available. Uptime for all the apps, devices and networks might have been quite high, yet my experience of “outage” happened anyhow. 


“Observability” therefore matters. Outages I do not encounter do not matter. Outages caused by local power outages do matter, even when there is no problem with the apps, devices or networks enabling internet experiences. 


The point is that the end user experience of internet-enabled experiences is conditional and cumulative; a function of what one does and doesn’t do, and when, across the full accumulated range of possible failure points.


"Apple Intelligence" is Coming

Apple's Worldwide Developers Conference (WWDC) this month should provide an indication of what Apple is working on in the generative AI area. Apple Intelligence is said to be the branding of Apple’s AI offerings. But it seems clear enough Apple will focus on on-board processing capabilities related to smartphone apps.


Given the importance of Siri, it would not come as a surprise to hear something about Siri AI features. Other areas where Apple’s on-board processing approach could support AI could include summarization features, photo editing or chatbot features. 


All that would make sense with the arrival of iOS 18, the next major update to the iPhone operating system. 


But lots of apps should get a boost, including:

  • Apple Music: Auto-generated playlists and smarter song transitions.

  • Apple News: AI-generated news article summaries.

  • Health: New features powered by AI.

  • Keynote and Pages: AI-powered features for auto-generating slides in Keynote and writing faster in Pages.

  • Mail: Incoming email categorization and suggested replies, as well as email thread summaries and text composition assistance.

  • Messages: Per-word effects, suggested replies, custom emoji and message recaps.

  • Notes: A built-in audio recording tool and audio transcriptions.

  • Notifications: AI-generated notification summaries.

  • Photos: AI-powered photo retouching.

  • Safari: A browsing assistant that can summarize web pages, and a "Web Eraser" tool.

  • Spotlight: More intelligent search results and improved sorting.

  • Voice Memos: Audio transcriptions.


Though Apple is widely considered to be “behind” in generative AI leadership, that perception is likely misplaced. Recall Apple’s traditional approach to technology innovation: it rarely is the “first” to deploy any new technology. Instead, it has excelled at packaging new technology in better, more user-friendly or elegant ways. 


In fact, it would have been a shock had Apple emerged early as a generative AI leader. 


Where Apple should emerge as a force is in on-device AI, given its leadership in devices and device functions, where AI already has been deployed to support smartphone operations related to imaging and cameras, user voice input, voice-to-text translation and facial recognition. 


Use Case | Description
Facial recognition (unlocking phones) | Faster and more secure authentication compared to server-based verification.
Image/video processing (filters, editing) | Real-time filters and effects applied directly on the device, without needing to upload and download media files.
Voice recognition (offline assistants) | Offline access to voice commands for basic tasks like setting alarms or making calls.
Sensor data analysis (fitness trackers) | Real-time processing of biometric data for personalized health insights and fitness coaching.
AR/VR applications (overlays, interactions) | Enhanced responsiveness and lower latency for a more immersive augmented or virtual reality experience.


The advantages of on-device edge processing include lower latency, battery life improvements and privacy and security gains, as well as the ability to work when internet connectivity is lost. 

On-Device AI Processing Advantage | Value
Faster response times | No need to send data back and forth to the cloud, leading to quicker results, especially for real-time applications.
Lower power consumption | Processing data locally reduces reliance on network connectivity, saving battery life on mobile devices.
Improved privacy, security | User data stays on the device, minimizing privacy concerns and potential security risks associated with cloud storage.
Offline functionality | Works even without an internet connection, essential for situations with limited access.


Thursday, June 6, 2024

How Big a Problem are Industry Revenue Growth Rates?

In most industries, it is probably safe to argue that under-par performance is the existential problem, not in-line performance. Executives don't get fired unless their outcomes are sub-par, compared to industry averages.


Is low connectivity service provider revenue growth a problem? It might seem obvious that it is a problem, but whether it is an existential problem is probably the better way to frame the question. Different industries have different growth rates, profit margins and roles in the value chain. Noting such differences might be highly useful for firm and industry strategy.


It might simply be unreasonable to expect traditionally-slow-growing industries to alter those patterns, just as we might be skeptical about firms in traditionally fast-growing industries that do not seem to exhibit the “industry standard” growth rates. 


The exception is if a given firm in a given industry is able to deploy or acquire assets in different parts of an industry value chain that have distinctly-different growth characteristics. That is the logic behind the “move up the stack” argument. 


As a management professor once advised us, “if you have a choice, choose a fast-growing industry.” The reason is that similar amounts of effort and skill (the same effort by a single individual in different settings) will produce different outcomes when applied to declining, slow-growing or fast-growing industries and firms. 
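The professor's point can be made concrete with simple compounding: the same starting revenue grows very differently under slow and fast annual rates. The 3% and 12% rates below are illustrative, chosen from the slow (2-4%) and fast (10-15%) ends of typical industry growth ranges.

```python
# Sketch: identical effort applied in industries with different growth
# rates compounds into very different outcomes. Rates are illustrative.

def compound(revenue, annual_growth, years):
    """Revenue after `years` of constant annual growth."""
    return revenue * (1 + annual_growth) ** years

start = 100.0  # arbitrary starting revenue
slow = compound(start, 0.03, 10)   # utility-like industry
fast = compound(start, 0.12, 10)   # e-commerce-like industry

print(f"Slow-growth industry after 10 years: {slow:.1f}")
print(f"Fast-growth industry after 10 years: {fast:.1f}")
```

After a decade, the fast-growing setting ends up more than twice the size of the slow-growing one, from the same starting point.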


source: KPMG 


The point is that annual growth rates are a “problem” in any industry only when the trend worsens and growth slows over time. But that is not necessarily an issue management can fix, in any one company in any single industry. Over time, profit margins or growth rates in many industries have slowed, in part because of market saturation and competition. 


Indeed, one would be hard pressed to find an industry whose revenue growth rates have not declined over time. 


Industry Sector | Historical Average Growth Rate (%) | Projected Long-Term Growth Rate (%)
Technology | 8-10 | 5-7
Healthcare | 5-7 | 4-6
Consumer Staples | 3-4 | 2-3
Consumer Discretionary | 5-6 | 3-5
Financials | 6-8 | 3-5
Industrials | 4-6 | 2-4
Materials | 5-7 | 3-5
Energy | 4-6 | 2-4
Utilities | 3-5 | 2-4
Telecommunications | 5-7 | 2-4
Retail (except e-commerce) | 2-4 | 1-2
E-commerce | 10-15 | 7-10
Education | 4-6 | 3-5

And with the caveat that different segments and firms might have different growth rates, industries with utility-like characteristics show the same slower revenue growth rates as seen in most other industries. 


Industry Sector | Historical Average Growth Rate (%) | Long-Term Growth Rate (%)
Telecommunications | 5-7 | 2-4
Cable | 4-6 | 1-3
ISPs (Internet Service Providers) | 6-8 | 3-5
Satellite Communications | 8-10 | 4-6
Electric Utilities | 3-5 | 2-4
Water Utilities | 3-4 | 2-3


The point is that slow, or slowing, growth rates are not necessarily an existential problem. Expected growth rates might simply reflect the near-universal slowing of industry growth rates over time. 


And to the extent that utility-type industries and connectivity businesses traditionally have growth rates in the middle of all industries, continued “slow growth” is not unexpected, nor unusual, nor an imminent threat. 


That is simply the nature of the business. To be sure, not every provider in every segment has the same growth rate. But the reasons for such divergences are hard--if not impossible--to replicate. Younger firms tend to grow faster than older firms. Non-dominant firms sometimes get help from regulators to increase competition with dominant firms. Some segments of an industry grow faster than others. 


Sure, every executive would prefer faster growth rates over slower growth. But there are rational limits to how much that is subject to managerial skill.


No "One Size Fits All" for Generative AI

There is no “one size fits all” generative artificial intelligence strategy. Instead, successful innovations will build on existing supplier...