
Monday, November 18, 2024

AI and Quantum Change

Retail investors are hearing a great deal about “artificial intelligence winners” these days, and much of the analysis is sound enough. There will be opportunities for firms and industries to benefit from AI growth.


Even if relatively few of us invest at the angel round or are venture capitalists, most of us might also agree that AI seems a fruitful area for investment, from infrastructure (GPUs; GPU as a service; AI as a service; transport and data center capacity) to software. 


Likewise, most of us are, or expect soon to be, users of AI features in our e-commerce; social media; messaging; search; smartphone; PC and entertainment experiences.


Most of those experiences are going to be quite incremental and evolutionary in terms of benefit. Personalization will be more intensive and precise, for example. 


But we might not experience anything “disruptive” or “revolutionary” for some time. Instead, we’ll see small improvements in most things we already do. And then, at some point, we are likely to experience something really new, even if we cannot yet envision it.


Most of us are familiar, from experience, with the idea of “quantum change,” a sudden, significant, and often transformative shift in a system, process, or state. Think of a tea kettle on a heated stove. As the temperature of the water rises, the water remains liquid. But at a certain point, the water changes state and becomes steam.


Or think of water in an ice cube tray, being chilled in a freezer. For a long time, the water remains a liquid. But at some definable point, it changes state and becomes a solid.


That is probably how artificial intelligence will unfold: hundreds of evolutionary changes in apps and consumer experiences that finally culminate in a qualitative change.


In the history of computing, that “quantity becomes quality” process has played out repeatedly as new technologies reach a critical mass. Some might say these quantum-style changes result from “tipping points,” where the value of some innovation triggers widespread usage.
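As a rough illustration of how critical mass works, consider a toy adoption model. The sketch below uses the classic Bass diffusion formula, in which adoption is driven partly by external influence and partly by imitation of existing users; every parameter value here is hypothetical, chosen only to show long, slow accumulation followed by a sudden takeoff.

```python
# Toy "tipping point" illustration using the classic Bass diffusion model.
# All parameter values are hypothetical, chosen only to show long, slow
# accumulation followed by rapid, nonlinear takeoff.

def bass_adoption(p=0.003, q=0.4, market=1.0, periods=30):
    """p: coefficient of innovation (external influence, e.g. marketing);
    q: coefficient of imitation (word of mouth, network effects)."""
    adopters = 0.0
    shares = []
    for _ in range(periods):
        # New adoption each period comes from external influence (p) plus
        # imitation pressure proportional to the installed base (q).
        new = (p + q * adopters / market) * (market - adopters)
        adopters += new
        shares.append(adopters)
    return shares

for year, share in enumerate(bass_adoption()):
    print(f"year {year:2d}: {share:6.1%} of market adopted")
```

Run with these toy parameters, adoption crawls along for roughly a decade and then surges past the majority of the market within a few years, which is the shape the historical examples below share.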


Early PCs in the 1970s and early 1980s were niche products, primarily for hobbyists, academics, and businesses. Not until user-friendly graphical interfaces were available did PCs seem to gain traction.


It might be hard to imagine now, but GUIs, which allow users to interact with devices using visual elements such as icons, buttons, windows, and menus, were a huge advance over command line interfaces (CLIs). Pointing devices such as a mouse, touchpad, or touch screen are far more intuitive for consumers than CLIs that require users to memorize and type commands.


In the early 1990s, the internet was mostly used by academics and technologists and was a text-based medium. The advent of the World Wide Web, graphical web browsers (such as Netscape Navigator), and commercial internet service providers in the mid-1990s made the internet user-friendly and accessible to the general public.


Likewise, early smartphones and handhelds (the BlackBerry, the PalmPilot) were primarily tools for business professionals, relying on keyboard interfaces and offering little easy internet access. The Apple iPhone, with its new touch interface and full internet access, changed all that.


The point is that AI implementations for mobile and other devices are likely to follow the same pattern: an evolutionary accumulation of features, with perhaps one huge interface breakthrough or use case that adds so much value that most consumers adopt it.


What is less clear are the tipping point triggers. In the past, a valuable use case sometimes was the driver. In other cases the intuitive interface seems to have been key. For smartphones it was possibly a combination: an elegant interface plus multiple functions (internet access in the purse or pocket; camera, watch, and PC replacement; plus voice and texting).


In other words, it is hard to identify a single “tipping point” feature that made smartphones a mass market product. While no single app universally drove adoption, several categories of apps (social media, messaging, navigation, games, utilities, and productivity tools) combined with an intuitive user interface, app stores, and full internet access to produce that outcome.


Regarding consumer AI integrations across apps and devices, we might see a similar process. AI will be integrated in an evolutionary way across most consumer experiences. But then one particular crystallization event (a use case, interface, form factor or something else) will be the trigger for mass adoption.


The point is that the underlying details of the infrastructure (operating systems, chipsets) do not drive end user adoption. What we tend to see is that some easy-to-use, valuable use case or value proposition suddenly emerges after a long period of gradual improvements.


For a long time, we’ll be aware of incremental changes in how AI is applied to devices and apps. The changes will be useful but evolutionary. 


But, eventually, some crystallization event will occur, producing a qualitative change, as all the various capabilities are combined in some new way. 


“AI,” by itself, is not likely to spark a huge qualitative shift in consumer behavior or demand. Instead, a gradual accumulation of changes including AI will set the stage for something quite new to emerge.


That is not to deny the important changes in the ways we find things, shop, communicate, learn or play. For suppliers, it will matter whether AI displaces some amount of search, shifts retail volumes, or changes how social media is personalized.


But users and consumers are unlikely to see disruptive new possibilities for some time, until ecosystems are more fully built out and some unexpected innovation finally creates a tipping point such as an “iPhone moment”: a transformative, game-changing event or innovation that disrupts an industry or fundamentally alters how people interact with technology, products, or services.


It might be worth noting that such "iPhone moments" often involve combining pre-existing technologies in a novel way. The Tesla Model S, ChatGPT, Netflix, social media and search might be other examples. 


We’ll just have to keep watching.


Wednesday, November 6, 2024

We Might Have to Accept Some Degree of AI "Not Net Zero"

An argument can be made that artificial intelligence operations will consume vast quantities of electricity and water, as well as create lots of new e-waste. It's hard to argue with that premise. After all, any increase in human activity, including computing intensity, will have that impact.


Some purists might insist we must be carbon neutral or not do AI. Others of us might say we need to make the same sorts of trade-offs we make every day for all our activities that have some impact on water, energy consumption or production of e-waste.


We have to balance outcomes and impacts, benefits and costs, while working over time to minimize those impacts. Compromise, in other words.


Some of us would be unwilling to accept "net zero" outcomes if they require poor people to remain poor, or hungry people to remain hungry.


And not all of the increase in e-waste, energy or water consumption is attributable to AI operations specifically. Some portion of the AI-specific investment would have been made in any case to support the growth of demand for cloud computing.


So there is a “gross” versus “net” assessment to be made of the data center power, water, and e-waste impacts resulting from AI operations.


By definition, all computing hardware will eventually become “e-waste.” So use of more computing hardware implies more e-waste, no matter whether the use case is “AI” or just “cloud computing.” And we will certainly see more of both. 


Also, “circular economy” measures will certainly be employed to reduce the gross amount of e-waste for all servers. So we face a dynamic problem: more servers, perhaps faster server replacement cycles, more data centers and capacity, offset by circular economy efficiencies and hardware and software improvements. 
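As a back-of-the-envelope illustration of that gross-versus-net framing, here is a minimal sketch. It borrows the 2.5-million-ton annual figure and the 16-86% circular-economy range from the studies summarized below, but the share of hardware attributable to ordinary cloud growth is a pure assumption made for illustration.

```python
# Back-of-the-envelope "gross vs. net" AI e-waste arithmetic.
# The gross 2030 figure and the 16-86% circular-economy range are taken
# from the studies summarized below; the baseline cloud share is a pure
# assumption made for illustration.

GROSS_AI_EWASTE_2030 = 2_500_000   # metric tons/year by 2030 (one study's estimate)
BASELINE_CLOUD_SHARE = 0.30        # assumed share of hardware that would have
                                   # been deployed anyway for ordinary cloud growth
CIRCULAR_REDUCTION = {"low": 0.16, "high": 0.86}   # cited 16-86% mitigation range

# "Net" annual e-waste attributable specifically to AI, before mitigation.
net_attributable = GROSS_AI_EWASTE_2030 * (1 - BASELINE_CLOUD_SHARE)
print(f"net AI-attributable (pre-mitigation): {net_attributable:,.0f} t/yr")

# Apply the cited circular-economy reduction range to the net figure.
for case, reduction in CIRCULAR_REDUCTION.items():
    print(f"{case} circular-economy case: {net_attributable * (1 - reduction):,.0f} t/yr")
```

The spread between the low and high mitigation cases is roughly sixfold, which is why the circular-economy assumptions matter as much as the growth assumptions.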


| Study Name | Date | Publishing Venue | Key Conclusions |
|---|---|---|---|
| The E-waste Challenges of Generative Artificial Intelligence | 2023 | ResearchGate | Quantifies server requirements and e-waste generation of generative AI (GAI). Finds that GAI will grow rapidly, with potential for 16 million tons of cumulative waste by 2030. Calls for early adoption of circular economy measures. |
| Circular Economy Could Tackle Big Tech Gen-AI E-Waste | 2023 | EM360 | Introduces a computational framework to quantify and explore ways of managing e-waste generated by large language models (LLMs). Estimates annual e-waste production could increase from 2.6 thousand metric tons in 2023 to 2.5 million metric tons per year by 2030. Suggests circular economy strategies could reduce e-waste generation by 16-86%. |
| AI has a looming e-waste problem | 2023 | The Echo | Estimates generative AI technology could produce 1.2-5.0 million tonnes of e-waste by 2030 without changes to regulation. Suggests circular economy practices could reduce this waste by 16-86%. |
| E-waste from generative artificial intelligence | 2024 | Nature Computational Science | Predicts AI could generate 1.2-5.0 million metric tons of e-waste by 2030; suggests circular economy strategies could reduce this by up to 86%. |
| AI and Compute | 2023 | OpenAI (blog) | Discusses exponential growth in computing power used for AI training, implying potential e-waste increase, but does not quantify net impact. |
| The carbon footprint of machine learning training will plateau, then shrink | 2024 | MIT Technology Review | Focuses on energy use rather than e-waste, but suggests efficiency improvements may offset some hardware demand growth. |


Thursday, October 24, 2024

High AI Capex is Worrisome, But "Winner Take All" is the Prize

It is not hard to find estimates of investment in U.S. artificial intelligence infrastructure (computing capabilities) in the range of $300 billion or more between 2023 and 2030. IDC analysts have suggested $300 billion in investments between 2023 and 2026.


Nor is it hard to find critics who worry about uncontrolled spending without a clear revenue model. On the other hand, executives of firms attempting to become leaders in the generative AI model business are likely to keep in mind the “winner take all” dynamic of the recent internet era, where just one or a few firms emerged as leaders in new markets.


They might point to:

  • Amazon's years of heavy investment to dominate e-commerce

  • Google's massive spending to establish search leadership

  • Cloud providers' huge datacenter investments

  • Meta's acquisition strategy in social media.


In fact, many markets show scant ability to support even three providers, as the market leader typically has at least twice the share of the number-two provider, and sometimes up to an order of magnitude more.


| Market | Dominant Player | Market Share | Runner-up | Market Share |
|---|---|---|---|---|
| Search Engines | Google | 91.9% | Bing | 3.0% |
| Desktop Browsers | Chrome | 65.72% | Safari | 18.22% |
| Mobile Browsers | Chrome | 66.17% | Safari | 23.28% |
| Social Media | Facebook | 2.9B users | YouTube | 2.5B users |
| E-commerce | Amazon | 37.8% (US) | Walmart | 6.3% (US) |
| Video Streaming | YouTube | 2.5B users | Netflix | 231M subscribers |
| Music Streaming | Spotify | 31% | Apple Music | 15% |
| Ride-hailing (US) | Uber | 68% | Lyft | 32% |
| Cloud Services | AWS | 32% | Azure | 22% |
| Mobile OS | Android | 71.8% | iOS | 27.6% |


So even if McKinsey estimates AI infrastructure spending will exceed $500 billion between 2023 and 2030, and even if many of those investments do not produce the expected results, model suppliers have incentives to risk quite a lot, knowing that the prize for being second best is small.


Gartner forecasts global AI infrastructure investments will surpass $250 billion annually by 2030. 


The OECD estimates investments in AI infrastructure across industries will reach $1 trillion by 2030 across OECD countries. Bloomberg predicts that the global AI infrastructure market will reach $700 billion by 2030.


On the other hand, most of that investment will be by end users and others in the value chain, not the generative AI model providers. 


And some estimates made in 2023 might be considered conservative in 2024. Morgan Stanley’s “The Economics of AI” study, published in October 2023, suggested more than $200 billion in AI infrastructure investments by 2030, including:

  • Data centers: $125B

  • Networking infrastructure: $50B

  • Chip fabrication: $25B

  • Cooling systems: $10B.


Boston Consulting Group, in December 2023, suggested there would be $235 billion in cumulative investments, allocated as follows:

  • Data center buildout: 45%

  • Compute infrastructure: 35%

  • Power infrastructure: 20%. 


The Goldman Sachs “AI Infrastructure Report,” published in September 2023, estimated $275 billion in cumulative investment, including:

  • Semiconductor investment: $100B

  • Data centers: $115B

  • Power systems: $35B

  • Network upgrades: $25B. 


The caution, though, is that early estimates of the size of new technology markets often lead to overinvestment across the value chain. 


| Study/Report | Date | Publisher | Key Conclusions |
|---|---|---|---|
| The Dot-Com Bubble Burst: Causes and Implications | 2001 | U.S. Securities and Exchange Commission (SEC) | Overinvestment in internet startups led to a speculative bubble that burst in 2000. Many companies were overvalued despite having no profitability. |
| Boom and Bust: The Telecommunications Investment Bubble | 2002 | Federal Reserve Bank of San Francisco | Overinvestment in telecom infrastructure during the late 1990s led to a major industry downturn, with unsustainable levels of capital spending. |
| The Case for Less Innovation | 2017 | Harvard Business Review | Many companies overinvest in unproven technologies without clear demand, resulting in failed projects and wasted resources. |
| Lessons from the Clean Tech Bubble | 2016 | MIT Energy Initiative | Overinvestment in cleantech (2005-2011) led to massive failures, with many companies being too early to market and receiving excessive venture capital. |
| Investing in Innovation: Creating a Research and Innovation Policy That Works | 2010 | The NESTA Foundation (UK) | Over-investment in R&D for new technologies can create inefficiencies and fail to produce proportional economic benefits if not managed strategically. |
| The Nanotechnology Investment Bubble | 2005 | Journal of Nanoparticle Research | Speculative investments in nanotechnology during the early 2000s led to unmet expectations, as many products were not commercially viable. |
| Unleashing Productivity: Overinvestment in Information Technology | 2005 | McKinsey Global Institute | Overinvestment in IT during the late 1990s and early 2000s did not yield expected productivity gains, with firms often adopting technology prematurely. |
| The Illusions of Overinvestment in AI | 2021 | Brookings Institution | Many companies overinvest in artificial intelligence without clear applications, leading to inflated expectations and unrealized returns. |
| The Biotechnology Bubble: When Science and Finance Collide | 2004 | Nature Biotechnology | Excessive capital flow into biotech during the 1990s led to overvaluation, with many firms failing to achieve meaningful breakthroughs. |


In recent years we have seen examples of overinvestment by many platform suppliers as well.


| Technology | Company/Industry | Years | Description of Over-Investment |
|---|---|---|---|
| Artificial Intelligence | IBM Watson | 2011-2022 | IBM invested billions in Watson AI for healthcare, but struggled to generate significant revenue and ultimately sold off the health assets. |
| Virtual Reality | Meta (Facebook) | 2014-present | Meta has invested over $36 billion in VR/AR technology with limited returns, facing skepticism about the metaverse vision. |
| Blockchain | Various | 2017-2018 | Many companies rushed to invest in blockchain during the crypto boom, only to scale back or abandon projects when the hype died down. |
| Autonomous Vehicles | Uber | 2016-2020 | Uber invested heavily in self-driving technology, spending over $1 billion before selling the unit after a fatal accident and regulatory challenges. |
| 3D Printing | 3D Systems | 2013-2015 | The company aggressively acquired 3D printing startups, leading to over $1.3 billion in losses and a stock price crash when consumer adoption didn't materialize. |
| Cloud Computing | HP | 2011-2012 | HP's $11 billion acquisition of Autonomy for cloud services led to an $8.8 billion write-down. |


So the rationale for investing heavily to secure the leading position in the generative AI model business reflects the possible “winner take all” character of application and platform markets, where the number-one provider dominates.


And since market share and profit margin generally are related, the rewards for market leadership also are significant. In many capital-intensive markets, the profit margin of the top provider is double that of number two. 


And provider number two can have margins double those of provider number three, implying the market leader can earn roughly four times the margin of the third-place provider.

