Tuesday, November 19, 2024

How Will AI Capex Affect Software Startups?

Given the impact cloud computing has had on software startup capital investment costs, it seems reasonable to speculate about the impact artificial intelligence will have on startup capex and operating expenses.


Clearly, cloud computing has slashed computing infrastructure capital investment requirements for software-based startups. 


Study findings on cloud computing and startup CapEx reduction:

  • "The Economic Impact of Cloud Computing on Business Creation" (Berkeley Research Group, 2011): startups using cloud services reduced initial CapEx by up to 85% compared with traditional IT setups.

  • "Cloud Computing as an Innovation Enabler for Tech Startups" (International Journal of Business and Social Science, 2013): cloud adoption led to a 40-50% reduction in startup IT infrastructure costs.

  • "The Impact of Cloud Computing on Entrepreneurship" (Journal of Small Business and Enterprise Development, 2015): startups reported an average 36% reduction in IT CapEx after moving to cloud services.

  • "Cloud Computing and SME Creation" (Technovation, 2017): cloud services enabled a 60% reduction in initial IT infrastructure investments for tech startups.

  • "The Role of Cloud Computing in Startup Growth" (MIT Sloan Management Review, 2019): startups using cloud services experienced a 78% reduction in upfront IT costs compared with on-premise solutions.

  • "Cloud Computing and Startup Financial Performance" (Journal of Business Venturing, 2021): cloud adoption led to a 30-40% reduction in overall CapEx for software startups in their first two years.


It seems too early to quantify the impact of artificial intelligence on software startup capex or operating expenses, but one might speculate that capex requirements could again be reduced in the AI era by the availability of "AI as a service," much as they were by cloud "computing as a service."


Cost category, pre-AI era (approx. 2000-2010) versus AI era (approx. 2020-present):

  • Infrastructure (CapEx): 40-50% pre-AI versus 10-20% in the AI era. Significant reduction due to cloud computing and AI tools that minimize hardware investments.

  • Development costs: 30-40% versus 20-30%. AI tools streamline development processes, reducing labor costs and time to market.

  • Operational expenses (OpEx): 20-30% versus 30-40%. Increased reliance on cloud services and AI tools leads to higher ongoing operational costs but improved efficiency.


On the other hand, perhaps some operating costs--such as coding personnel--could be lower, while cloud computing as a service costs are higher. 


Still, the cost of using "AI as a service" should continue to drop, both because of temporary GPU oversupply and competition, and because of productivity enhancements in hardware, software and operations.


Study findings on GPU supply and startup costs:

  • "The Impact of GPU Supply on Pricing and Market Dynamics" (Jon Peddie Research, 2024): the oversupply of GPUs is expected to reduce prices by 20-30%, significantly lowering CapEx for startups relying on high-performance computing.

  • "Analyzing the Effects of Increased GPU Capacity on Startup Costs" (McKinsey & Company, 2024): startups could see a 25% reduction in initial CapEx due to increased competition among GPU suppliers and lower market prices.

  • "Future Trends in GPU Utilization for Startups" (Gartner, 2024): predicts that startups will increasingly adopt cloud-based GPU solutions, shifting from CapEx to OpEx models, with potential savings of up to 40% in IT costs.

  • "Market Analysis of GPUs and Their Impact on Emerging Technologies" (IDC, 2023): the overbuilding of GPUs will enhance access for startups, allowing them to implement AI solutions with up to 30% lower upfront costs than in previous years.

  • "The Economic Implications of GPU Overcapacity" (Forrester Research, 2023): forecasts indicate that startups could reduce hardware investment by approximately 30% due to falling GPU prices resulting from oversupply.


If software startups primarily use "AI as a service" provided by hyperscale cloud computing giants, then computing capex might be limited, as has been the case for substitution of cloud computing for owned infrastructure in general. 

The impact on operating expense might be more varied, as cloud computing services are "opex." Also, it is conceivable that smaller code development teams will be necessary. 

Will Alphabet Have to Divest the Chrome Browser? Maybe Not.

Alphabet might be forced by the Department of Justice and courts to sell off the Chrome browser as part of the remedy in an antitrust case against Alphabet. The implications--if the final remedy does include such a provision--are less clear than one might think, based on the pattern of the earlier Microsoft antitrust settlement, which imposed conduct restrictions on how Microsoft bundled the Internet Explorer browser with the Windows operating system.


Some will argue that the case opened the door for emergence of Chrome and other browsers. Others will note that the settlement pushed Microsoft to invest in other areas. Microsoft's move into gaming (Xbox) and cloud computing (Azure) are examples. 


Everyone might agree that there were few, if any, long term adverse financial impacts for Microsoft. 


And, since use of Internet Explorer was at no cost to users in any case, there was little if any direct negative revenue impact. 


It is conceivable that, if ordered, a divestiture of the Chrome browser business would have short-term negative revenue effects for Alphabet, but probably little to no negative long-term effect on the firm. 


Since ownership of Chrome might principally deliver the value of browsing data that aids Alphabet’s advertising business, the possibility exists that Alphabet would shift to licensing access to such data from the new owner. That would add a cost, but might not be debilitating. 


One might argue that Alphabet's equity valuation would drop, at least temporarily, as Microsoft's equity did after the antitrust ruling.


Also, Alphabet might move to optimize its advertising business in other ways.


There arguably are other benefits, such as the ability to influence new standards, but those benefits are hard to quantify.  


Some might note that Alphabet’s advertising business faces market share challenges from Amazon, TikTok and others, in any case, and that Alphabet's ad market share is falling.  


And all that assumes the DoJ's recommendations are accepted by the courts. That is not a certainty, and divestiture might not even be the court's preferred remedy. We might note that the DoJ had asked for Microsoft to be broken up. The actual remedy was a set of conduct restrictions, including on how Internet Explorer was tied to the Windows operating system.


Monday, November 18, 2024

AI and Quantum Change

Many people, in their roles as retail investors, are hearing a great deal about "artificial intelligence winners" these days, and much of the analysis is sound enough. There will be opportunities for firms and industries to benefit from AI growth.


Even if relatively few of us invest at the angel round or are venture capitalists, most of us might also agree that AI seems a fruitful area for investment, from infrastructure (GPUs; GPU as a service; AI as a service; transport and data center capacity) to software. 


Likewise, most of us are, or expect soon to be, users of AI features in our e-commerce; social media; messaging; search; smartphone; PC and entertainment experiences.


Most of those experiences are going to be quite incremental and evolutionary in terms of benefit. Personalization will be more intensive and precise, for example. 


But we might not experience anything “disruptive” or “revolutionary” for some time. Instead, we’ll see small improvements in most things we already do. And then, at some point, we are likely to experience something really new, even if we cannot envision it, yet. 


Most of us are experientially used to the idea of “quantum change,”  a sudden, significant, and often transformative shift in a system, process, or state. Think of a tea kettle on a heated stove. As the temperature of the water rises, the water remains liquid. But at one point, the water changes state, and becomes steam.


Or think of water in an ice cube tray, being chilled in a freezer. For a long time, the water remains a liquid. But at some definable point, it changes state, and becomes a solid. 


That is probably how artificial intelligence will play out: hundreds of evolutionary changes in apps and consumer experiences that finally culminate in a qualitative change.


In the history of computing, that “quantity becomes quality” process has been seen in part because new technologies reach a critical mass. Some might say these quantum-style changes result from “tipping points” where the value of some innovation triggers widespread usage. 


Early PCs in the 1970s and early 1980s were niche products, primarily for hobbyists, academics, and businesses. Not until user-friendly graphical interfaces were available did PCs seem to gain traction.


It might be hard to imagine now, but GUIs, which allow users to interact with devices using visual elements such as icons, buttons, windows, and menus, were a huge advance over command-line interfaces. Pointing devices such as a mouse, touchpad, or touch screen are far more intuitive for consumers than CLIs that require users to memorize and type commands.


In the early 1990s, the internet was mostly used by academics and technologists and was a text-based medium. The advent of the World Wide Web, graphical web browsers (such as  Netscape Navigator) and commercial internet service providers in the mid-1990s made the internet user-friendly and accessible to the general public.


Likewise, early smartphones (BlackBerry, PalmPilot) were primarily tools for business professionals, using keyboard interfaces and without easy internet access. The Apple iPhone, using a new “touch” interface, with full internet access, changed all that. 


The point is that what we are likely to see with AI implementations for mobile and other devices is an evolutionary accumulation of features with possibly one huge interface breakthrough or use case that adds so much value that most consumers will adopt it. 


What is less clear is what triggers the tipping point. In the past, a valuable use case sometimes was the driver. In other cases the intuitive interface seems to have been key. For smartphones it possibly was a combination of an elegant interface and multiple functions (internet access in the purse or pocket; camera replacement; watch replacement; PC replacement; plus voice and texting).


The point is that it is hard to point to a single "tipping point" feature that made smartphones a mass-market product. While no single app universally drove adoption, several categories of apps--social media, messaging, navigation, games, utilities and productivity--combined with an intuitive user interface, app stores and full internet access to make the smartphone a mass-market product.


Regarding consumer AI integrations across apps and devices, we might see a similar process. AI will be integrated in an evolutionary way across most consumer experiences. But then one particular crystallization event (use case, interface, form factor or something else) will be the trigger for mass adoption.


The point is that underlying details of the infrastructure (operating systems, chipsets) do not drive end-user adoption. What we tend to see is that some easy-to-use, valuable use case or value proposition suddenly emerges after a long period of gradual improvements.


For a long time, we’ll be aware of incremental changes in how AI is applied to devices and apps. The changes will be useful but evolutionary. 


But, eventually, some crystallization event will occur, producing a qualitative change, as all the various capabilities are combined in some new way. 


“AI,” by itself, is not likely to spark a huge qualitative shift in consumer behavior or demand. Instead, a gradual accumulation of changes including AI will set the stage for something quite new to emerge.


That is not to deny the important changes in the ways we find things, shop, communicate, learn or play. For suppliers, it will matter whether AI displaces some amount of search, shifts retail volume or changes social media personalization.


But users and consumers are unlikely to see disruptive new possibilities for some time, until ecosystems are more fully built out and some unexpected innovation finally creates a tipping point such as the "iPhone moment": a transformative, game-changing event or innovation that disrupts an industry or fundamentally alters how people interact with technology, products, or services.


It might be worth noting that such "iPhone moments" often involve combining pre-existing technologies in a novel way. The Tesla Model S, ChatGPT, Netflix, social media and search might be other examples. 


We’ll just have to keep watching.


Sunday, November 17, 2024

How Many Consumers "Use" Generative AI?

Daily use of generative artificial intelligence platforms might still be at about 11 percent of U.S. internet users, says Morgan Stanley Research. That likely refers to active use of chat-based language models, and almost certainly understates the actual degree of passive usage.


Still, even the lower figure tracks with adoption of Facebook, perhaps a model for growth rates of  internet apps that become ubiquitous, Morgan Stanley suggests. But actual usage is higher, if passive.


source: Morgan Stanley Research


In one sense, asking consumers how they “use generative artificial intelligence” is unhelpful. Most consumers will encounter generative AI integrated into the platforms, tools, and services they already use daily, often in subtle and seamless ways. 


It’s akin to asking them how they have used AI in the context of online shopping or social media. They haven’t done anything specific, nor might they be aware their apps use AI to personalize and target content. 


source: Bain 


So consumers encounter AI-generated playlists, movie recommendations and advertising based on past behavior. The same holds for the actual content of their social media and news feeds. 


When shopping, they get hyper-personalized suggestions based on browsing history, mood, or context. In other cases they might use AI tools to “see how this item looks on you.”


Users also encounter AI-aided results when using search engines. 


Beyond that, it might be difficult to predict the primary value GenAI will come to represent for consumer users. Hyper-personalization is a likely candidate, but so far, users have been using GenAI as a research tool akin to search. 



Morgan Stanley researchers say present usage remains anchored by research, and I'd concur with that, based on my own usage. On the other hand, Morgan Stanley expects use to broaden to include shopping, travel planning and recipes.

source: Morgan Stanley Research


Some of us use search to find out what time particular sports teams will be playing. Today, for the first time, I used GenAI to find out TV times for games I want to watch, so yes, the range of use cases is growing. 


Saturday, November 16, 2024

FTC Opens New Inquiry Into Microsoft Cloud Computing Practices

The U.S. Federal Trade Commission plans an investigation into Microsoft cloud computing practices, apparently focused on licensing practices that tend to restrict customers' ability to move data to other platforms and suppliers.


The move probably illustrates for many the difficulties of regulating "competition" in a computing industry characterized by complex and rapidly changing technologies.


The fast pace of innovation can quickly make today’s possible problems vanish, only to be replaced by new issues. 


Some might argue that the Telecommunications Act of 1996, the first major revision of telecom policy since 1934, focused on voice services competition and nearly completely missed the looming impact of the internet on the whole business. The Act assumed the key issue was competition for voice services, which rapidly ceased to be a relevant issue.


Also, it often is difficult to define a market, as contestants often compete in multiple industry segments arguably related to each other. 


Perhaps more difficult is the growing importance of network effects. Many product markets now have a strong winner-take-all (or “winner take most”) character, based largely on natural economies of scale created by network effects (a product or service becomes more valuable as more people use it). 


For older voice networks, the value grew as the ability to call anybody (not just people in your town) grew. If all your friends and business associates are on one social network, it has the most value for you. 


If nearly all the things you buy are available on one e-commerce platform, it has the greatest value for you. If one payment method is accepted by virtually all the merchants you buy from, it has a strong network effect. 
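One rough heuristic sometimes used to quantify this intuition (my addition, not a claim from the original post) is Metcalfe's law: the potential value of a network grows roughly with the square of its user count, because that is how the number of possible pairwise connections grows.

```latex
% Metcalfe's law, stated loosely: with n users, the number of possible
% pairwise connections -- and thus, roughly, potential network value -- is
V(n) \;\propto\; \binom{n}{2} \;=\; \frac{n(n-1)}{2} \;\approx\; \frac{n^{2}}{2}
```

Which is why, all else equal, the largest network tends to become relatively more valuable as it grows.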


The point is that in such markets, legitimate competition will tend to produce concentrated markets, without any anticompetitive behavior. 


The separate matter of how much such leadership helps propel leaders in one area to dominance in new or different markets often is the bigger issue for regulators. 


Also, assessing the existence of consumer harm is much harder when products are given away for free. The whole notion of “consumer harm” is hard to assess when there is no “price” paid by any user, and when size itself might be key to providing products “for free.”


Traditional antitrust analysis often focuses on price effects. The absence of monetary prices makes it difficult to measure direct consumer harm. As a result, all sorts of “non-price” effects have to be looked at, and that is rather more subjective.


Those effects might include product quality, innovation, privacy, and user experience or switching costs, all of which are necessarily subjective to a large extent. 


Of course, the move comes as a change of administration approaches, and many believe at least some regulatory action against hyperscalers could abate, though most assume oversight will remain elevated. 


In November 2023, the FTC began assessing cloud providers' practices in four key areas: competition, single points of failure, security, and artificial intelligence. 


The Microsoft inquiry is the latest such move.


In January 2024, the FTC launched a formal inquiry into generative AI investments and partnerships, focusing on Alphabet, Amazon, Anthropic, Microsoft and OpenAI licensing terms and practices that might harm competition. 


Among other matters, the FTC is looking at the competitive impact of huge investments by hyperscalers into AI model firms, such as Microsoft's investment in OpenAI, and Google's and Amazon's ownership interests in Anthropic. 


At least part of the issue is hyperscaler ability to leverage their cloud computing leadership into new AI markets, the same sort of issue officials have targeted in the past. For the FTC, the issue often is preventing leading firms from leveraging existing market power to gain leadership of new markets as well. 


The Federal Trade Commission (FTC) and Department of Justice have histories of taking actions to protect competition in the computing industry, particularly focusing on preventing market leaders from leveraging their dominance in one area to gain unfair advantages in new or adjacent markets. 


The Microsoft antitrust case (1990s-2000s), brought by the Department of Justice, focused on Microsoft's bundling of Internet Explorer with Windows, leveraging its operating system dominance to gain market share in web browsers. This resulted in a settlement in 2001 imposing restrictions on Microsoft's business practices.


The FTC's Intel antitrust case (2009-2010) centered on the accusation that Intel used its dominant market position in central processing units to stifle competition in the graphics processing unit market. The case was settled in 2010, with Intel agreeing to modify its business practices.


The agency also opened an investigation into Google Search (2011-2013), asking whether Google was leveraging its search engine dominance to promote its own services unfairly. The FTC closed the investigation without major action.


The FTC also filed an antitrust lawsuit against Facebook (Meta) in 2020 alleging that Facebook's acquisitions of Instagram and WhatsApp were part of a strategy to maintain its social networking monopoly.


The Commission also investigated Amazon's MGM acquisition (2021-2022), focusing on how Amazon might leverage the acquisition and its e-commerce and streaming dominance to reduce competition in the entertainment industry. The agency ultimately did not block the deal.


Cloud computing practices also are under examination by the European Union and U.K. Competition and Markets Authority.


Friday, November 15, 2024

Have LLMs Hit an Improvement Wall, or Not?

Some might argue it is way too early to worry about a slowdown in large language model performance improvement rates. But some already voice concern, as OpenAI appears to be seeing slower rates of improvement.


Gemini rates of improvement might also have slowed, and Anthropic might be facing similar challenges.   


To be sure, generative artificial intelligence language model size has so far shown a correlation with performance. More inputs--such as larger model size--have resulted in better outputs.


source: AWS 


In other words, scaling laws exist for LLMs, as they do for machine learning and other aspects of AI. The issue is how long the current rates of improvement can last. 


Scaling laws describe the relationships between a model’s performance and its key attributes: size (number of parameters), training data volume, and computational resources. 


Scaling laws also imply that there are limits. At some point, the gains from improving inputs do not produce proportional output gains. So the issue is how soon LLMs might start to hit scaling law limits. 
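As a concrete, deliberately simplified illustration of that diminishing-returns pattern, the sketch below encodes a Chinchilla-style scaling law of the form L(N, D) = E + A/N^alpha + B/D^beta, where N is parameter count and D is training tokens. The coefficient values and function names are illustrative placeholders of my own, not fitted results from any published study.

```python
# Illustrative only: a Chinchilla-style scaling law of the form
#   L(N, D) = E + A / N**alpha + B / D**beta
# where N is parameter count and D is training tokens.
# The coefficients below are placeholders chosen for illustration,
# not fitted values from any published study.

def loss(n_params: float, n_tokens: float,
         E: float = 1.7, A: float = 400.0, B: float = 410.0,
         alpha: float = 0.34, beta: float = 0.28) -> float:
    """Predicted loss under an illustrative scaling law."""
    return E + A / n_params**alpha + B / n_tokens**beta

if __name__ == "__main__":
    tokens = 1e12  # hold training data fixed at one trillion tokens
    prev = None
    for n in [1e9, 2e9, 4e9, 8e9, 16e9, 32e9]:
        current = loss(n, tokens)
        delta = "" if prev is None else f"  (improvement: {prev - current:.4f})"
        print(f"{n/1e9:>5.0f}B params -> loss {current:.4f}{delta}")
        prev = current
    # Each doubling of model size buys a smaller absolute improvement:
    # the "diminishing returns" pattern the post describes.
```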


Aside from the cost implications of ever-larger model sizes, there is the related matter of the availability of training data. At some point, as with natural resources (oil, natural gas, copper, gold, silver, rare earth minerals), LLMs will have used all the accessible, low-cost data. 


Other data exists, of course, but might be expensive to ingest. Think about the Library of Congress collection, for example. It is theoretically available, but the cost and time to "mine" it are likely more than any single LLM provider can afford. Nor is it likely any would-be provider could create (digitize) and supply such resources quickly and affordably.

source: Epoch AI 


Consider the cost to digitize and make available the U.S. Library of Congress collection. 


Digitization and metadata creation might cost $1 billion to $2 billion in total, spread over five to 10 years, including the cost of digitizing and formatting (a rough tally follows the list):

  • Textual Content: $50 million - $500 million.

  • Photographic and Image Content: $75 million - $300 million.

  • Audio-Visual Content: $30 million - $120 million.

  • Metadata Creation and Tagging: Approximately 20-30% of total digitization costs ($200 million - $600 million).
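As a quick sanity check on the arithmetic, the sketch below simply totals the illustrative ranges listed above. These are the post's own figures; the variable names and layout are mine, and the metadata line uses the stated dollar range rather than recomputing the 20-30% ratio.

```python
# A quick tally of the illustrative digitization cost ranges listed above
# (the post's own figures; this just makes the arithmetic explicit).
components = {
    "Textual content":            (50e6, 500e6),
    "Photographic/image content": (75e6, 300e6),
    "Audio-visual content":       (30e6, 120e6),
    "Metadata creation/tagging":  (200e6, 600e6),  # stated dollar range
}

low = sum(lo for lo, _ in components.values())
high = sum(hi for _, hi in components.values())
print(f"Component total: ${low/1e9:.2f}B - ${high/1e9:.2f}B")
# -> roughly $0.36B - $1.52B for digitization plus metadata,
#    before the ongoing storage, labor and licensing costs noted below.
```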


I think the point is that with the speed of large language model updates (virtually continually in some cases, with planned model updates at least annually), no single LLM provider could afford to pay that much, and wait that long, for results. 


Then there are the additional costs of data storage, maintenance, and infrastructure, which could range from $20 million to $50 million annually. Labor costs might be in the range of $10 million to  $20 million annually as well.


Assuming the owner of the asset would want to license access to many other types of firms, sales, marketing, and customer support could add another $5 million to $10 million in annual costs.


The point is that even if an LLM wanted to spend $1 billion to $2 billion to gain access to the knowledge embedded in the U.S. Library of Congress, perhaps no LLM owner could afford to wait five years to a decade to derive the benefits. 


And that is just one example of a scaling law limit. The other issues are energy consumption, computing intensity and model parameter size. At some point, diminishing returns from additional investment would occur.


Directv-Dish Merger Fails

Directv's termination of its deal to merge with EchoStar, apparently because EchoStar bondholders did not approve, means EchoStar continue...