Tuesday, December 24, 2024

AI "Performance Plateau" is to be Expected

There is much talk now about the slowing rate of improvement in generative artificial intelligence models. But such slowdowns are common for most, if not all, technologies. In fact, "hitting the performance plateau" is a familiar pattern.


For generative AI, the “scaling” problem is at hand. The generative AI scaling problem refers to diminishing returns from increasing model size (number of parameters), the amount of training data, or computational resources.


In the context of generative AI, power laws describe how model performance scales with increases in resources such as model size, dataset size, or compute power. They suggest that performance gains will diminish as models grow larger or are trained on more data.


Power laws also mean that although model performance improves with larger training datasets, the marginal utility of additional data diminishes.


Likewise, the use of greater computational resources yields diminishing returns on performance gains.
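To make the diminishing-returns pattern concrete, here is a minimal Python sketch of a power-law scaling curve. The constant and exponent are illustrative assumptions in the spirit of published scaling-law research, not measured values for any particular model.

```python
# Toy power-law scaling curve. The constants below are assumptions
# chosen for illustration, loosely in the style of published
# scaling-law papers; they do not describe any real model.

def scaling_loss(n_params: float, n_c: float = 8.8e13, alpha: float = 0.076) -> float:
    """Loss falls as (n_c / N)^alpha as the parameter count N grows."""
    return (n_c / n_params) ** alpha

previous = None
for n in [1e9, 1e10, 1e11, 1e12]:  # 1B -> 1T parameters
    loss = scaling_loss(n)
    gain = (previous - loss) if previous is not None else 0.0
    print(f"N={n:.0e}  loss={loss:.3f}  improvement={gain:.3f}")
    previous = loss
```

Each tenfold increase in parameters buys a smaller absolute improvement than the one before: the plateau dynamic in miniature.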


But that is typical for virtually all technologies: performance gains diminish as additional inputs are increased. Eventually, however, workarounds are developed in other ways. Chipmakers facing a slowing of Moore’s Law rates of improvement got around those limits by creating multi-layer chips, using parallel processing, or adopting specialized architectures, for example.


| Technology | Performance Plateau | Key Challenges | Breakthroughs or Workarounds |
| --- | --- | --- | --- |
| Steam Engines | Efficiency plateaued due to thermodynamic limits (Carnot cycle). | Material limitations and lack of advanced thermodynamics. | Development of internal combustion engines and electric motors. |
| Railroads | Speed and efficiency stagnated with steam locomotives. | Limited by steam engine performance and infrastructure capacity. | Introduction of diesel and electric trains. |
| Aviation | Propeller-driven planes hit speed and altitude limits (~400 mph). | Aerodynamic inefficiency and piston engine limitations. | Jet engines enabled supersonic and high-altitude flight. |
| Telecommunications | Copper wire networks reached data transmission capacity limits. | Signal attenuation and bandwidth limitations of copper cables. | Transition to fiber-optic technology and satellite communication. |
| Automotive Engines | Internal combustion engine efficiency plateaued (~30% thermal efficiency). | Heat losses and material constraints in engine design. | Adoption of hybrid and electric vehicle technologies. |
| Semiconductors (Moore's Law) | Scaling transistors beyond ~5 nm became increasingly difficult. | Quantum tunneling, heat dissipation, and fabrication costs. | Development of chiplets, 3D stacking, and quantum computing. |
| Renewable Energy (Solar) | Silicon solar cells plateaued at ~20–25% efficiency. | Shockley-Queisser limit and cost of advanced materials. | Emerging technologies like perovskite solar cells and tandem cells. |
| Battery Technology | Lithium-ion batteries plateaued in energy density (~300 Wh/kg). | Materials science constraints and safety issues. | Development of solid-state batteries and alternative chemistries. |
| Television Display Technology | LCD and OLED reached practical resolution and brightness limits. | Manufacturing cost and diminishing returns in visual quality. | Introduction of micro-LED and quantum dot technologies. |


The limits of scaling laws for generative AI will eventually be overcome. But a plateau is not unexpected. 


The Fulcrum of Human History

As some see it, not even artificial intelligence is the "fulcrum of human history," the central event around which all other historical occurrences can be understood. Blessings to all, irrespective of belief. 


"God bless us everyone"

Monday, December 23, 2024

AI's "iPhone Moment" Will Come. We Just Don't Know When

Some observers might be underwhelmed by the current state of smartphone AI use cases, just as they might see somewhat limited value in other artificial intelligence use cases. There has not yet been an equivalent of an “iPhone moment,” when value crystallized in a new way.


But that is a common theme for any new computing technology. 


In fact, we might argue that “iPhone moments” have already happened for prior waves of computing technology.


The introduction of the IBM PC in 1981 was a pivotal moment for personal computing within the business world. 


The launch of the Apple Macintosh in 1984 popularized the graphical user interface, revolutionizing how people interacted with computers and making computing more intuitive and accessible to a broader audience.


The Mosaic web browser release in 1993 played a crucial role in popularizing the World Wide Web, making the internet more user-friendly and visually appealing.


The launch of the App Store in 2008 created a new ecosystem for mobile software. 


The debut of Siri on the iPhone 4S in 2011 changed how people interacted with their smartphones.


There arguably is a predictable pattern: incremental improvement of the new technology, infrastructure development, and the creation of compelling use cases, even if the first implementations are unspectacular.


Network effects might often explain why value increases over time, but attractive experiences people desire also have to be created. And that typically takes some time and much trial and error, plus creation of ecosystems of capability. 


Ride hailing doesn’t work without smartphones. E-commerce doesn’t work without secure and easy payments. Visual media doesn’t work without broadband. Food delivery doesn’t work without smartphones, location ability, navigation, ordering, payment and fulfillment systems. 


Internet value, for example, grew over time. In the 1970s and 1980s the internet was primarily a text-based tool for researchers and government agencies, used for sharing files and messages.


The World Wide Web brought user-friendly multimedia browsers while internet access moved from slow dial-up to broadband.


Likewise, early web apps were static and limited, offering basic interactivity such as online forms. Today’s apps are highly dynamic, personalized and capable of supporting transactions of many types.


Cloud computing, social media, search and e-commerce likewise progressed in similar fashion. 


And there are network effects. Online maps led to turn-by-turn navigation, then to contextual information, then to ride hailing using smartphones.


The point is that usefulness develops over time as the ecosystem grows; the platforms improve and innovators create new and desired experiences. 


The “iPhone moment” for smartphone AI might likewise take some time. But it will come.


Sunday, December 22, 2024

Satya Nadella Thinks Agentic AI Will Replace SaaS


Agentic AI Could Change User Interface (Again)

The annual letter penned by Satya Nadella, Microsoft CEO, points out the hoped-for value of artificial intelligence agents which “can take action on our behalf.” 


That should have all sorts of implications. Today, users typically issue commands or input data, and software executes tasks. With agentic AI, software would do things on a user’s behalf without requiring explicit direction at each step.


When asked to arrange a meeting, the agent might query attendee calendars, send out invites and prepare an agenda, instead of the many steps a human might otherwise undertake.
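A minimal, self-contained Python sketch of that workflow follows. Every class and method in it is invented for illustration; the stubs stand in for real calendar and mail services and are not any actual API.

```python
# Hypothetical sketch of an agentic scheduling workflow. All classes
# and methods here are invented for illustration; none of this is a
# real calendar or mail API.
from dataclasses import dataclass


class CalendarClient:
    """Stub standing in for a real calendar service."""

    def common_free_slots(self, attendees: list[str]) -> list[str]:
        # A real agent would query each attendee's calendar here.
        return ["2024-12-30 10:00", "2024-12-30 14:00"]


class EmailClient:
    """Stub standing in for a real mail service."""

    def send_invites(self, subject: str, attendees: list[str], slot: str) -> None:
        print(f"Inviting {attendees} to '{subject}' at {slot}")


@dataclass
class MeetingAgent:
    calendar: CalendarClient
    email: EmailClient

    def schedule(self, subject: str, attendees: list[str]) -> None:
        # The agent performs the steps a human would otherwise do by
        # hand: check calendars, pick a slot, send invites, draft an agenda.
        slot = self.calendar.common_free_slots(attendees)[0]
        self.email.send_invites(subject, attendees, slot)
        print(f"Draft agenda for '{subject}': goals, discussion, next steps")


MeetingAgent(CalendarClient(), EmailClient()).schedule(
    "Q1 planning", ["ana@example.com", "raj@example.com"]
)
```

The point is not the particular classes but the division of labor: the user states an outcome, and the agent performs the intermediate steps.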


That might change the way some types of software are created, allowing non-technical people to create apps. A user might tell an agent to “build a basic web app for a recipe database,” without coding knowledge. 


Lots of other manual tasks might also be automated. Think of photo tags. Instead of manual tag creation, an agent could automatically tag photos and create collections. 


Agents might draft routine reports or monitor and adjust system performance, without active human intervention. Where today software “waits” for a directive, agents would work in the background, anticipating what needs to be done, and often doing that. 


Agents could also enhance levels of personalization already based on user behavior and preferences that might not always be explicitly stated. 


There are several key changes in user interaction with computers and software. First, a shift in user interface: “a new natural user interface that is multimodal,” he says. 


Think back to the user interfaces of the past, and the progression. We started with command line interfaces requiring typing on a keyboard in a structured way. No audio, no video, no speech, no gestures, no mouse or pointing. 


Over time, we got graphical, “what you see is what you get” mouse-oriented interactions, which were a huge improvement over command line interfaces. Graphical interfaces meant people could use and control computers without the technical knowledge formerly required.


| Era | Time Period | Interface Type | Key Features | Impact on Usability |
| --- | --- | --- | --- | --- |
| Batch Processing | 1940s–1950s | Punch Cards | Input via physical cards with holes representing data and commands. | Required specialized knowledge; interaction was slow and indirect. |
| Command-Line Interfaces (CLI) | 1960s–1980s | Text-Based Commands | Typing commands into a terminal to execute programs or tasks. | Greater flexibility for users but required memorization and technical expertise. |
| Graphical User Interfaces (GUI) | 1980s–1990s | Visual Desktop Interface | WYSIWYG (What You See Is What You Get) design; icons, windows, and mouse control. | Made computers accessible to non-technical users; revolutionized personal computing. |
| Web-Based Interfaces | 1990s–2000s | Internet Browsers | Interfacing through websites using hyperlinks and forms. | Simplified information access and expanded computer use to online interactions. |
| Touchscreen Interfaces | 2007–present | Multi-Touch Gestures | Direct manipulation of on-screen elements using fingers. | Intuitive for all age groups; foundational for smartphones and tablets. |
| Voice Interfaces | 2010s–present | Natural Language Commands | Voice assistants like Siri, Alexa, and Google Assistant. | Enabled hands-free operation but often struggles with context and nuance. |

Beyond that, AI should bring multimodal and multimedia input and output: speech, images, sound and video. Not just natural language interaction, but multimedia input and output as well.


Beyond that, software will become more anticipatory and more able to “do things” on a user’s behalf. 


Nadella places that within the broader sweep of computing. “Can computers understand us instead of us having to understand computers?”


“Can computers help us reason, plan, and act more effectively” as we digitize more of the world’s information?


The way people interact with software also could change. Instead of “using apps” we will more often “ask questions and get answers.”


Nvidia Jetson Gives Generative AI Its Own Platform



Now here’s a switch: Nvidia is offering hobbyist AI computers that support generative artificial intelligence apps (chatbots, for example) created using the Nvidia Jetson platform.

The new machines are designed for hobbyists, commercial AI developers and students creating AI applications such as chatbots and robotics.

Jetson modules are optimized for running AI models directly on embedded systems, reducing the need to send data back and forth to the cloud, which is crucial for real-time decision-making in applications such as robotics, drones, autonomous vehicles, smart cameras and the industrial Internet of Things.
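As a rough illustration of on-device inference, the sketch below runs a small image classifier on the module's GPU using PyTorch, for which NVIDIA ships Jetson-compatible builds. The model choice and the random tensor standing in for a camera frame are assumptions for demonstration; production deployments often convert models to TensorRT instead.

```python
# Minimal sketch of on-device inference with PyTorch. Assumes PyTorch
# and torchvision are installed; the first run downloads pretrained
# weights. Illustrative only, not a production Jetson pipeline.
import torch
from torchvision import models

# On a Jetson, torch.cuda.is_available() is True for the on-board GPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Load a small image classifier and move it to the device.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval().to(device)

# Stand-in for a camera frame: one 224x224 RGB image tensor.
frame = torch.rand(1, 3, 224, 224, device=device)

with torch.no_grad():
    logits = model(frame)

print("Predicted class index:", logits.argmax(dim=1).item())
```

Keeping the whole loop on the device avoids the round trip to a remote data center, which is the latency argument for edge AI.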

The developer kit features an NVIDIA Ampere architecture GPU and a 6-core Arm CPU, supports multiple edge AI applications and high-resolution camera inputs, and sells for about $250.

The Jetson modules are designed to enable edge AI applications that require real-time, high-performance processing at the edge rather than in a centralized cloud environment. Use cases might include real-time processing of image and sensor data in autonomous vehicles to enable navigation and decision-making.

Robotics use cases include object recognition, motion planning, and human-robot interaction. The modules also can support smart cameras for security applications that require object detection, face recognition, and anomaly detection. 

Industrial Internet of Things use cases include the monitoring of machinery and systems used for real-time analysis. Unmanned aerial vehicles use cases include visual navigation, obstacle avoidance, and image-based inspection. 

At least in one respect, the Jetson is a sort of upside-down case of computer development. Personal computers started out as hobbyist machines entirely for edge computing (though we did not use the term at the time). 

Connected computing at remote locations (cloud computing) developed later. For AI, sophisticated remote processing in the cloud or enterprise-style data centers happened first, and now we get development of platforms aimed strictly at edge, “autonomous” computing without the requirement for connection to remote processing.

Friday, December 20, 2024

Will AI Actually Boost Productivity and Consumer Demand? Maybe Not

A recent report by PwC suggests artificial intelligence will generate $15.7 trillion in economic impact through 2030. Most of us, reading, seeing or hearing that estimate, will reflexively assume it means an incremental boost to global economic growth of that amount.


Actually, even PwC says it cannot be sure of the net AI impact, taking into account all other growth-affecting events and trends. AI could be a positive that is then counteracted by other, negative trends.


Roughly 55 percent of the gains are estimated from “productivity” advances, including automation of routine tasks, augmenting employees’ capabilities and freeing them up to focus on more stimulating and higher value adding work, says PwC. 


As much as we might believe those are among the benefits, most of us would also agree we would find them hard to quantify too closely. 


About 68 percent will come from a boost in consumer demand: “higher quality and more personalized products and service” as well as “better use of their time,” PwC says. Again, that seems logical enough, if likewise hard to quantify.


source: PwC 


Just as important, be aware of the caveats PwC also offers. “Our results show the economic impact of AI only: our results may not show up directly into future economic growth figures,” the report states.


In other words, lots of other forces will be at work. Shifts in global trade policy, financial booms and busts, major commodity price changes and geopolitical shocks are some cited examples. 


The other issue is the degree to which AI replaces waning growth impact from older, maturing technologies and growth drivers, and how much it could be additive.


“It’s very difficult to separate out how far AI will just help economies to achieve long-term average growth rates (implying the contribution from existing technologies phase out over time) or simply be additional to historical average growth rates (given that these will have factored in major technological advances of earlier periods),” PwC consultants say. 


In other words, AI might not have a lot of net additional positive impact if it also has to be counterbalanced by declining impact from legacy technologies and other growth drivers. 


Thank PwC consultants for reminding us how important assumptions are when making forecasts about artificial intelligence or anything else. 


If Time is Money, and IT Saves Time, is that ROI?

A survey by IDC, commissioned by Microsoft, focuses on ways Copilot saves time and therefore increases productivity. It’s a good example of ...