Friday, January 3, 2025

Where AI Could Save Consumers Money

Optimizing energy use is among the more important use cases for artificial intelligence in the home. The biggest savings might come from optimizing washing machines, where AI could reduce energy consumption by as much as 70 percent. 


In other cases, passive monitoring could reduce water loss from leaks. 


Appliance | AI Use Case | Estimated Savings/Efficiency
Washing Machine | AI energy mode optimizes water and detergent usage | Up to 70% reduction in energy consumption
Clothes Dryer | AI detects fabric type and load size to optimize drying cycles | 10-20% energy savings
Refrigerator | AI-powered meal planning and inventory management | Reduces food waste by 25-30%
Water Heater | AI learns usage patterns to heat water only when needed | 10-15% reduction in energy costs
Indoor Plumbing | AI-powered leak detection and water flow optimization | Up to 15% reduction in water usage
Lawn Irrigation | AI analyzes weather data to adjust watering schedules | 20-30% reduction in water consumption
Kitchen Appliances | AI-enabled smart ovens preheat based on meal selection | 10-15% energy savings
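
For a rough sense of what those percentage ranges could mean in dollars, here is a minimal sketch; the baseline annual appliance costs are illustrative assumptions, not measured figures.

```python
# Back-of-the-envelope savings estimate using the percentage ranges in the
# table above. The baseline annual cost figures are illustrative assumptions.

ASSUMED_BASELINES = {
    # appliance: (assumed annual baseline cost in USD, (low, high) savings fraction)
    "washing machine": (120.0, (0.0, 0.70)),   # "up to 70%" -> treat the low end as 0
    "clothes dryer":   (110.0, (0.10, 0.20)),
    "water heater":    (450.0, (0.10, 0.15)),
    "smart oven":      (90.0,  (0.10, 0.15)),
}

def savings_range(baselines):
    """Return (low, high) total annual dollar savings across the listed appliances."""
    low = sum(cost * frac_lo for cost, (frac_lo, _) in baselines.values())
    high = sum(cost * frac_hi for cost, (_, frac_hi) in baselines.values())
    return low, high

if __name__ == "__main__":
    lo, hi = savings_range(ASSUMED_BASELINES)
    print(f"Estimated annual savings: ${lo:.0f} to ${hi:.0f}")
```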

 


Will the Switch to Non-Fossil Fuels Actually Lead to Lower Electricity Bills?

Many studies argue that the cost of electricity for consumers will fall as the transition to non-fossil fuels gains traction. But since 2000, consumer electricity rates in the U.S. have generally increased. Indeed, electricity rates in the United States have risen significantly since 1950, despite some periods of relatively low or stable rates. 


Despite that track record, advocates continue to argue that a switch to non-fossil fuels will lead to lower prices. 


Wholesale electricity prices are projected to decrease by 20 percent to 80 percent in the medium term (by 2040) in the United States, depending on the region, according to Brookings researchers. 


Other advocates argue that U.S. households could save an average of $500 a year on energy costs from non-fossil-fuel sources, and some say cheaper energy is possible in the G7 countries by 2025. That hardly seems realistic at this point. 


There is no clear evidence that G7 energy costs have declined since 2000, or even since 1950. In fact, the available information suggests that energy costs have generally increased:

  • Electricity investments within the G7 are projected to triple in the coming decade, indicating rising costs rather than declining ones [1].

  • Household spending on electricity is expected to increase, although this increase is projected to be offset by declines in spending on coal, natural gas, and oil products [1].

  • The share of GDP spent on energy in G7 countries is expected to decline from around 7% today to just over 4% in 2050, but this is due to economic growth rather than falling energy costs [1].

  • Total household energy spending in the G7 has not declined since 2000 [1].

  • Coal power capacity in G7 countries peaked in 2010 and has since fallen, but that does not necessarily translate to lower energy costs for consumers [2].


I find the claim that energy prices for consumers will fall as the transition to non-fossil fuels is made to be questionable. 


Even granting some short-term price increases to create new infrastructure, the theory that long-term prices will drop seems questionable. Serious people used to argue that nuclear power would create such plentiful supplies that it would be “too cheap to meter,” and that never happened. 


One might note that many of the claims about future benefits come from studies conducted or sponsored by the IEA, hardly a disinterested source.


Tuesday, December 31, 2024

Our Use of AI Today is Like a 5-Year-Old with Legos

Today--looking at what artificial intelligence can do--we are like five-year-olds with a box of Legos. We’ll build simple things at first. Eventually, we’ll have the ability to create realistic and detailed sailing ships, Star Wars spacecraft and other objects that will move. 


In other words, we’ll have primitive but important use cases at first, eventually culminating in sophisticated and probably surprising future use cases that exceed what our imaginations can conjure. That tends to be the case for all general-purpose technologies. 


For the moment, AI represents a deepening and acceleration of trends that have characterized the digital age—personalization, customization, on-demand experiences, and context-aware interactions.


But the process should involve a lot of quantitative changes that culminate in qualitative change. And there is already a tantalizing amount of that incremental improvement. 


NotebookLM, Google’s (amazing!) engine for creating podcasts from text material, now apparently lets users ask questions of the AI podcast hosts. The feature is still experimental and apparently works only on newly created audio overviews (podcasts), but it’s a significant development. 


“Using your voice, you can ask the hosts for more details or to explain a concept differently,” Google says.


Separately, Andrej Karpathy has suggested the ability to query and interact with a large language model while reading text on a Kindle or other screen. 


Both are examples of directions LLMs might be going: not just creating podcasts from text but also interacting with text content in custom and conversational ways (asking questions, for example). Such a feature might entail the ability to ask the LLM to explain, discuss, argue or debate the merits of an idea or concept, based on the content in the source text. 
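
As a rough illustration of that kind of feature, the sketch below shows one way "ask the source text a question" could be wired up; call_llm() is a hypothetical placeholder for whatever model provider you actually use, not a real API.

```python
# Minimal sketch of "asking questions of the source text," in the spirit of the
# NotebookLM and Kindle examples above. call_llm() is a placeholder: wire it to
# the LLM provider of your choice.

def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM call (cloud API, local model, etc.)."""
    raise NotImplementedError("Connect this to your LLM provider.")

def ask_about_text(source_text: str, question: str, mode: str = "explain") -> str:
    """Ask the model to explain, discuss, or debate an idea grounded in the source text."""
    prompt = (
        "You are answering questions about the following document.\n"
        f"Mode: {mode} (explain, discuss, argue for, or argue against).\n"
        "Base your answer only on the document.\n\n"
        f"--- DOCUMENT ---\n{source_text}\n--- END DOCUMENT ---\n\n"
        f"Question: {question}"
    )
    return call_llm(prompt)

# Example usage (hypothetical file and question):
# answer = ask_about_text(open("chapter1.txt").read(),
#                         "What is the author's main argument?", mode="debate")
```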


The point is, even before we start seeing really functional agents, developers already are working on and conceiving of features that essentially harness LLMs to support personalized queries on specific content of interest, with multimedia interactions increasingly the norm.  


For the moment, though, the changes will likely be extensions of underlying changes: quantity rather than quality; “more” rather than “different.”


Digital platforms including social media, e-commerce, and streaming services have long used algorithms to tailor recommendations based on user preferences, browsing behavior, and past interactions.


AI introduces a more granular level of personalization, moving beyond demographic-based targeting to behavioral and contextual insights. Advanced AI systems can adapt in real time to an individual’s emotional state, preferences, and even predicted future behaviors.


Likewise, AI shifts customization from manual to automatic and anticipatory. Instead of users actively configuring their preferences, AI predicts and customizes interfaces, products, or services without explicit input.


AI also enhances the immediacy, convenience and relevance of on-demand experiences by predicting what a user will want next. The same might be said for the ability to supply content or experiences “in context.”
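
As a toy illustration of what "anticipatory" customization means, the sketch below predicts a likely next action from observed behavior so an interface could prepare it without being asked; real systems use far richer signals and models, so this is only a minimal sketch of the principle.

```python
# Toy illustration of anticipatory personalization: predict the next likely
# action from observed behavior, so the interface can prepare it without
# explicit input. Real systems are far more sophisticated.

from collections import Counter, defaultdict

class NextActionPredictor:
    def __init__(self):
        # transitions[previous_action] counts which action tends to follow it
        self.transitions = defaultdict(Counter)

    def observe(self, action_sequence):
        """Record one observed sequence of user actions."""
        for prev, nxt in zip(action_sequence, action_sequence[1:]):
            self.transitions[prev][nxt] += 1

    def predict(self, last_action):
        """Return the most common follow-up to the user's last action, if any."""
        follow_ups = self.transitions.get(last_action)
        return follow_ups.most_common(1)[0][0] if follow_ups else None

predictor = NextActionPredictor()
predictor.observe(["open_app", "check_weather", "play_podcast"])
predictor.observe(["open_app", "check_weather", "read_news"])
predictor.observe(["open_app", "check_weather", "play_podcast"])
print(predictor.predict("check_weather"))  # -> "play_podcast"
```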


Eventually, though, an accumulation of such improvements in context and personalization will enable a qualitative change in how people interact with computing devices. The inevitable question is “what new things will emerge?” 


Some might claim to know, but most of those predictions will prove wrong. Humans are never good at predicting the future. All we can say for sure is that if AI is a general-purpose technology, something quite new will emerge, on the order of the control of fire, domestication of animals, agriculture, the wheel, electricity, the internal combustion engine, computing and the internet. 


But AI’s qualitative changes will start with any number of new capabilities which extend our present digital experiences. 


AI Benefit in One Word

Even if the impacts of personal computers, the internet and artificial intelligence build on one another, boiling the specific advance each brought (or should bring, in the case of AI) down to a single word, difficult as that is, would likely look something like this:


Technology | Impact/Importance
Personal Computers | Productivity
Internet | Connectivity
AI | Automation


Yes, PCs led to smartphones, which now are arguably the most important personal computing device. 


Yes, the internet enabled communications, transactions and information sharing without borders (including social media, search, e-commerce and mobile commerce). Still, in a single word, connectivity is about the best moniker. 


We still have to see what AI brings. But the potential value of enhancing decision-making, automating complex tasks and enabling highly personalized, contextual experiences, summed up in a single word, is really automation: the ability to support both routine and more-complex operations that augment human decision-making or some other human activity (whether based on sight, smell, hearing, speaking, writing, creating art, using muscle power or thinking). 


Monday, December 30, 2024

LLMs Will Have More Impact Than Earlier Digital Technologies, Study Suggests

If asked, most business leaders might say they haven’t yet seen much--if any--significant impact of generative artificial intelligence on employment at their firms. But that will change, a new study suggests. 


The rise of large language models (LLMs) might have an impact on a much larger share of workers than previous digital technologies, argues a new study published by the Organisation for Economic Co-operation and Development (OECD). 


As is often the case, it may take some time before we observe shifts in employment figures in response to the impact of generative AI, the study suggests. 


An analysis of online job postings in the United States (Box 3.6) indicates no structural changes in hiring practices since Generative AI tools were launched, though those results may not fully capture the technology's eventual effects.

Across the OECD, around 26 percent of workers are exposed to Generative AI, but only one percent are considered highly exposed. As with many big technological changes, the impact will grow substantially with time. 


Eventually, though, up to 70 percent of workers could be exposed to Generative AI, with 39 percent of those considered highly exposed.


source: OECD  


And there are some perhaps-surprising impact estimates. Higher-paying occupations tend to be more exposed to Generative AI, while occupations heavily reliant on science and critical-thinking skills are less exposed on average. Most of us might not have concluded that “higher-paying” fields would be so affected. 


Likewise, some of us would be surprised that jobs requiring more education or training tend, on average, to be more exposed to Generative AI. 


source: OECD  


Saturday, December 28, 2024

AI Performance Improvement Will be "Stair Step" Rather than Continuous

Many observers now worry that artificial intelligence models are hitting an improvement wall, or that scaling the models will not bring the same level of improvements we have seen over the past few years. 

 That might be worrisome to some because of high levels of investment in the models themselves, before we actually get to useful applications that produce value and profits for businesses. 

Of course, some would note that, up to this point, large language model performance improvements have been based on the use of larger data sets or more processing power. 

And slowing rates of improvement suggest that the value to be gained from just those two inputs may be reaching its limit.

 

Of course, some of us might note that there is a sort of “stair step” pattern to computing improvements, including chipsets, hardware and most software. Moore's Law, under which transistor density on integrated circuits doubles about every two years, is a prime example of stair-step progress. 
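
One simple way to picture the stair-step shape, using Moore's Law's two-year cadence as an illustrative assumption: capability is flat between generations and doubles only when a new generation arrives, even though the long-run trend looks like a smooth exponential.

```python
# Illustrative sketch of stair-step vs. continuous improvement. Under a
# Moore's-Law-style cadence, density doubles once per two-year generation
# and is flat in between; a smooth exponential spreads the same growth
# continuously. The cadence is an assumption for illustration.

import math

def stair_step_multiplier(years, generation_years=2.0):
    """Capability multiplier if improvement arrives only at discrete generations."""
    completed_generations = math.floor(years / generation_years)
    return 2.0 ** completed_generations

def continuous_multiplier(years, doubling_years=2.0):
    """Capability multiplier if the same growth happened smoothly."""
    return 2.0 ** (years / doubling_years)

for t in range(7):
    print(f"year {t}: stair-step x{stair_step_multiplier(t):.1f}, "
          f"continuous x{continuous_multiplier(t):.2f}")
```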

The expansion of internet bandwidth also tends to follow this pattern, as do capacity improvements on backbone and access networks, fixed and mobile. 

The evolution of operating systems, smartphones and productivity tools also often sees periods of rapid innovation followed by stabilization for a time, before the next round of upgrades.

So concern about maturing scaling laws, while apt, does not prevent us from uncovering different architectures and methods for significant performance improvement. 

Thursday, December 26, 2024

Energy Consumption Does Not Scale with Workloads as Much as You Think

Most observers will agree that data center energy efficiency (and its carbon and other emissions footprint) is an important issue, if for no other reason than compliance with government regulations. And with cloud computing and data center compute cycles trending higher (more data centers, larger data centers, additional artificial intelligence workloads, more computing, more cloud computing, more content delivery), energy consumption and its supply will continue to be important issues. 


source: Goldman Sachs 


Perhaps the good news is that energy consumption does not scale linearly with the increase in compute cycles, storage or heat dissipation, though some might argue that data center energy consumption estimates are too low.  


From 2010 to 2018, data center computing cycles increased dramatically:

  • Data center workloads increased more than sixfold (an increase of over 500 percent).

  • Internet traffic increased tenfold (to roughly 1,000 percent of the 2010 level).

  • Storage capacity rose by about 25 times (to roughly 2,500 percent of the 2010 level).


But data center energy consumption only grew by about six percent during this period. 
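
A back-of-the-envelope check using the figures above shows how large the implied efficiency gain is: if workloads rose roughly sixfold while energy use rose only about six percent, energy consumed per unit of work fell by something like 80 percent.

```python
# Rough arithmetic using the 2010-2018 figures cited above.

workload_growth = 6.0   # workloads rose more than sixfold
energy_growth = 1.06    # energy consumption grew only ~6%

energy_per_workload_2018 = energy_growth / workload_growth  # relative to 2010 = 1.0
reduction = 1.0 - energy_per_workload_2018

print(f"Energy per workload in 2018 vs. 2010: {energy_per_workload_2018:.2f}x")
print(f"Implied efficiency gain per workload: about {reduction:.0%}")
# -> roughly 0.18x, an ~82% reduction in energy per unit of work
```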


All that might be worth keeping in mind, as it seems data center computing operations are destined to increase in volume. And though efficiencies will happen, it will be difficult to offset the impact of increased compute volume. 


It might also be worth noting that computing workloads also happen on end user devices of all types, including AI inference operations on smartphones, for example. 


If we assume that, in 2020, the information and communication technology sector as a whole, including data centers, networks and user devices, consumed about 915 TWh of electricity, or four percent to six percent of all electricity used in the world, and that data centers specifically consumed less than two percent of global electricity (perhaps one percent to 1.8 percent), then all the other parts of the ecosystem (devices and software mostly “at the edge,” plus networks, cell towers and so forth) might have consumed roughly two percent to four percent of global electricity. 
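
The arithmetic behind that chain of assumptions, treating every input as a rough figure stated above rather than a measured value:

```python
# Arithmetic behind the ICT-sector estimates above. All inputs are the
# article's stated assumptions, not measured values.

ict_total_twh = 915.0                        # assumed 2020 electricity use of the whole ICT sector
ict_share_low, ict_share_high = 0.04, 0.06   # ICT as 4% to 6% of world electricity
dc_share_max = 0.02                          # data centers at "less than two percent" of world electricity

# Implied total world electricity consumption, from the ICT share bounds
world_low = ict_total_twh / ict_share_high   # if ICT is 6% of the total
world_high = ict_total_twh / ict_share_low   # if ICT is 4% of the total

# The rest of the ecosystem (devices, networks, "the edge") is the ICT share
# minus the data center share
rest_low = ict_share_low - dc_share_max      # ~2% of world electricity
rest_high = ict_share_high - dc_share_max    # ~4% of world electricity

print(f"Implied world electricity use: {world_low:,.0f} to {world_high:,.0f} TWh")
print(f"Non-data-center ICT share: {rest_low:.0%} to {rest_high:.0%} of world electricity")
```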


Still, many end user devices--especially smartphones--actually consume very little energy, even assuming inference operations are added to the processing load. Charging a phone once a day uses about 0.035 kilowatt-hours (kWh) of electricity per week, 0.15 kWh per month, and roughly 1.83 kWh per year. 


In the United States, that works out to energy costs of 40 cents or less per year. That is almost too small an amount to measure. 

source: EnergySage 


At an average electricity price of $0.13 per kWh, the total U.S. charging spending estimated below translates to approximately 1.25 billion kWh, or 1.25 TWh, of electricity consumed annually for smartphone charging.


The implication is that even common AI inference operations on smartphones are not going to be too meaningful a source of energy consumption. 


For example, assuming 250 million smartphone users in the United States and an average annual charging cost of $0.65 per phone, 250 million users times $0.65 per year implies about $162.5 million in electricity costs annually. 


That is less than 0.1 percent of the total electrical consumption for the United States in a year.
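
Putting those numbers together; the roughly 4,000 TWh figure for total annual U.S. electricity consumption is an added assumption for scale, not from the sources above.

```python
# Putting the smartphone-charging figures above together. The U.S. total
# electricity consumption figure (~4,000 TWh/year) is an assumption for scale;
# the other inputs come from the article.

users = 250_000_000            # assumed U.S. smartphone users
annual_cost_per_phone = 0.65   # dollars per phone per year (article's figure)
price_per_kwh = 0.13           # average U.S. retail price, dollars per kWh
us_total_twh = 4_000.0         # assumed annual U.S. electricity consumption

total_cost = users * annual_cost_per_phone   # ~$162.5 million
total_kwh = total_cost / price_per_kwh       # ~1.25 billion kWh
total_twh = total_kwh / 1e9                  # ~1.25 TWh
share = total_twh / us_total_twh

print(f"Annual charging cost: ${total_cost/1e6:.1f} million")
print(f"Annual charging energy: {total_twh:.2f} TWh")
print(f"Share of U.S. electricity: {share:.3%}")   # well under 0.1%
```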


Perhaps the point is that AI inference operations we can run on smartphones (probably centered on personalization, photo and voice interface operations) are a wise choice.


AWS, Azure, Google Cloud Market Share: Definitions Matter

Compared to Amazon or Alphabet, Microsoft has a greater percentage of its revenue generated by “cloud” services, in large part because Micro...