Tuesday, December 31, 2024

Our Use of AI Today is Like a 5-Year-Old with Legos

Today--looking at what artificial intelligence can do--we are like five-year-olds with a box of Legos. We’ll build simple things at first. Eventually, we’ll have the ability to create realistic and detailed sailing ships, Star Wars spacecraft and other objects that will move. 


In other words, we’ll have primitive but important use cases at first, eventually culminating in sophisticated and probably surprising future use cases that exceed what our imaginations can conjure. That tends to be the case for all general-purpose technologies. 


For the moment, AI represents a deepening and acceleration of trends that have characterized the digital age—personalization, customization, on-demand experiences, and context-aware interactions.


But the process should involve a lot of quantitative changes that culminate in qualitative change. And there’s already a tantalizing amount of that incremental improvement. 


NotebookLM, Google’s (amazing!) engine for creating podcasts from text material, now apparently supports users asking questions of the AI podcast hosts. The feature is still experimental and apparently only works on newly-created audio overviews (podcasts), but it’s a significant development. 


“Using your voice, you can ask the hosts for more details or to explain a concept differently,” Google says.


Separately, Andrej Karpathy has suggested the ability to query and interact with a large language model in the context of reading text on a Kindle or other screen. 


Both are examples of directions LLMs might be going: not just creating podcasts from text but also interacting with text content in custom and conversational ways (asking questions, for example). Such a feature might entail the ability to ask the LLM to explain, discuss, argue or debate the merits of an idea or concept, based on the content in the source text. 


The point is, even before we start seeing truly functional agents, developers already are working on, and conceiving of, features that essentially harness LLMs to support personalized queries on specific content of interest, with multimedia interactions increasingly the norm.  


For the moment, though, the changes will likely be extensions of underlying changes: quantity rather than quality; “more” rather than “different.”


Digital platforms including social media, e-commerce, and streaming services have long used algorithms to tailor recommendations based on user preferences, browsing behavior, and past interactions.


AI introduces a more granular level of personalization, moving beyond demographic-based targeting to behavioral and contextual insights. Advanced AI systems can adapt in real time to an individual’s emotional state, preferences, and even predicted future behaviors.


Likewise, AI shifts customization from manual to automatic and anticipatory. Instead of users actively configuring their preferences, AI predicts and customizes interfaces, products, or services without explicit input.


AI also enhances the immediacy, convenience and relevance of on-demand experiences by predicting what a user will want next. The same might be said for the ability to supply content or experiences “in context.”


Eventually, though, an accumulation of such improvements in context and personalization will enable a qualitative change in how people interact with computing devices. The inevitable question is “what new things will emerge?” 


Some might claim to know, but most of those predictions will prove wrong. Humans are never good at predicting the future. All we can say for sure is that if AI is a general-purpose technology, something quite new will emerge, on the order of the control of fire, domestication of animals, agriculture, the wheel, electricity, the internal combustion engine, computing and the internet. 


But AI’s qualitative changes will start with any number of new capabilities which extend our present digital experiences. 


AI Benefit in One Word

Even though the impacts of personal computers, the internet and artificial intelligence will build on each other, if we really had to boil down the specific advance each brought (or should bring, in the case of AI) to a single word (which is difficult), it would likely look something like this:


Technology            Impact/Importance

Personal Computers    Productivity

Internet              Connectivity

AI                    Automation


Yes, PCs led to smartphones, which now are arguably the most-important personal computing device. 


Yes, the internet enabled communications, transactions and information sharing without borders (including social media, search, e-commerce and mobile commerce). Still, in a single word, connectivity is about the best moniker. 


We still have to see what AI brings. But the potential value of enhancing decision-making, automating complex tasks and enabling highly-personalized and contextual experiences, summed up in a single word, is automation: the ability to support both routine and more-complex operations that enhance human decision-making or some other human activity (whether based on sight, smell, hearing, speaking, writing, creating art, using muscle power or thinking). 


Monday, December 30, 2024

LLMs Will Have More Impact Than Earlier Digital Technologies, Study Suggests

If asked, most business leaders might say they haven’t yet seen much--if any--significant impact of generative artificial intelligence on employment at their firms. But that will change, a new study suggests. 


The rise of large language models (LLMs) might have an impact on a much larger share of workers than previous digital technologies, argues a new study published by the Organization for Economic Cooperation and Development. 


As is often the case, it may take some time before we observe shifts in employment figures in response to the impact of generative AI, the study suggests. 


An analysis of online job postings in the United States (Box 3.6) indicates no structural changes in hiring practices since generative AI tools were launched.

Across the OECD, around 26 percent of workers are exposed to Generative AI, but only one percent are considered highly exposed. As with many big technological changes, the impact will grow substantially with time. 


Up to 70 percent of workers could be exposed to Generative AI in the near future, however, with 39 percent of those considered highly exposed.


source: OECD  


And there are some perhaps-surprising impact estimates. Higher-paying occupations tend to be more exposed to Generative AI, while occupations heavily reliant on science and critical-thinking skills are less exposed on average. Most of us might not have concluded that “higher-paying” fields would be so affected. 


Likewise, some of us would be surprised that jobs requiring more education or training tend, on average, to be more exposed to Generative AI. 


source: OECD  


Saturday, December 28, 2024

AI Performance Improvement Will be "Stair Step" Rather than Continuous

Many observers now worry that artificial intelligence models are hitting an improvement wall, or that scaling the models will not bring the same level of improvements we have seen over the past few years. 

 That might be worrisome to some because of high levels of investment in the models themselves, before we actually get to useful applications that produce value and profits for businesses. 

Of course, some would note that, up to this point, large language model performance improvements have been based on the use of larger data sets or more processing power. 

And slowing rates of improvement suggest that further gains from just those two inputs might be reaching their limit.

 

Of course, some of us might note that there is a sort of “stair step” pattern to computing improvements, including chipsets, hardware and most software. Moore's Law, under which the transistor density of integrated circuits doubles about every two years, is a prime example of stair-step progress. 

The expansion of internet bandwidth also tends to follow this pattern, as do capacity improvements on backbone and access networks, fixed and mobile. 

The evolution of operating systems, smartphones and productivity tools also often sees periods of rapid innovation followed by stabilization for a time, before the next round of upgrades.

So concern about maturing scaling laws, while apt, does not prevent us from uncovering different architectures and methods for significant performance improvement. 

Thursday, December 26, 2024

Energy Consumption Does Not Scale with Workloads as Much as You Think

Most observers will agree that data center energy efficiency (and carbon and other emissions footprint) is an important issue, if for no other reason than compliance with government regulations. And with cloud computing and data center compute cycles trending higher (more data centers, larger data centers, additional artificial intelligence workloads, more computing, more cloud computing, more content delivery), energy consumption and its supply will continue to be important issues. 


source: Goldman Sachs 


Perhaps the good news is that energy consumption does not scale linearly with the increase in compute cycles, storage or heat dissipation, though some might argue that data center energy consumption estimates are too low.  


From 2010 to 2018, data center computing demand increased dramatically:

  • Data center workloads increased more than sixfold (a 500 percent increase).

  • Internet traffic increased tenfold (a 900 percent increase).

  • Storage capacity rose by 25 times (a 2,400 percent increase).


But data center energy consumption only grew by about six percent during this period. 
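The non-linearity is easy to see in a quick back-of-envelope calculation (a sketch only, using the round multipliers above; real efficiency accounting is more involved):

```python
# Rough arithmetic on data center efficiency, 2010-2018,
# using the round multipliers cited above.
workload_growth = 6.0   # workloads grew roughly sixfold
energy_growth = 1.06    # energy consumption grew only about 6 percent

# Energy consumed per unit of workload, 2018 relative to 2010
energy_per_workload = energy_growth / workload_growth
print(f"Energy per workload unit, 2018 vs. 2010: {energy_per_workload:.2f}x")
print(f"Implied efficiency gain: {1 - energy_per_workload:.0%}")
```

On those figures, energy per unit of workload fell to roughly 0.18 times its 2010 level, an implied efficiency gain of about 82 percent over the period.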


All that might be worth keeping in mind, as it seems data center computing operations are destined to increase in volume. And though efficiencies will happen, it will be difficult to offset the impact of increased compute volume. 


It might also be worth noting that computing workloads also happen on end user devices of all types, including AI inference operations on smartphones, for example. 


If we assume that in 2020 the information and communication technology sector as a whole, including data centers, networks and user devices, consumed about 915 TWh of electricity, or four percent to six percent of all electricity used in the world, and if data centers specifically consumed less than two percent of world electricity (perhaps one percent to 1.8 percent globally), then all the other parts of the ecosystem (networks, cell towers and end user devices, with computing mostly “at the edge”) might have consumed the remaining two percent to four percent. 


Still, many end user devices--especially smartphones--actually consume very little energy, even assuming inference operations are added to the processing load. Charging a phone once a day uses about 0.035 kilowatt-hours (kWh) of electricity per week, 0.15 kWh per month, and about 1.83 kWh per year. 


In the United States, that works out to energy costs of 40 cents or less per year. That is almost too small an amount to measure. 

source: EnergySage 


At an average electricity price of $0.13 per kWh, U.S. smartphone charging translates to approximately 1.25 billion kWh (1.25 TWh) of electricity consumed annually.


The implication is that even common AI inference operations on smartphones are not going to be too meaningful a source of energy consumption. 


For example, assuming 250 million smartphone users in the United States and an average annual charging cost of $0.65 per phone, 250 million users * $0.65 per year implies $162.5 million in electricity costs annually. 
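The arithmetic can be sketched directly from the figures in the text, plus one added assumption (total U.S. electricity consumption of roughly 4,000 TWh per year):

```python
# Back-of-envelope smartphone charging arithmetic, using the figures
# cited above plus one assumption: U.S. total electricity consumption
# of roughly 4,000 TWh per year.
users = 250e6            # U.S. smartphone users
cost_per_phone = 0.65    # assumed annual charging cost, dollars
price_per_kwh = 0.13     # average electricity price, dollars per kWh

total_cost = users * cost_per_phone        # national charging cost
total_kwh = total_cost / price_per_kwh     # national charging energy
us_total_twh = 4000                        # assumed U.S. consumption
share = (total_kwh / 1e9) / us_total_twh   # fraction of U.S. total

print(f"Annual charging cost: ${total_cost / 1e6:.1f} million")
print(f"Annual charging energy: {total_kwh / 1e9:.2f} TWh")
print(f"Share of U.S. electricity: {share:.3%}")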


That is less than 0.1 percent of the total electrical consumption for the United States in a year.


Perhaps the point is that running AI inference operations on smartphones (probably centered on personalization, photo and voice interface operations) is a wise choice.


Wednesday, December 25, 2024

U.S. Cable Operators Will Lose Home Broadband Share, But How Much, and to Whom?


Comcast says it will lose about 100,000 home broadband accounts in the fourth quarter of 2024, a troublesome statistic given that service’s past-decade role in fueling company revenue growth. 


By most estimates, U.S. cable operators will lose market share to other contestants through 2030. The issue is to whom the losses will accrue. By volume, the shift to telcos is likely to be the biggest. Satellite access might gain, but the magnitude remains unclear. Share held by third-party independents might not change. 


ISP segment, estimated 2025 and 2030 market share, and key drivers:

Cable TV Providers: 58% (2025) to 45% (2030)

  • Increasing competition from 5G fixed wireless

  • Legacy infrastructure becoming less competitive

  • Price pressure from new entrants

Telcos (Combined): 30% (2025) to 38% (2030)

  • 5G fixed wireless growth in suburban areas

  • Fiber deployment acceleration

  • Mobile/fixed service bundling

Satellite: 7% (2025) to 12% (2030)

  • LEO constellation maturity (Starlink, Project Kuiper)

  • Improved latency and speeds

  • Rural market penetration

Independent ISPs: 5% (2025) to 5% (2030)

  • Municipal networks growth

  • Local fiber deployments

  • Consolidation pressure from larger players


The issue is growing competition from new fixed wireless services on one end of the demand spectrum, plus fiber-to-home services on the other end. Put simply, fixed wireless seems to be taking market share from cable services among customers content to buy services offering 100 Mbps to 200 Mbps of downstream bandwidth, while FTTH is taking share among customers who want 1 Gbps or faster, and sometimes more upstream bandwidth. 


In my own case, I can get around 1 Gbps downstream from both my hybrid fiber coax provider and a FTTH provider. That isn’t the issue. The HFC upstream runs at about 17 Mbps. The FTTH upstream reliably operates at about 940 Mbps. 


And the point is not that I “need” 940 Mbps upstream. I don’t. The point is that upstream performance is 55 times greater for the FTTH provider than the HFC provider, at zero cost premium. 


For that matter, I don’t “need” 1 Gbps in the downstream direction, either. The point is that I wouldn’t consider buying any service operating at speeds less than 1 Gbps. It is not a matter of “need” but of preference or “want.”


Somewhat ironically, U.S. cable TV operators face almost the same issues as do telcos when pondering upgrades of their legacy networks. Traditionally, telcos have had to fund a complete replacement of their copper access networks with fiber-to-home platforms to support broadband services. 


And telcos have generally tried to be rational about the capital expenditures, generally deploying FTTH in greenfield areas (new home construction, for example). But that might only represent about one percent to two percent of housing locations per year. At that rate, it will take quite some time to complete a full transition to FTTH. 


Cable operators face the same dilemma. 


Telcos also have justified FTTH upgrades in neighborhoods where demand is greater and willingness to pay is higher. Cable operators might make the same decisions. 


And much hinges on changes in customer demand for symmetrical bandwidth and faster speeds, as there is a point where HFC cannot compete with FTTH (perhaps at about 10 Gbps). That might give cable operators about a decade of running room before a network replacement is required. 


That might assume that “typical” U.S. home broadband speeds reach 1 Gbps by perhaps 2026, with upgrades beyond that to 3 Gbps to 10 Gbps over a decade. 


But that also assumes the key issue is downstream bandwidth, not “symmetrical” or “more nearly symmetrical” bandwidth. Though most observers arguably do not believe upstream bandwidth symmetry is a huge issue for the near future, its importance seems likely to grow. The issue is whether demand for symmetry grows slowly or faster. 


Market demand for products sometimes is not based on “need” but “want,” and some users might already make buying decisions as though symmetrical bandwidth is preferable, even if no application currently requires it, and even if multi-user demands do not require it. 


source: ITIF 


So bandwidth demand beyond the capabilities of the HFC network will eventually force the same platform upgrade telcos already have been facing in moving from copper access to FTTH, even if HFC has a more-evolutionary path remaining before a full platform shift is necessary. 


Cable operators have been able to gradually and incrementally upgrade their once-all-coaxial networks to hybrid networks featuring fiber backbones while retaining coaxial distribution. But a disruption is coming. No matter how far cable operators extend fiber closer to end user locations, increasingly more-difficult adaptations are necessary. 


Traditionally, the simple remedy was to replace coaxial cable in the backbone with fiber, which was fairly straightforward, as the rest of the network remained untouched. But moving in the direction of more-symmetrical bandwidth is tougher, requiring a revamp of all the active elements of the coaxial portion of the network. 


High-split hybrid fiber coax networks allocate up to 204 MHz for upstream traffic, compared to only 42 MHz (USA) or 65 MHz (Europe) in sub-split networks. That represents nearly five times the upstream spectrum of a 42-MHz sub-split network.
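The spectrum arithmetic behind that comparison is simple (a sketch only; delivered capacity also depends on modulation and channel conditions, so the spectrum ratio is a rough proxy):

```python
# Upstream spectrum allocations for HFC frequency splits,
# per the figures cited above.
SUB_SPLIT_US_MHZ = 42   # U.S. sub-split upstream ceiling
SUB_SPLIT_EU_MHZ = 65   # European sub-split upstream ceiling
HIGH_SPLIT_MHZ = 204    # high-split upstream ceiling

print(f"High-split vs. U.S. sub-split: "
      f"{HIGH_SPLIT_MHZ / SUB_SPLIT_US_MHZ:.1f}x the upstream spectrum")
print(f"High-split vs. European sub-split: "
      f"{HIGH_SPLIT_MHZ / SUB_SPLIT_EU_MHZ:.1f}x the upstream spectrum")
```

That works out to roughly 4.9 times the U.S. sub-split allocation and about 3.1 times the European one.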


But even a high-split network will not be able to support symmetrical bandwidth, as FTTH systems now do. So long as customers do not demand symmetrical bandwidth, perhaps that is not an existential issue. 


But if the market shifts to a preference for symmetrical bandwidth, cable operators will, at some point, have to invest quite a bit more than they presently do in network capital investment, as they will essentially have to replace HFC with FTTH access networks. 


There also is a new wrinkle: some demand for lower-bandwidth connections apparently has shifted to fixed wireless alternatives. 


We can see that demand shift in statistics on home broadband net gains and losses. 


Company     Q1 2024 net adds    Q2 2024 net adds    Total Q1+Q2

Charter     (81,000)            (72,000)            (153,000)

Comcast     (38,000)            (34,000)            (72,000)

AT&T        slight gains        slight gains        ~50,000 gains

Verizon     minor losses        minor losses        ~(50,000)

T-Mobile    226,000             246,000             ~472,000


Company     Net change (Q3 2024)

Charter     -113,000

Comcast     -87,000

AT&T        +50,000

Verizon     +28,000 (Fios) plus 363,000 fixed wireless

T-Mobile    +415,000 fixed wireless

Where AI Could Save Consumers Money

Optimizing energy use is among the more-important use cases for artificial intelligence in the home. The biggest savings might come from opt...