Thursday, September 25, 2025

Every Important New Automation Technology Causes Some Job Losses: Get Over It

It is not unusual for enterprise leaders to suggest artificial intelligence will lead to some job losses. In fact, it would be difficult to think of any instances of work automation that have not led to job losses in traditional settings. 


The Industrial Revolution provides some of the most dramatic examples of automation's effects. The invention of the steam engine and mechanized looms displaced countless textile artisans, leading to the Luddite movement where workers protested by destroying machinery. Later, the widespread adoption of the internal combustion engine and tractors decimated agricultural jobs and occupations tied to horses, like blacksmiths and stable hands.


In the 20th century, automated telephone switchboards replaced manual telephone operators, and the introduction of the Automated Teller Machine (ATM) reduced the number of bank tellers needed for routine transactions. More recently, self-service kiosks and online shopping have lessened the demand for cashiers and retail workers.


| Technology | Replaced Human Role | How It Replaced the Role |
|---|---|---|
| Mechanized Loom | Textile Artisan (Weaver) | Automated the process of weaving, allowing a single machine to produce fabric much faster than a human. |
| Tractor/Harvester | Agricultural Laborer | Automated the manual and animal-powered tasks of farming, such as plowing and harvesting. |
| Telephone Switchboard | Telephone Operator | Automated the connection of phone calls, eliminating the need for human operators to manually plug lines. |
| Automated Teller Machine (ATM) | Bank Teller | Automated routine banking tasks like cash withdrawals and deposits, reducing the need for tellers. |
| Self-Checkout Kiosk | Cashier/Retail Worker | Automated the process of scanning and paying for goods in a retail environment. |
| Robotic Assembly Line | Factory Worker | Automated repetitive and dangerous tasks in manufacturing, such as welding and lifting heavy parts. |
| GPS & Digital Maps | Navigator/Pilot | Automated navigation and route planning, reducing the need for human expertise in these areas. |





Tuesday, September 23, 2025

Embodied AI Will Extend the Range of Jobs and Functions AI can Displace

The new AI at Work Report 2025 published by Indeed suggests that more than a quarter (26 percent) of jobs posted on Indeed in the past year could be “highly” transformed by generative artificial intelligence apps. 


Some 54 percent of jobs are likely to be “moderately” transformed.


The study suggests 46 percent of skills in a typical U.S. job posting are poised for “hybrid transformation.” Human oversight will remain critical when applying these skills, but GenAI can already perform a significant portion of routine work.


As you might guess, software development and other cognitive functions are most likely to be affected, while jobs with high human contact, emotional intelligence or physical elements will be least affected. 


source: HiringLab.org 


Consider nursing, which is relatively immune from wholesale substitution effects. 


source: HiringLab.org 


In contrast, many more of the software development functions are likely to be affected. 


source: HiringLab.org 


Of the close to 3,000 requirements analyzed, the two dimensions that most directly determine task transformation are:

• Problem-solving ability (cognitive reasoning, applied knowledge, and practical judgment)

• Physical necessity (physical execution, such as home construction, home repairs, plumbing and electrical work)


We might guess that the effects will extend as AI is embodied in more machines, with robotaxis and autonomous driving vehicles providing a good example.


MIT AI Report is Widely Misinterpreted

Much has been made of a study suggesting 95 percent of enterprises deploying artificial intelligence are not seeing a return on investment.


There’s just one glaring problem: the report points out that just five percent of those entities have AI in a “production” stage. The rest are pilots or limited early deployments. 


That significant gap between AI experimentation and successful, large-scale deployment largely explains the sensationalized claim that “only five percent of enterprises” are seeing a return on AI investment. 


It would be much more accurate to say that most enterprises have not yet deployed AI at scale, and therefore we cannot yet ascertain potential impact. 


Limited deployments or trials often do not integrate into core business workflows or provide the necessary scale to demonstrate significant financial impact, leading to the perception of widespread failure. And, in any case, up to 70 percent of all information technology projects fail to produce expected results. 


So it would be entirely normal for AI projects to fail much more often than they succeed. 


The report notes that “despite $30 to $40 billion in enterprise investment into GenAI, this report uncovers a surprising result in that 95 percent of organizations are getting zero return.”


But one has to dig just a bit deeper to make sense of those figures. The report notes that “just five percent of integrated AI pilots are extracting millions in value, while the vast majority remain stuck with no measurable P&L impact.”


The report says “tools like ChatGPT and Copilot are widely adopted,” and that “over 80 percent of organizations have explored or piloted them, and nearly 40 percent report deployment.”


But then come the important caveats. Those use cases “primarily enhance individual productivity, not P&L performance,” the report says.


Meanwhile, enterprise-grade systems have been evaluated by 60 percent of organizations, but only about 20 percent have reached a “pilot stage.” 


Most tellingly, “just five percent” of surveyed entities have enterprise AI systems in full production and use, the report says. 


The real issue is the percentage of firms that have fully deployed AI in a business process and are seeing return on investment. The report does not address that issue, but many of us might expect failure rates of up to 70 percent for those deployments. 


The point is that many, if not most, interpretations of the report’s data are off the mark. The study does not show enterprise AI use cases are not producing ROI 95 percent of the time. The report shows that most entities have not yet deployed at scale.


So, of course measurable returns are not available. One cannot measure the impact of an innovation one has not yet deployed at scale.


Monday, September 22, 2025

"All of the Above" Helps AI Data Centers Reduce Energy Demand

Just as electrical utilities use rate differentials to shift consumer workloads to off-peak hours, so data centers supporting artificial intelligence jobs can use workload shaping to shift jobs to off-peak hours. 


“Demand-side” management options are just as important as “supply-side” increases in energy infrastructure. These demand management measures include:

  • optimizing hardware

  • improving software and workload management

  • enhancing physical infrastructure.


By identifying which AI tasks are time-sensitive (such as real-time inference for a search engine) versus which are not (training a new large language model), data centers can dynamically shift computational loads. 


Non-critical tasks can be paused or slowed down during peak grid demand hours and resumed when electricity is cheaper, cleaner, or more abundant. 
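The pause-and-resume logic can be sketched in a few lines. This is a minimal illustration, not any vendor's scheduler: the job names, the price threshold, and the price feed are all assumptions for the example.

```python
from dataclasses import dataclass

PEAK_PRICE_PER_KWH = 0.18  # assumed threshold: defer flexible work above this price


@dataclass
class Job:
    name: str
    deferrable: bool  # e.g., model training (True) vs. real-time inference (False)


def schedule(jobs, current_price):
    """Split jobs into (run_now, deferred) given the current grid price."""
    run_now, deferred = [], []
    for job in jobs:
        if job.deferrable and current_price > PEAK_PRICE_PER_KWH:
            deferred.append(job)  # resume when power is cheaper or cleaner
        else:
            run_now.append(job)  # time-sensitive work runs regardless of price
    return run_now, deferred


jobs = [Job("search-inference", False), Job("llm-training", True)]
run_now, deferred = schedule(jobs, current_price=0.25)  # peak-hour price
print([j.name for j in run_now])
print([j.name for j in deferred])
```

A production system would additionally checkpoint deferred training jobs and poll a real price or carbon-intensity signal rather than a fixed threshold.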


In a related way, workloads can be scheduled to run when the local grid's energy mix is dominated by renewable sources like solar or wind, which reduces the carbon intensity of the work even when total consumption is unchanged.


The AI models themselves can be designed to be more efficient.


Techniques such as quantization and pruning reduce a model's size and the number of calculations required without significantly compromising accuracy. For example, by converting a model's parameters from 32-bit to 8-bit, its energy needs can be drastically reduced.
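The 32-bit to 8-bit conversion can be shown with a toy example. The weight values and single-scale scheme below are illustrative assumptions, not any particular framework's implementation; real toolchains add calibration and per-channel scales.

```python
# Minimal sketch of post-training quantization: map float weights onto
# 8-bit integers using one scale factor, cutting storage (and the energy
# cost of moving data) by 4x relative to 32-bit floats.
weights = [0.52, -1.30, 0.07, 2.54]  # stand-ins for float32 parameters

scale = max(abs(w) for w in weights) / 127       # map the largest value to the int8 limit
q_weights = [round(w / scale) for w in weights]  # each value now fits in 8 bits

# Dequantize to use the weights; the rounding error is bounded by the scale.
dequantized = [q * scale for q in q_weights]
max_error = max(abs(w - d) for w, d in zip(weights, dequantized))

print(q_weights)          # small integers in [-128, 127]
print(max_error < scale)  # accuracy loss stays tightly bounded
```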


For certain tasks, an AI model can be designed to "exit" early and provide an answer if it reaches a high degree of confidence, avoiding unnecessary processing.
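Early exit can be sketched as a pipeline of progressively costlier stages that stops once a confidence bar is cleared. The stage functions, labels, and threshold below are made-up illustrations of the pattern, not a real model.

```python
CONFIDENCE_THRESHOLD = 0.9  # assumed confidence bar for exiting early


def cheap_stage(x):
    # Stand-in for a small, fast classifier attached partway through a model.
    return ("cat", 0.95 if x == "obvious-cat" else 0.6)


def expensive_stage(x):
    # Stand-in for running the full, costly model.
    return ("dog", 0.99)


def classify(x, stages=(cheap_stage, expensive_stage)):
    """Run stages in order; exit as soon as one is confident enough."""
    for i, stage in enumerate(stages):
        label, confidence = stage(x)
        if confidence >= CONFIDENCE_THRESHOLD or stage is stages[-1]:
            return label, i + 1  # label plus how many stages actually ran


print(classify("obvious-cat"))  # easy input: exits after the cheap stage
print(classify("ambiguous"))    # hard input: runs the full pipeline
```

The energy saving comes from easy inputs never reaching the expensive stage.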


| Study/Source | Year | Key Findings and Impact |
|---|---|---|
| Duke University, Nicholas Institute | 2025 | Found U.S. power grids could add nearly 100 GW of new flexible load if data centers curtailed their usage an average of 0.5% of the time annually, with an average curtailment time of about two hours. This demonstrates a significant, untapped potential for integrating large loads without costly grid upgrades. |
| Rocky Mountain Institute (RMI) | 2025 | States that if new data centers in the U.S. were to meet an annual load curtailment rate of 0.5%, it could make nearly 100 GW of new load available. The study emphasizes that temporal flexibility (demand response) offers benefits for both data centers (lower energy bills) and utilities (avoiding costly infrastructure upgrades). |
| Google Cloud Blog | 2023 | Describes a pilot program where Google used its "carbon-intelligent computing platform" for demand response. By shifting non-urgent tasks, they successfully reduced power consumption at their data centers during peak hours, supporting grid reliability in regions like Taiwan and Europe. |
| Emerald AI (Boston University) | 2025 | A technology field test in Phoenix, Arizona, showed that Emerald AI's software could reduce a data center's power usage by 25% during a period of peak electricity demand while maintaining service level agreements. The study highlights the potential of AI-driven strategies to dynamically adjust power usage and transform data centers into "virtual power plants." |
| 174 Power Global | 2025 | Discusses how smart grid integration allows data centers to participate in demand response programs. It notes that facilities can shift computational workloads based on energy availability and cost, for example, by increasing processing for non-time-sensitive tasks during periods of high renewable energy generation. |


As racks become denser with high-performance GPUs, liquid cooling systems can be up to 30 percent more energy-efficient than air cooling.


Separating the hot air exhausted by servers from the cold air intake also helps. By using "hot aisle/cold aisle" layouts with containment panels or curtains, data centers can prevent the air from mixing, allowing cooling systems to run less frequently.


“Free Cooling” in colder climates takes advantage of favorable outdoor temperatures. A data center can use outside air or water to cool the facility, bypassing mechanical chillers and significantly reducing energy consumption. 


Optimized Uninterruptible Power Supply (UPS) systems can reduce electrical losses by bypassing certain components when utility power is stable.


Data centers additionally can generate their own power using on-site solar, fuel cells, or battery storage. This keeps the data centers “off the grid” during peak demand on electrical utility networks. 


Server power capping limits the maximum power a server can draw, preventing over-provisioning of electrical capacity.


The point is that there always are multiple ways data centers can optimize their power usage to reduce electrical utility demand.


Sunday, September 21, 2025

When is AI Not a Threat to Critical Thinking?

All of us frequently see or hear warnings that using artificial intelligence might reduce a person’s critical thinking skills (creativity, problem-solving, and independent thinking). We might simply point out that such skills are unevenly distributed in any human population, in any case.


That noted, there is some research indicating that AI tools are creating new behavioral patterns through cognitive offloading mechanisms, rather than simply amplifying pre-existing critical thinking tendencies.


In other words, using AI can reduce critical thinking skills as people offload such activities to the AI apps. But much of that offloading might be situational. For example, some studies suggest that heavy use of AI leads to diminished critical thinking skills.


But light to moderate use might not have such effects. Individuals who already engage in balanced, thoughtful technology use may indeed maintain their critical thinking skills, while those prone to over-reliance experience more significant impacts.


So the emerging picture might be nuanced. AI appears to both reflect existing individual differences (some people are more prone to over-reliance) and actively create new cognitive patterns through offloading mechanisms.


As always, the use of any tool or technology depends on the circumstances of its use. It is how individual humans use the tools that determines outcomes.


Ethical and moral frameworks might be embodied in law, for example, but how tools get used also hinges on the actual behavior of the humans who use the tools and technology. 


And, obviously, any technology will have costs or externalities and unintended consequences.


| Technology / Tool | Good Use Example | Bad Use Example | Unintended Consequence |
|---|---|---|---|
| AI (General) | Diagnosing diseases[4] | Creating deepfakes[6] | Privacy loss, bias propagation[6][2] |
| Smartphones | Emergency alerts[3] | Cyberbullying[3] | Addiction, attention erosion[3][8] |
| Social Media | Connecting support groups[3] | Spreading hate speech[3] | Misinformation, polarization[3] |
| Surveillance Cameras | Enhancing public safety[7] | Invasive monitoring[7] | Trust erosion, discrimination[7][2] |
| Water/Carbon-Intense Manufacturing | Renewable energy storage[11][5] | High-pollution processes[5] | Resource depletion, water stress[1][5] |
| Genetic Engineering | Disease resistance in crops[4] | Bioweapon development[4] | Ecological imbalance[4][8] |
| Autonomous Vehicles | Traffic safety[3] | Algorithmic favoritism[3] | Job displacement, new liability models[3][2] |
| Online Learning | Global education access[4] | Cheating facilitation[4] | Educational gaps, critical thinking erosion[2] |


It might also be fair to point out that most studies of the impact of AI on critical thinking skills are recent (2024-2025), with limited longitudinal data. So we really do not know what the long-term effects might be. 


Also, there is the issue of how we define "critical thinking." Most definitions revolve around the ability to reflect on the use of sources. Critical thinking is an intellectually disciplined process of actively and skillfully analyzing, evaluating, and synthesizing information. 


So part of the issue is the AI user's inability to analyze information sources the user cannot see, since the AI engine has done the assembling. That does not mean the user cannot reason about the logic and plausibility of conclusions or statements, or apply context and prior knowledge to them. 


But the ability to critique and choose information sources picked by the AI engine is more difficult, unless the queries include some requirement for annotation and reference to the websites from which conclusions were developed. 


If critical thinking is the ability to make informed, reasoned judgments rather than simply accepting information at face value, lack of access to sources does matter. 


There are other critical thinking elements that lack of access to original sources might compromise:

  • Analysis and Interpretation: the ability to break down complex information into smaller parts, identify underlying assumptions, and understand the relationships between different ideas. It includes discerning the main idea, recognizing patterns, and clarifying the meaning of data or arguments. 

  • Evaluation and Inference: assessing the quality of information, arguments, and evidence. Critical thinkers evaluate the credibility of sources, recognize biases, and determine whether conclusions are logically supported. Inference is the skill of drawing reasoned conclusions from the available evidence.

  • Problem-Solving: ability to define a problem clearly, gather relevant information, brainstorm potential solutions, evaluate the pros and cons of each, and implement the best course of action.

  • Self-Regulation and Open-Mindedness: the ability to monitor and correct your own thinking. It means being aware of your own biases, being willing to challenge your beliefs, and being receptive to alternative viewpoints. 


Of all these elements, the inability to consistently and transparently assess the credibility and biases of sources is the most obvious limitation. Least affected is the ability to retain an open mind. 


And critical thinking is not a universally necessary skill for all activities. Critical thinking is essential for high-stakes, complex, and novel tasks. These are situations where a person cannot rely on habit, routine, or a simple set of instructions.


Complex problems without a clear solution also are important instances where critical thinking skills like analysis and evaluation are crucial. 


Critical thinking likewise is vital for making significant decisions where the outcome has serious consequences. 


Critical thinking also often drives innovation and creativity.


Still, the biggest danger likely is the inability to evaluate the sources used by an AI engine. 


On the other hand, there are many instances where critical thinking really is not important. 


For many routine, low-stakes, or repetitive tasks, a loss of critical thinking ability may have minimal or no negative impact. 


Also, structured learning, especially at a basic level, might hinge more on memorization and recall than on analysis.


In these low-stakes or routine scenarios, offloading cognitive tasks to AI or relying on established patterns does not pose a significant risk, we might argue.


Yes, Follow the Data. Even if it Does Not Fit Your Agenda

When people argue we need to “follow the science” that should be true in all cases, not only in cases where the data fits one’s political pr...