All of us frequently see or hear warnings that using artificial intelligence might reduce a person's critical thinking skills (creativity, problem-solving, and independent thinking). We might simply note that such skills are unevenly distributed in any human population to begin with.
That noted, there is some research indicating that AI tools are creating new behavioral patterns through cognitive offloading mechanisms, rather than simply amplifying pre-existing critical thinking tendencies.
In other words, using AI can reduce critical thinking skills as people offload such activities to the AI apps. But much of that offloading might be situational. Some studies suggest, for example, that heavy use of AI leads to diminished critical thinking skills, while light to moderate use does not have such effects. Individuals who already engage in balanced, thoughtful technology use may maintain their critical thinking skills, while those prone to over-reliance may experience more significant impacts.
So the emerging picture might be nuanced. AI appears to both reflect existing individual differences (some people are more prone to over-reliance) and actively create new cognitive patterns through offloading mechanisms.
As always, the impact of any tool or technology depends on the circumstances of its use. It is how individual humans use their tools that determines outcomes.
Ethical and moral frameworks might be embodied in law, for example, but how tools get used also hinges on the actual behavior of the humans who use the tools and technology.
And, obviously, any technology will have costs or externalities and unintended consequences.
| Technology / Tool | Good Use Example | Bad Use Example | Unintended Consequence |
| --- | --- | --- | --- |
| AI (General) | Diagnosing diseases[4] | Creating deepfakes[6] | Privacy loss, bias propagation[6][2] |
| Smartphones | Emergency alerts[3] | Cyberbullying[3] | Addiction, attention erosion[3][8] |
| Social Media | Connecting support groups[3] | Spreading hate speech[3] | Misinformation, polarization[3] |
| Surveillance Cameras | Enhancing public safety[7] | Invasive monitoring[7] | Trust erosion, discrimination[7][2] |
| Water/Carbon-Intensive Manufacturing | Renewable energy storage[11][5] | High-pollution processes[5] | Resource depletion, water stress[1][5] |
| Genetic Engineering | Disease resistance in crops[4] | Bioweapon development[4] | Ecological imbalance[4][8] |
| Autonomous Vehicles | Traffic safety[3] | Algorithmic favoritism[3] | Job displacement, new liability models[3][2] |
| Online Learning | Global education access[4] | Cheating facilitation[4] | Educational gaps, critical thinking erosion[2] |
It might also be fair to point out that most studies of the impact of AI on critical thinking skills are recent (2024-2025), with limited longitudinal data. So we really do not know what the long-term effects might be.
Also, there is the issue of how we define "critical thinking." Most definitions revolve around the ability to reflect on the use of sources. Critical thinking is an intellectually disciplined process of actively and skillfully analyzing, evaluating, and synthesizing information.
So the issue includes an AI engine user's inability to analyze information sources the user has no access to, since the AI engine has done the assembling. That does not mean the user cannot think about the logic and reasonableness of conclusions or statements, or apply context and knowledge to them. But the ability to critique and choose the information sources the AI engine picked is more difficult to exercise, unless queries include an explicit requirement for annotation and references to the websites from which conclusions were drawn.
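As a minimal sketch of that workaround (the function name and wording are illustrative, not any particular engine's API), a query can be wrapped with an explicit citation requirement before it is sent to the model:

```python
def with_citation_requirement(query: str) -> str:
    """Append an instruction asking the engine to expose its sources.

    Illustrative only: any phrasing that demands named sources and URLs
    for each claim serves the same purpose of restoring the user's
    ability to evaluate where conclusions came from.
    """
    requirement = (
        "For every factual claim in your answer, name the source website "
        "and give its URL, so the sources can be evaluated independently."
    )
    return f"{query}\n\n{requirement}"

# The returned prompt carries the original question plus the requirement.
prompt = with_citation_requirement(
    "Does heavy AI use erode critical thinking skills?"
)
```

The point is not the specific wording but that source evaluation, one of the core critical thinking skills, only becomes possible when the query forces the engine to show its sources.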
If critical thinking is the ability to make informed, reasoned judgments rather than simply accepting information at face value, lack of access to sources does matter.
There are other critical thinking elements that lack of access to original sources might compromise:
Analysis and Interpretation: the ability to break down complex information into smaller parts, identify underlying assumptions, and understand the relationships between different ideas. It includes discerning the main idea, recognizing patterns, and clarifying the meaning of data or arguments.
Evaluation and Inference: assessing the quality of information, arguments, and evidence. Critical thinkers evaluate the credibility of sources, recognize biases, and determine whether conclusions are logically supported. Inference is the skill of drawing reasoned conclusions from the available evidence.
Problem-Solving: the ability to define a problem clearly, gather relevant information, brainstorm potential solutions, weigh the pros and cons of each, and implement the best course of action.
Self-Regulation and Open-Mindedness: the ability to monitor and correct your own thinking. It means being aware of your own biases, being willing to challenge your beliefs, and being receptive to alternative viewpoints.
Of all these elements, it is the inability to consistently and transparently assess the credibility and biases of sources that is most obviously compromised. Least affected is the ability to keep an open mind.
And critical thinking is not a universally necessary skill for all activities. Critical thinking is essential for high-stakes, complex, and novel tasks. These are situations where a person cannot rely on habit, routine, or a simple set of instructions.
Complex problems without a clear solution also are important instances where critical thinking skills like analysis and evaluation are crucial.
Critical thinking likewise is vital for making significant decisions where the outcome has serious consequences.
Critical thinking also often drives innovation and creativity.
Still, it is likely that the biggest danger is the inability to evaluate the sources used by an AI engine.
On the other hand, there are many instances where critical thinking really is not important.
For many routine, low-stakes, or repetitive tasks, a loss of critical thinking ability may have minimal or no negative impact.
Also, structured learning, especially at a basic level, may hinge more on memorization and recall than on analysis.
In these low-stakes or routine scenarios, offloading cognitive tasks to AI or relying on established patterns does not pose a significant risk, we might argue.