Many observers worry that routine use of artificial intelligence will erode critical thinking skills.
It’s an inherently difficult question to answer, in part because the definition of critical thinking (questioning assumptions, weighing evidence, synthesizing ideas) involves activities that are hard to measure. And even if we assume some of those operations occur in many job roles, it remains unclear how much of the work actually requires them.
We might even suspect the reverse: a relatively small percentage of time spent on critical thinking (formulating questions, for example) has an outsize impact on outcomes, while most of the time goes to generating answers and embodying them as outputs.
The next question, then, is whether that “most to all” of the work is actually “critical thinking” or something else.
That might be true for some journalists or writers, for whom formulating questions, conducting background research or interviews, and constructing a narrative are the tasks most related to critical thinking. The actual writing is cognitive, but arguably less an example of critical thinking than of “something else” (applied communication skills).
And the difference between “critical thinking” and other forms of cognitive activity can be difficult to evaluate. One almost always finds, when conducting an interview, that unexpected lines of questioning develop. The ability to recognize and follow up when that happens is a cognitive matter, but maybe not “critical thinking.”
In somewhat similar settings, such as education and learning, it is conceivable that students use AI to formulate questions, conduct research and then “write” the findings. In such cases, some will argue, critical thinking skills are not much developed.
Software development, for example, has traditionally involved a mix of high-cognitive and routine tasks, observers might note.
“Critical thinking” might involve:
Designing algorithms or system architecture.
Debugging complex issues (questioning why a system fails under load and synthesizing a fix).
Planning and problem-solving (defining requirements or adapting to shifting specs).
Code review (weighing the evidence for a teammate’s approach).
But there are always other tasks that might involve little “critical thinking,” if any:
Writing boilerplate code (setting up APIs or UI components using frameworks).
Invoking existing libraries/objects (e.g., calling a pre-built sorting function).
Testing and documentation (e.g., running unit tests or updating READMEs).
Meetings and administrative work.
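As a minimal sketch of that distinction, the routine side of the list often looks like the code below: invoking a pre-built library function (Python’s built-in sorting, in this case) rather than designing an algorithm. The function name and event records here are invented for illustration.

```python
def sort_events(events):
    """Routine task: call the built-in sort instead of writing one.

    sorted() is stable and already handles the algorithmic work;
    the only judgment exercised here is choosing the sort key.
    """
    return sorted(events, key=lambda e: e["timestamp"])

# Hypothetical event records, out of order by timestamp.
events = [
    {"id": 2, "timestamp": 17},
    {"id": 1, "timestamp": 5},
]

print([e["id"] for e in sort_events(events)])  # → [1, 2]
```

The critical-thinking counterpart would be deciding, for example, whether a built-in sort is even the right tool for the data at hand, a decision the code above never has to make.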
Perhaps obviously, senior engineers arguably spend more time on critical thinking than junior engineers do. The proportion is also arguably higher for new code bases and lower for maintenance of legacy code bases.
That might be true for lots of job functions, but some functions are generally assumed to involve less critical thinking.
Perhaps the implication is that use of AI might not pose the same danger to critical thinking (or cognitive abilities) in all work processes and roles. But even in roles believed to rely heavily on critical thinking (depending on how one defines the term), it is possible that some cognitive ability is required but critical thinking is not.
Cognitive capabilities cover a broad spectrum, including memory, attention, pattern recognition, spatial reasoning and language processing, for example.
Critical thinking (questioning assumptions, weighing evidence, synthesizing ideas) is just one skill. Many jobs lean on other cognitive skills that are essential but don’t demand the reflective, analytical depth of critical thinking.
Cognitive skills (processing and acting on information) are not all examples of critical thinking. Rote recall, quick decision-making under pressure, or motor-cognitive coordination might be more important or common.
Remembering procedures or facts is cognitive but not inherently critical. It’s about retrieval, not evaluation. Staying alert to detail or multitasking is mentally taxing but doesn’t always involve weighing evidence.
Spotting trends or anomalies relies on perception and intuition more than synthesis. And following a set process uses problem-solving but rarely questions the process itself.
The point is that human freedom and choices might have more to do with the retention, improvement or decay of “critical thinking skills.” Not all cognitive processes involve critical thinking, so automating many of them has no necessary or direct impact on critical thinking skills.
And even when AI is used to assist with a critical thinking task, human agency matters. Some people will think more than others, perhaps because some people are more intellectually curious than others, for example.
AI impact on critical thinking or cognition in general is uncertain at this point, and may well only uncover pre-existing proclivities. Intellectual curiosity is not likely evenly distributed among all AI users.