ChatGPT and the Mind: A Tool or a Crutch for Critical Thinking?
Al Jazeera’s The Take podcast just posed a question that grows more pressing by the day: is ChatGPT, and generative AI in general, undermining our capacity for critical thinking? The inquiry might feel premature in an era still finding its footing amid the rapid adoption of AI, but fresh research and public experience suggest it is not just valid, it’s necessary.
The debate draws on a recent experiment at the MIT Media Lab, which tracked the brain activity of users as they completed writing tasks with and without ChatGPT. The findings were revealing. Participants who used ChatGPT showed less neural activity and engaged less deeply with the content. Their writing, although polished, was more formulaic and less original. Those who worked without digital aids, or who followed conventional research approaches such as Google searches, exhibited greater brain activity, suggesting that the mental effort involved in synthesizing and structuring information remained intact.
This is part of a larger trend. With AI software increasingly woven into our everyday lives, through auto-composed emails, algorithm-generated reading summaries, and even AI-driven decision-making, more users are growing accustomed to offloading mental effort. The issue isn’t that AI is inherently harmful, but that its convenience can too easily become dependence. As some researchers put it, if you don’t exercise your mental muscles, you begin to lose them.
This is not to say that ChatGPT in itself threatens human intellect. Like the calculator or the spell-checker, it is a tool. Tools, however, shape behavior. GPS, for instance, made navigation easier but also eroded our spatial memory. AI, as it stands, is influencing the way we think, write, and solve problems, and not necessarily for the better. The threat lies not in the tool, but in how we use it.
In education, that challenge is especially acute. Students increasingly rely on AI not only for assistance, but for answers. Essays, reports, even critical-analysis assignments can now be completed in minutes by prompting a chatbot. While that surely boosts efficiency, it jeopardizes the cultivation of deeper intellectual capabilities: questioning, comparing, interpreting, and evaluating. If education is meant to train minds, then outsourcing the thinking process to machines derails that goal.
That isn’t to say AI has no role in education or creative work. Far from it: it offers vast potential. Used well, AI can help students organize their arguments, condense complicated sources, and experiment with new approaches to writing. In the workplace, it can handle routine tasks and free up time for strategy. But only if we stay engaged in the process, using AI to help us think rather than letting it think for us.
Getting there requires a cultural shift in how we engage with technology. Schools, for example, should reform their curricula not to prohibit AI, but to fold it in thoughtfully. Assignments could be designed to require students to critique AI responses, compare a chatbot’s output to human-written work, or analyze how the tool influenced their thought processes. Exams could shift toward oral formats or in-class essays that test students’ genuine understanding of concepts, rather than how skillfully they can elicit a chatbot response.
Individuals, too, need to build new habits. One is to write first and turn to AI afterward for editing or clarification. Another is to challenge AI responses with follow-up questions, turning passive consumption into active engagement. The goal is not to shun AI but to use it deliberately, staying alert to what we gain and what we might lose.
Developers and platforms have a part to play as well. Future releases of ChatGPT and similar models might include prompts that encourage reflection, nudge users to check facts, or even flag ambiguities in their own output. In other words, AI can be designed not to displace critical thinking but to provoke it.
The discussion sparked by The Take is a reminder that every technological advance brings new responsibilities. As AI grows more capable and ubiquitous, we must adapt not only our devices but our thinking. If we treat ChatGPT as an intellectual companion rather than a surrogate, it can enhance our thinking without numbing our perception.
Ultimately, critical thinking is a skill, and like any skill it must be exercised. ChatGPT can be a powerful tool, but only if we remain in control of the thinking. The future of intelligence, human or artificial, will be determined by how well we keep that balance.


