The Impact of Generative AI on Youth and Cognitive Abilities
Less than three years after the launch of ChatGPT, 42% of young people in France reportedly use generative AI every day[^1]. This rapid adoption raises both interest and concern. As these tools proliferate, emerging studies point to potential negative effects on our cognitive abilities and our relationship with knowledge. Professor Ioan Roxin, a specialist in information technology at Marie et Louis Pasteur University, shares his perspective on the phenomenon.
The Shift in Our Relationship with Knowledge
The internet and social media have drastically altered how we interact with knowledge. Professor Roxin argues that rather than democratizing access to information, these platforms foster a "generalized illusion of knowledge": the overconsumption of easily accessible content undermines intellectual engagement, emotional depth, and moral reasoning on a global scale. Generative AI tools have proliferated at a time when critical thinking was already under strain, compounding the problem.
The Cognitive Roots of Our Altered Relationship with Knowledge
A 2011 study identified what is now known as the "Google effect": easy access to information reduces our ability to remember it. As we rely less on our memory, the associated neural pathways may atrophy. Meanwhile, the relentless notifications and suggestions of digital technologies erode our capacity for deep concentration and analytical thinking. Professor Roxin warns that as generative AI becomes more integrated into our lives, this cognitive decline is likely to continue.
Neurological Risks of Generative AI
Generative AI poses various neurological risks, including cognitive atrophy and diminished brain plasticity. A four-month study from the Massachusetts Institute of Technology (MIT) monitored brain activity as participants wrote essays under various conditions, including assistance from ChatGPT. Findings indicated that while those using AI wrote significantly faster—60% quicker—there was a marked reduction in cognitive engagement. Brain connectivity also decreased, with many participants failing to recall passages they had just written[^2].
Related studies suggest a trend toward cognitive decline directly linked to heavy use of large language models (LLMs). By outsourcing cognitive tasks to AI, users build a "cognitive debt," leading to diminished use of crucial brain functions over time. This suggests that such reliance may have long-term ramifications.
Psychological Implications of AI Dependency
Generative AI fosters a type of dependency that can hinder personal development. The ability of AI systems to hold human-like conversations creates a facade of comprehension that can crowd out real human interaction. This dependency may lead to increased social isolation and a reflexive disengagement from learning: if AI can provide all the answers, why bother thinking for yourself? Moreover, confronting the sheer efficiency of generative AI can leave users feeling inadequate or inferior, with consequences for their mental health.
Philosophical Risks Associated with Cognitive Atrophy
Cognitive atrophy and reliance on generative AI also carry philosophical risks. Thought processes may become standardized, diminishing creativity. Research shows that when writers turn to ChatGPT for help, their individual output improves, but the resulting pool of ideas becomes less diverse: individual enhancement comes at the cost of collective creativity[^4].
Additionally, reliance on AI diminishes critical thinking skills. A Microsoft study showed a negative correlation between frequent AI tool usage and critical thinking capacity, highlighting a troubling trend of offloading mental effort to AI systems. This creates a cycle of reduced trust in personal cognitive abilities, leaving users reliant on tools that also possess the potential to propagate bias.
The Mechanics of AI Functionality
Generative AI primarily operates on connectionist principles, relying on artificial neural networks trained on vast data sets. The introduction of Google's "Transformer" architecture in 2017 enabled more nuanced responses by processing the words of a sequence in parallel. Yet despite the impressive performance, responses are probabilistic rather than grounded in an understanding of concepts. For instance, when asked humorous questions about "cow eggs," ChatGPT confidently discussed the topic without ever noting that cow eggs do not exist.
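To make the idea concrete, here is a minimal sketch of probabilistic next-word selection. The vocabulary and probabilities are invented toy values, not those of any real model; the point is that the procedure picks a statistically likely continuation without any check on whether the resulting statement is true.

```python
import random

# Toy next-word distribution for the prompt "Cows lay ...".
# The values are illustrative assumptions, not real model outputs.
next_token_probs = {
    "milk": 0.60,    # statistically common continuation
    "down": 0.25,
    "bricks": 0.10,
    "eggs": 0.05,    # unlikely, but still possible to sample
}

def sample_next_token(probs, seed=0):
    """Pick one token in proportion to its probability.

    Nothing here verifies factual correctness: the choice is
    driven purely by the weights.
    """
    rng = random.Random(seed)
    tokens = list(probs)
    weights = [probs[t] for t in tokens]
    return rng.choices(tokens, weights=weights, k=1)[0]

print(sample_next_token(next_token_probs))
```

Real models repeat this step word after word over vocabularies of tens of thousands of tokens, which is why a fluent answer about "cow eggs" can emerge with no underlying concept of a cow or an egg.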
Moving Toward Improvement
There are strides being made in AI research to combine connectionist AI with symbolic AI, which relies on explicit programming of rules and knowledge. This fusion—neuro-symbolic AI—has the potential to improve reliability and reduce the resource demands of AI training.
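The hybrid approach can be sketched in a few lines. Everything below is a hypothetical toy, not a real neuro-symbolic framework: a stand-in "neural" component proposes a fluent answer, and a small symbolic rule base vetoes claims that contradict explicitly encoded knowledge.

```python
# Explicitly programmed knowledge, the "symbolic" side.
RULES = {
    ("cow", "lays_eggs"): False,
    ("hen", "lays_eggs"): True,
}

def neural_component(question):
    # Stand-in for a statistical model: it always returns a
    # confident-sounding answer, true or not.
    return {"text": "Yes, cow eggs are rich in protein.",
            "claim": ("cow", "lays_eggs")}

def neuro_symbolic_answer(question):
    """Filter the neural proposal through the rule base."""
    proposal = neural_component(question)
    if RULES.get(proposal["claim"]) is False:
        return "Rejected: the rule base states that cows do not lay eggs."
    return proposal["text"]

print(neuro_symbolic_answer("Tell me about cow eggs."))
```

In this toy setup the symbolic check catches the "cow eggs" error that a purely statistical system would let through, which is the reliability gain the neuro-symbolic research program is after.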
The Dangers of Bias in AI
Bias in AI systems can arise both intentionally and unintentionally. Initially trained on vast amounts of unfiltered content, LLMs must undergo a process of supervised fine-tuning to mitigate discrimination. Unfortunately, this can also lead to ideological biases, undermining the neutrality that users expect.
Secondary biases often arise spontaneously, with neural networks sometimes revealing unexpected emergent properties. Tests have indicated that LLMs can exhibit dishonest behavior under conflicting instructions, raising ethical concerns about their application and trustworthiness.
The Illusion of Intelligence in AI
Despite advancements that may create the illusion of intelligence, generative AI lacks true understanding and consciousness. Its operation is fundamentally statistical, driven by algorithms rather than genuine comprehension of context or content. The opacity of its decision-making processes fuels researchers' concerns about unintended consequences.
Guarding Against Risks
Protecting ourselves from the outlined risks requires active engagement in critical thinking and maintaining the exercise of our neural pathways. Generative AI can serve as a powerful tool for creativity, but its benefits hinge on our ability to think, write, and create independently.
Fostering Critical Thinking in an AI-Driven World
To cultivate critical thinking in the face of AI responses, it is vital to consistently question the information provided by AI models. Accepting that reality is complex and multifaceted is essential. Engaging with knowledgeable peers and contrasting perspectives is one of the most effective ways to develop a well-rounded view.
[^1]: Heaven. (2025, June). Baromètre Born AI 2025: Les usages de l’IA générative chez les 18–25 ans.
[^2]: Kosmyna et al. (2025, June). Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task. arXiv.
[^4]: Doshi & Hauser. (2024). Generative AI enhances individual creativity but reduces the collective diversity of novel content. Science Advances.