The growing influence of artificial intelligence (AI) on human cognition reflects humanity's increasing reliance on automation. Automation has historically allowed societies to develop more complex economic and social structures. Much of technological progress has been driven by general-purpose technologies (GPTs), which are defined by their "significant impact on aggregate productivity growth" (Crafts 2021). Each generation of GPTs has expanded the boundary of what counts as automatable labor, redirecting human effort toward higher-level tasks. AI is the next major GPT. Like its predecessors, AI increases productivity across a wide range of industries; unlike them, it extends automation to a vastly broader range of cognitive tasks. As a result, many industries are rapidly adopting AI to capture its productivity gains. As humans increasingly rely on AI, how will our cognitive capacities change? This paper explores the impact of AI on human cognition, the limits of that impact, and the policies that could mitigate its potential consequences.
Section I: Current Effects of AI
Neuroplasticity is defined as "the ability to adapt… to experiences, learning, and environmental stimuli." Grassini (2024) used environmental variation as an example of how exposure to varied stimuli supports brain health: participants in diverse spatial and visual environments had stronger hippocampi, a brain region that supports memory formation, exploratory thinking, and pattern integration, while people in monotonous environments showed less activity in their hippocampal neural circuits. This suggests that repetitive environments stimulate the same narrow set of neural circuits, weakening the hippocampus. Arguably, the "environment" Grassini studies is not limited to the physical world but extends to mental contexts such as thinking patterns. If so, exclusive dependence on a single kind of mental context is itself a form of reduced environmental variation. One example is the use of generative AI tools as a cognitive aid: the complex process of ideation is replaced by prompting an AI chatbot, a comparatively predictable task. Because these neural capacities are required for producing novel insights, overreliance on AI tools could lessen neuroplasticity and thereby reduce our capacity for innovative thinking (Xu 2024).
El Sayary et al. (2025) define digital neuroplasticity as the phenomenon whereby prolonged digital tool usage reshapes neural pathways. It shifts cognitive resources away from the task systems in the brain responsible for contextual integration and toward the stimulus-response (S-R) system, which repeats a behavior that has been rewarded in the presence of a specific cue. This neural response promotes technological lock-in: a self-perpetuating cycle in which users engage in fulfilling work less frequently, producing inferior outputs and a decline in productivity.
Korade et al. (2024) found that an overreliance on heuristics, or mental shortcuts, eventually degrades active thinking processes and memory, making it harder to identify intellectual opportunities and implement ideas. An oft-studied heuristic is the Internet: people are more likely to remember the search query needed to find information than the details of the information itself, because they expect the information to remain "at their fingertips" (Sparrow et al. 2011). Online search engines, in other words, are habitually used as external memory systems. Dahmani et al. (2020) found that offloading memory to digital tools leads to over-engagement of the S-R learning system, which is correlated with reduced volume in the hippocampus, the region involved in episodic and relational memory. McDonald et al. (2006) found that the hippocampus normally inhibits the S-R system, invoking contextual memory to keep it from taking over too quickly.
When the S-R system becomes dominant, habits form faster, but memory becomes weaker. AI chatbots such as ChatGPT, which many see as the successor to the Internet as an external memory system, can be viewed as a novel, superpowered heuristic. As individuals grow accustomed to using AI across a variety of applications, they may habitually engage their stimulus-response learning system with less self-referential, hippocampal processing. They may therefore fact-check AI outputs less frequently, especially when they perceive those outputs to be reliable and authoritative.
Multitasking involves the simultaneous performance of two or more tasks, forcing the brain to adapt to rapid changes in stimuli and process a large influx of ideas. Madore et al. (2019) found that the brain structures responsible for task-switching, executive functioning, and cognitive control are weakened by continuous multitasking: participants who multitasked regularly performed worse than those who did not on tasks requiring sustained attention and long-term memory. Switching between multiple stimuli continuously engages the dorsal and ventral networks, which handle task-switching; as a result, the volume of the frontoparietal control network (FPCN), which handles executive functioning and cognitive control, decreases. The degradation of the FPCN leads us to develop a skewed view of our work quality, while overactive task networks make us feel more efficient, and the combination yields a large volume of lower-quality output.
Recommender systems on platforms like Instagram and TikTok work by grouping users with shared behavioral patterns and opinions. As users are shown content that aligns with their existing views, they begin to believe their worldviews are widely held and obviously correct (the false consensus effect) (Lavie-Driver 2025). Social media algorithms further validate users' viewpoints by capitalizing on their belief that the platforms "understand, validate, and support core defining features of the self" (Taylor et al. 2022). The result is cognitive rigidity, as users become less versed in the nuance underlying different perspectives. The neurological structure encouraging this is the basolateral amygdala (BLA), which regulates emotional states and memories by balancing populations of neurons encoding positive versus negative valence (Kim 2016). The interplay between these oppositional populations could explain why the amygdala is continuously engaged when users oscillate between content strongly aligned and strongly misaligned with their views, and why social media users are drawn to provocative content. As recommender systems continually ingest user data to train on, they create increasingly rigid user categories, increasing the prevalence of political echo chambers online.
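To make the grouping mechanism concrete, here is a minimal sketch of user-based collaborative filtering, the family of techniques this paragraph describes. The toy interaction matrix and the recommend helper are illustrative assumptions, not any platform's actual algorithm:

```python
# A minimal sketch of user-grouping collaborative filtering; toy data only.
import numpy as np

# Rows = users, columns = posts; 1 = the user engaged with the post.
interactions = np.array([
    [1, 1, 0, 0, 1],   # user 0
    [1, 1, 1, 0, 0],   # user 1 (behaviorally similar to user 0)
    [0, 0, 1, 1, 0],   # user 2 (a different behavioral cluster)
])

def recommend(user, k=1):
    """Recommend the post most engaged-with by the user's nearest peers."""
    sims = interactions @ interactions[user]   # shared-engagement counts
    sims[user] = -1                            # exclude the user themself
    peers = np.argsort(sims)[-k:]              # k most similar users
    scores = interactions[peers].sum(axis=0)   # peer engagement per post
    scores[interactions[user] == 1] = -1       # hide already-seen posts
    return int(np.argmax(scores))

print(recommend(0))  # user 0 is shown post 2, taken from peer user 1
```

Because every recommendation is drawn only from behaviorally similar peers, each round of engagement data makes the peer groups more self-confirming, which is the feedback loop behind the echo-chamber effect described above.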
Section II: Future Effects
As artificial intelligence absorbs a growing share of human cognitive labor, the core question shifts from which capacities it can imitate to which cognitive capacities will remain exclusively human. Because new algorithms are continuously being developed, it is difficult to predict their long-term cognitive effects, and this uncertainty has led to starkly opposed views on how AI will change human cognition. Dong et al. (2024) argue that future developments in AI will originate from advances in neuroscience, and therefore view some of AI's current limitations and adverse effects as temporary. Some proponents of AI argue that one way to circumvent the cognitive losses discussed in Section I is to integrate generative AI models into brain-computer interfaces (BCIs). BCIs are AI systems that exploit the similarity between the brain's electrochemical signals and the vector representations used in machine-learning algorithms (Nicolas-Alonso et al. 2012): they recognize patterns in brain signals and translate them into commands for connected devices. A generative AI model coupled with a BCI could theoretically allow seamless integration of human cognition and machine processing via "lossless communication."
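The translation step that Nicolas-Alonso et al. describe is, at its core, a pattern-recognition pipeline: extract features from a brain signal, then classify them into a command. The sketch below illustrates that loop on synthetic data; the sampling rate, the band-power features, the COMMANDS list, and the sine-wave "trials" are all stand-in assumptions, since real BCIs use EEG hardware and far richer signal processing:

```python
# A hedged sketch of the BCI loop: signal -> features -> classifier -> command.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
FS = 256                      # sampling rate in Hz (assumed)
COMMANDS = ["left", "right"]  # device commands the classifier maps onto

def extract_band_power(signal, fs=FS):
    """Mean spectral power in the alpha (8-12 Hz) and beta (13-30 Hz) bands."""
    freqs = np.fft.rfftfreq(signal.size, d=1 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    alpha = power[(freqs >= 8) & (freqs <= 12)].mean()
    beta = power[(freqs >= 13) & (freqs <= 30)].mean()
    return [alpha, beta]

def synth_trial(label):
    """Fake one-second 'EEG' trial whose dominant rhythm depends on the label."""
    t = np.arange(FS) / FS
    hz = 10 if label == 0 else 20   # alpha-like vs beta-like rhythm
    return np.sin(2 * np.pi * hz * t) + 0.5 * rng.standard_normal(FS)

labels = rng.integers(0, 2, size=200)
X = np.array([extract_band_power(synth_trial(y)) for y in labels])

clf = LogisticRegression().fit(X, labels)  # learn the pattern-to-command map
trial = synth_trial(1)
print(COMMANDS[clf.predict([extract_band_power(trial)])[0]])  # likely "right"
```

Even this toy version makes the limitation clear: the pipeline can only translate signals whose structure has already been characterized and labeled, which is precisely the constraint the next paragraph raises.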
However, this view supposes that it is possible to understand and simulate every aspect of human cognition. Dong et al. (2024) posit that “in terms of AI, the proportion of human intelligence that can be simulated or extended is just the tip of the iceberg above the water, only including the conventional, logical, explicit, and universal consciousness and intelligence.” The extent of human consciousness is yet to be understood, much less simulated. Some argue that these cognitive aspects are impossible to encapsulate with language and reason, therefore delineating a boundary between the human self and the reality of machines.
Joseph (2025) refers to the convergence of human and artificial cognition as the algorithmic self: a self that is no longer autonomous but an assemblage of various digital influences. They argue that an increasing reliance on digital media for self-determination diminishes narrative agency. AI is increasingly used to measure aspects of subjective experience (for example, via introspection and the measurement of physical and emotional states). This is risky because "[AI] does not capture the mess, the contradiction, that gives human narratives meaning. If we recount our digital lives in well-tuned, optimized chunks, it can flatten the richness of what we experience and prevent psychological integration" (Joseph 2025).
Some AI proponents also argue that generative AI tools can outperform humans at divergent thinking, which involves "generating multiple potential solutions to a problem" (Hubert 2024). On this view, AI could be used as a "springboard" to expand the range of perspectives an individual considers; for instance, it can brainstorm alternative causal explanations for events. Hubert's definition of creativity, however, glosses over an equally if not more important aspect of creativity: the ability to frame a problem and generate solutions that are uniquely novel in application or structure. Breakthroughs often emerge from reformulating a domain rather than from producing more solutions within an existing frame. The generative models used in Hubert's study are provided with a fixed problem scope, and their outputs, like those of all generative AI, are recombinations of centuries of original human ideas. Inventing a machine capable of problem framing, which would require an ability to predict "historical small events," is a completely separate venture from building an LLM. Arthur (1989) defines "historical small events" as those outside the ex-ante knowledge of the observer: beyond the resolving power of their mental model of a situation. A machine capable of learning from and adapting to events outside its ex-ante knowledge would be capable of true creativity.
A study by Doshi et al. (2024) suggests that generative AI, as it stands, is not capable of genuine divergent thinking. The authors studied the effect of AI assistance on the originality of fictional stories and found that stories written with AI were rated as more novel and useful; however, the AI-assisted stories were, on average, more similar to one another than those written without AI. Although AI use can improve an individual writer's creative output, it can constrain the overall trajectory of societal progress by homogenizing collective output and limiting the appearance of significant outliers.
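One way to see how such a homogenization effect could be quantified is to compare the average pairwise similarity of texts within each group. The sketch below does this with TF-IDF vectors and cosine similarity on toy story premises; it is an illustrative assumption about the general approach, not Doshi et al.'s actual methodology or data:

```python
# A sketch of measuring within-group homogeneity of texts; toy examples only.
from itertools import combinations

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def mean_pairwise_similarity(stories):
    """Average cosine similarity over every pair of stories in a group."""
    vectors = TfidfVectorizer().fit_transform(stories)
    sims = cosine_similarity(vectors)
    pairs = list(combinations(range(len(stories)), 2))
    return sum(sims[i, j] for i, j in pairs) / len(pairs)

ai_assisted = [  # hypothetical premises that converge on one template
    "a lone astronaut discovers a hidden garden on a dying station",
    "a lone explorer discovers a hidden garden on a dying ship",
]
unaided = [  # hypothetical premises with little shared vocabulary
    "grandmother's radio picks up broadcasts from the year she was born",
    "a cartographer maps coastlines that rearrange themselves at night",
]

# Higher within-group similarity means more homogeneous collective output.
print(mean_pairwise_similarity(ai_assisted) > mean_pairwise_similarity(unaided))
```

The key point carries over from the toy example to the study: each AI-assisted story may score well on its own, yet the group as a whole clusters around a narrower region of idea space.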
The present limitations of the algorithmic self stem from the fact that current neuroscience cannot verbalize or quantify the subtle neurological processes underlying cognition. Yet the integration of AI into fields that touch on these processes is occurring faster than the neuroscientific community can decipher them. If this trend continues, there is a significant risk of "narrative flattening," in which the most profound, non-quantifiable aspects of human cognition, such as those underlying introspection or creativity, are lost to future generations. Historically, a nuanced awareness of the self has been important for actualization, fulfillment, and spiritual success. A self-actualized humanity drove innovation because intrinsic motivation led people to invest more time and energy in their crafts; self-understanding was likewise important for cultivating a developed, independent worldview that shaped the decisions individuals made.
Section III: Countering AI
Sherman et al. (2017) describe the essence of art as "its communicative nature, its capacity to encourage personal growth, and its ability to reveal deep aspects of the human condition." Engagement with the arts is a dialogic practice in which socio-epistemic information, which is highly individual and human, is exchanged; this is why art is pivotal for developing imaginative and creative skills. Interpreting an artwork requires understanding the inner subjective worlds of both viewer and artist, each of whom has a distinctive personality shaped by genetics and unique experiences. This process cultivates what Sherman describes as "embodied understanding," which strengthens prefrontal cortical activity and in turn inspires a more intuitive understanding of complex situations. These capacities are essential for identifying the social and technological problems that become opportunities for innovation.
Art also supports empathy-related processing and helps us develop a personal ethic, contextualized by our subjective understanding of social and moral relations. A developed personal ethic is necessary for, among other things, treating the lives of individuals as inherently valuable. Generative AI models (LLMs) are currently incapable of moral decision-making by construction: they identify patterns in language based on statistical weights. Even humans fall victim to slippery-slope ethics (Coller 2019), so it is probable that AI tools, which lack self-awareness, moral intuition, and embodied experience (Xiaowei 2025), would do the same. Yet AI chatbots can appear mature and logical, which encourages users to rely on them for ethical decision-making. Individuals who do so will ultimately struggle to make their own complex ethical and professional judgments.
The neurological structures hypothesized to support engagement with art are the prefrontal and cingulate cortices (Sherman 2017). The dorsomedial prefrontal cortex (DMPFC) evaluates self-referential stimuli, while the orbitomedial prefrontal cortex (OMPFC) metabolizes them and continually connects them to emotional state. The posterior cingulate cortex (PCC), meanwhile, integrates self-related representations and supports empathy. The same regions through which people understand others are thus the ones tied to self-concept, and they are instrumental for developing the ego and the distinction between ego and society. Su et al. (2023) found that similar regions are vulnerable to deterioration under prolonged exposure to social media algorithms: the dorsolateral prefrontal cortex (dlPFC) and dorsal anterior cingulate cortex (dACC), systems governing self-control and awareness of one's inner state, exhibited deactivation during continuous short-video input. Time spent scrolling through a social media feed thus actively undermines the very areas of the brain that art strengthens. While it remains unclear whether continuous deactivation permanently impairs these regions, the principle of neuroplasticity suggests that it could.
The neurological link between AI and art supports the idea that they are epistemic opposites. AI systems eschew nuance in favor of consolidating and simplifying information through pattern recognition; AI is inherently reductive and quantitative because it assigns value to information using statistical weights. Art, by contrast, depends on subjective human experience that resists verbalization, let alone quantification. The preceding sections of this paper detail how AI reduces activation of the exact cognitive functions that art strengthens. Preference algorithms encourage cognitive rigidity, a state in which individuals grow increasingly unaware of themselves, whereas art promotes cognitive flexibility. Algorithmic responsiveness likewise shifts trust away from individual judgment and values toward an overreliance on social media algorithms and, by extension, the digital world; art, in complete contrast, demands questioning of the self and of others, leading to a heightened awareness of an individual's narrative reality.
Section I provided evidence of a societal cognitive decline driven by the deep assimilation of technology into daily life. This particular instance of technological lock-in is encouraged by international and market competition, and the field of machine learning is rapidly outpacing neuroscience. If societies integrate AI into human cognition before fully understanding it, humanity could reach a ceiling beyond which progress becomes impossible. The majority could neuroplastically adapt to offloading higher-order tasks to AI, losing the ability to generate truly creative ideas; this would limit the likelihood that any given person produces a societally impactful idea, restricting innovation on a societal scale to the highly gifted and the elite. These higher-order abilities matter because they drove humanity's explosive progress, differentiating us from our closest genetic relatives and allowing us to populate the Earth. If we allocate enough of our thinking to AI, we could find that the very capacities that once elevated us above other species have atrophied. What remains could be a significantly more reactive mode of existence, in which decisions are optimized for the short term and driven by external stimuli rather than by the reflective thought that once placed us in command of our natural environment. Individuals could lose narrative agency and become unable to make important judgments about their purpose. Whatever the upper-bound cognitive benefits of AI turn out to be, the long-term consequences outlined in this paper could outweigh them unless individuals and societies take precautions.