When my wife's Instagram warning turned into a wake-up call: My response to "Your Brain on ChatGPT"

My wife's Instagram warning about AI making us dumber led me to MIT research showing that ChatGPT users engage their brains less. The data is stark, but the solution isn't abandoning AI; it's using it better.

Last week, my wife shared an Instagram story about AI making us dumber. Then I saw it again from a mutual friend, warning against too much LLM use. I chose not to comment then because I couldn't give it the treatment it deserved. Plus, I knew my wife was continuing her campaign against my ChatGPT habit.

This is my response to that.

A recent MIT study, "Your Brain on ChatGPT," puts data behind what many of us already suspect: AI assistants might be making us more efficient but less capable thinkers. The researchers used EEG to measure brain activity while participants wrote essays over four months, and the results are worth paying attention to.

The study compared three groups: brain-only writers, search engine users, and ChatGPT users. The brain-only group showed the highest neural connectivity, the kind that indicates deep cognitive engagement. Search engine users showed less. ChatGPT users showed the least. The tool that did the most cognitive work produced the least brain work from users.

The immediate consequences were telling. When asked to quote a sentence from essays they'd just written, 83% of the LLM group couldn't do it correctly; the other groups had near-perfect recall. The LLM didn't just erode ownership of the work; it also produced generic, homogeneous text. These effects appeared immediately, even in first-time users.

What makes this study particularly compelling is the fourth session, where researchers had LLM users try brain-only writing and brain-only users try ChatGPT. The LLM-to-brain participants showed weaker neural connectivity and "under-engagement of alpha and beta networks." Meanwhile, brain-to-LLM participants demonstrated higher memory recall and re-engagement of brain regions. The researchers found LLM users performed worse than brain-only users "at all levels: neural, linguistic, scoring."

The flaws don't invalidate the findings

The study has obvious limitations: a small sample, elite Boston-area university students, artificial time constraints. This isn't how professionals actually use these tools, where iteration and deeper engagement are possible.

But the conclusions feel right. They give data to something many of us sense. While the experimental setup might be imperfect, it points to a real risk: we're outsourcing our thinking, and it's costing us.

This isn't another calculator debate (but we've been here before)

We've actually faced this exact dilemma before. Multiple times.

When calculators became widespread in the 1980s, research showed remarkably similar patterns to what we're seeing with LLMs today. Students who used calculators heavily showed decreased mental math fluency, and later brain-imaging studies revealed reduced activation in arithmetic-related regions along with declining working-memory engagement for numerical operations.

The same thing happened with phone numbers. In the pre-mobile era, people routinely memorized 50+ phone numbers; today, most of us can barely recall five without our phones. Studies have found reduced hippocampal engagement in memory tasks among heavy smartphone users.

Google created another shift. Research by Sparrow and colleagues on "Google effects on memory" found that people show reduced recall for information they expect to be able to access later. We developed "transactive memory": remembering where to find information rather than the information itself. Brain scans showed reduced encoding activity when people expected external access.

So yes, we've redistributed our mental load before. We offloaded arithmetic to calculators, spatial memory to GPS, factual recall to search engines. Each time, we told ourselves we were freeing our minds for higher-order thinking. Clay Shirky's "Cognitive Surplus" made the same promise: that freed-up time and attention would flow into creativity and collaboration.

But this time is genuinely different, and we need to be more careful.

A calculator handles one discrete function: computation. Google retrieves stored information. An LLM, when used for writing, takes over the entire complex process of synthesis, structuring, and creation. Unlike basic arithmetic or phone number recall, essay writing involves higher-order cognitive skills (argumentation, synthesis, creativity) that are fundamental to critical thinking itself.

The MIT study's EEG data supports this distinction. It doesn't show surgical removal of a low-level task. It shows a systemic power-down of cognitive networks responsible for deep thought. As the researchers noted, "brain connectivity systematically scaled down with the amount of external support."

The evidence suggests we're not seeing the expected reallocation to higher-order thinking that happened with previous tools. Instead, we experience what researchers call "metacognitive laziness." People don't reinvest freed cognitive resources into more sophisticated tasks. During the study, participants' LLM use degraded from thoughtful collaboration to simple copy-pasting. Without deliberate effort, we naturally slide toward passive consumption.

But here's the thing: not all is lost. We can learn from those previous transitions. Each time we offloaded cognitive functions, we eventually developed new skills to compensate. The key is being intentional about it this time.

The cognitive debt spiral

The MIT researchers chose their terminology carefully. They call this "accumulation of cognitive debt." We're not simply evolving or declining. We're experiencing cognitive redistribution. We're trading core competencies in reasoning and synthesis for new, tool-dependent skills in AI supervision. This creates a dangerous feedback loop:

  1. We use AI to bypass critical thinking
  2. Our ability to think critically weakens
  3. We become less able to evaluate AI output
  4. We grow more reliant on the AI
  5. The cycle repeats, debt accumulates

The central paradox: the skills needed to thrive in an AI world (synthesis, critical evaluation, systems thinking) are exactly what current usage patterns seem to erode. The brain-only group in the study showed the highest alpha-band connectivity, associated with creative ideation. LLM users showed reduced divergent thinking patterns and what appeared to be a "passive consumption" mode.
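
To make the compounding concrete, here's a deliberately crude toy model of that loop. It's a sketch with made-up numbers, not anything from the study; the point is the shape of the curve: because weakened evaluation feeds back into heavier reliance, the debt compounds instead of accumulating linearly.

```python
# Toy model of the cognitive debt loop above -- illustrative only, not from the MIT study.
# "skill" = capacity for critical evaluation; "reliance" = share of work offloaded to AI.
# Both the starting values and the rates are invented for illustration.
skill, reliance = 1.0, 0.3

for cycle in range(1, 11):
    skill *= 1 - 0.15 * reliance                       # steps 1-2: bypassed skill weakens
    reliance = min(1.0, reliance + 0.2 * (1 - skill))  # steps 3-4: weaker evaluation invites more reliance
    print(f"cycle {cycle:2d}: skill={skill:.2f}  reliance={reliance:.2f}")
```

Run it and the per-cycle drop in skill grows as reliance ratchets up. That nonlinearity is what "debt" captures better than mere "decline."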

A path forward: cognitive cross-training

This isn't a call to abandon these powerful tools. It's a call for new cognitive hygiene. We can't afford to be passive consumers of AI-generated content. We must become active, mindful collaborators.

The solution requires consciously developing what I call cognitive cross-training: structured approaches to maintaining and building our thinking muscles while learning to work with AI effectively.

Protected cognitive spaces. Just as we train our bodies at the gym, we must deliberately schedule time to think, write, and create without our powerful tools. This isn't nostalgia. It's necessity. The study suggests that unassisted thinking engages neural networks that power down when we constantly offload to AI.

Cognitive load management. We need to become expert supervisors of our AI assistants, constantly auditing output for bias and hallucinations. But more importantly, we need to learn when to offload tasks and when to engage deeply. The key is intentional choice rather than default reliance.

Resistance capacities. This means deliberately strengthening rather than bypassing memory, maintaining attention without tool interruption, and building cognitive reserve through unassisted practice. Think of it as building immunity against over-dependence.

Verification skills. We need to develop source consciousness: tracking whether ideas originated with us or with the AI. The study found that LLM users reported a weakened sense of ownership over their essays, often unable to quote work they had just produced. We must also learn hallucination detection and bias recognition for both human- and AI-generated content.

What's ahead: new cognitive skills for the AI age

The future requires a new framework of cognitive abilities. We're not just preserving old skills. We're developing entirely new ones:

Meta-cognitive supervision. The study notes LLM users shifted from generating content to supervising AI-generated content. This supervision skill becomes critical: knowing how to extract valuable output, combine multiple AI perspectives, and maintain quality control.

Synthesis across hybrid sources. We need to learn how to combine and critically evaluate outputs from multiple AI systems, integrate them with human insight, and create something genuinely new. This requires the very synthesis skills that AI use tends to weaken.

Human-AI collaboration patterns. Understanding when to use AI versus when to rely on human cognition isn't intuitive. It requires developing new mental models for partnership rather than replacement.

Authenticity detection and creation. As AI-generated content becomes indistinguishable from human work, the ability to create and recognize genuinely human insights becomes a competitive advantage.

The challenge isn't the tool itself. It's our relationship with it. We must shift from outsourcing to augmentation, using AI not to replace our thinking, but to sharpen it. This requires integration protocols: structured alternation between assisted and unassisted work, cognitive warm-ups before complex tasks, and reflection practices to analyze our thinking patterns.

The fundamental tension remains: the skills we most need to develop are precisely those that AI tool use appears to degrade. But conscious, deliberate choices about cognitive development can help us evolve new capacities rather than simply decline.

My wife might be onto something. But the solution isn't to abandon these tools. It's to learn how to use them while staying fully human.