The Upgrade AI Actually Requires
- Dr. Stephanie Shelburne

- Apr 14

I was at dinner with friends the other night and the conversation, as it inevitably does these days, turned to AI. What struck me wasn’t the topic itself. It was the temperature in the room. Within minutes, our small group had divided into two unmistakable camps, and the feeling on both sides was intense. On one end of the table, genuine excitement: AI is going to revolutionize medicine, education, business, creativity, everything. The people who learn to use it will thrive. On the other end, something closer to grief: AI is stealing creative work, eroding attention, making us dependent on machines for things we used to do with our own minds. The people who trust it will lose themselves.
Both sides were passionate. Both sides had evidence. And I noticed something familiar in the way the conversation moved, because I’ve seen it in every room, every conference, every comment thread for the past two years. When it comes to AI, we have collectively settled into an argument between salvation and ruin, and there doesn’t seem to be much space between them.
Our dinner table is the world’s dinner table
That small group conversation is a microcosm of something much larger. The research landscape in 2026 carries the same polarity, and the evidence on both sides is real.
On the concern side: social psychologist Jonathan Haidt presented data at MIT this spring showing simultaneous declines in cognition, educational attainment, and happiness coinciding with smartphone and AI adoption, and argued that the human capacity for sustained attention is being structurally dismantled. Then there is the peer-reviewed study out of CUNY that found a highly significant association between AI tool usage and reduced critical thinking, with cognitive offloading confirmed as the mechanism. A five-year longitudinal study in Frontiers in Public Health linked screen time to measurable increases in anxiety and documented a bidirectional cycle: screens disrupt sleep, poor sleep impairs regulation, and impaired regulation drives more screen exposure. And in March, the World Health Organization formally declared generative AI use a public mental health concern, calling for mental health impact assessments on AI products across the board. These are not fringe voices. These are major institutions sounding an alarm, and, when you read the data, rightfully so.
But then there is the promise side: AI is accelerating drug discovery, expanding access to education in underserved communities, enabling people with disabilities to communicate and create in ways that were previously impossible, and giving small organizations tools that were once available only to those with massive budgets. The technology is extraordinary. Its potential to amplify human capability is genuine, and dismissing that potential is its own kind of blindness.
So both camps at my dinner table had a valid point. But as someone who is keeping a close eye on this phenomenon, I would argue that both sides were incomplete, because neither side was talking about the variable that actually determines which outcome you will end up with.
The variable nobody is naming
In my mind, here is one of the findings that reframes the entire conversation. In an MIT Media Lab study tracking brain connectivity during AI-assisted writing, researchers found that participants who were deeply engaged with the material before using AI showed no cognitive degradation. None. Their frontal-parietal networks stayed intact, their memory encoding remained strong, and they actually used AI more effectively than every other group in the study.
The participants who outsourced their cognitive effort from the start? They lost nearly half of their measurable brain connectivity and could not recall what they had just written.
Same technology. Same interface. Same prompts. Radically different outcomes.
The variable was not the AI. The variable was the state of the human using it.
This is what changes the conversation. AI is not inherently destructive or inherently beneficial. It is a profoundly powerful tool whose impact is determined almost entirely by the coherence of the person wielding it. A nervous system that is engaged, regulated, and metabolically resourced meets AI and gains an extraordinary collaborator. A nervous system that is fragmented, depleted, and already overwhelmed meets the same technology and loses ground it may not get back.
Neuroscientist Anil Seth, in his Berggruen Prize essay this year, made the case from the other direction: consciousness itself is not a computational event. It is entangled with metabolism, with biological self-regulation, with the living processes that make a body a body. If Seth is right, and the weight of neuroscience increasingly suggests he is, then the quality of your engagement with any tool, AI included, is inseparable from the quality of your biological aliveness. You cannot think clearly from a dysregulated system. You cannot create meaningfully from a depleted one. The instrument matters.
This didn’t start with AI
And here is the part of the conversation that almost nobody is having. The problem did not begin when ChatGPT launched. It has been building for at least two decades.
Over the past twenty years, we have systematically devalued the very capacities that make beneficial AI use possible. We have gutted arts education, defunded programs that develop emotional intelligence and embodied cognition, replaced exploratory learning with standardized testing, and told an entire generation that the only skills worth investing in were the ones that could be measured on a rubric. We have traded depth for efficiency. We have prioritized what could be automated and deprioritized what could not: critical thinking, somatic awareness, relational attunement, creativity rooted in lived experience, the capacity to sit with complexity without reaching for a quick answer.
You could call it a counter-renaissance. A slow, largely unexamined retreat from the full spectrum of human intelligence in favor of a narrower and narrower definition of competence. And now, just as we arrive at a technological threshold that demands every dimension of human capability, we find that we have thinned the very soil it needs to grow in.
The MIT data does not just tell us that AI can harm cognition. It tells us that humans who bring their full cognitive and physiological resources to the table are protected from that harm and empowered by the technology. But what happens when an entire culture has spent two decades underdeveloping those resources? What happens when the soft skills, the embodied knowing, the capacity for deep engagement that the research says is protective have been systematically undervalued?
You get exactly the crisis we are watching unfold. Not because AI is inherently dangerous, but because we have produced a population that is, on average, less equipped to use it well.
The upgrade that matters
This is what we are about at The New England School of Bioenergetic Medicine. Not choosing a side in the AI debate. Not technophobia and not techno-utopianism. We are focused on the variable that both sides keep overlooking: the human being at the center of the equation.
We are committed to training practitioners, leaders, and educators to understand that human intelligence is not a single faculty housed in the brain. It is a whole-system phenomenon. Your Physical System, metabolically resourced and capable of sustained energy. Your Mental System, able to hold complexity, think critically, and engage with material before outsourcing it. Your Emotional System, regulated enough to feel without being hijacked, to stay relational in a world that is increasingly transactional. Your Soul System, anchored in purpose and meaning that no algorithm can supply. And your Cosmic Bridge, your connection to the living systems and rhythms that sustain all of it.
When these systems are coherent, when the whole organism is optimized and engaged, AI becomes what it should be: an extraordinary tool in the hands of a fully alive human being. When they are fragmented or depleted, no amount of technological sophistication compensates for what is missing.
The most future-ready thing you can do in 2026 is not learn another platform. It is become a more coherent organism. Bioenergetic optimization is the intelligence that will change the world.
That is not a retreat from technology. It is the upgrade the technology actually requires.
Dr. Stephanie Shelburne is Executive Director and founder of The New England School of Bioenergetic Medicine (NESBEM) and creator of Sacred Metabolism®.
References
Kosmyna, N. et al. (2026). EEG study on AI-assisted writing and brain connectivity. MIT Media Lab. (In peer review.)
Seth, A. (January 2026). “Only What Is Alive Can Be Conscious.” Noema Magazine / Berggruen Prize Essay.
World Health Organization. (March 2026). Position statement on generative AI and public mental health.
Haidt, J. (March 2026). Compton Lecture, MIT. Reported by MIT News.
Macaulay/CUNY. (March 2026). Cognitive offloading and critical thinking. Societies.
Frontiers in Public Health. (February 2026). Screen time, sleep, and anxiety: five-year longitudinal study.
Global Wellness Summit. (January 2026). “The Rise of Neurowellness.” Future of Wellness: 2026 Trends.