
When a Chatbot Feels More Human Than Your Medical Provider and What We Can Do About It

Writer: Dr. Stephanie Shelburne

I realize the title of this article might be a little provocative but stay with me. I was listening to a podcast the other day in which the speaker stated that the most necessary skill of our time for business success is radical empathy. I couldn’t agree more. So, in my head I nodded, finished listening and moved on with my day. A little bit later that same day, as I was reviewing some research for a project I’m working on, I came across a piece that stopped me.


A 2025 systematic review and meta-analysis published in the British Medical Bulletin, led by researchers at the Universities of Nottingham and Leicester, analyzed 15 studies comparing the empathy of AI chatbots with that of human healthcare professionals. Thirteen of those fifteen studies rated AI higher. The pooled data showed that a randomly selected AI response had a 73% probability of being rated more empathic than a physician's. Searching further, I found the landmark 2023 study by Ayers and colleagues in JAMA Internal Medicine, which sharpened the picture: only 4.6% of physician responses were rated empathic, compared to 45.1% for ChatGPT.

Now my mind perked up, not just because of all the empathy talk already floating around in the cultural sphere, but because of the placement of AI as the leading source of it. Wait...What?


Let’s think this through. I’m aware that there is a kind of quiet disappointment many of us carry out of medical appointments. It can even feel something like deflation. It definitely feels like frustration. You go to your appointment, you prepare, you wait, sometimes for weeks, and then when you are in, it’s for all of twelve to fifteen minutes. You end up leaving with a prescription and very little sense that anyone actually heard you. The medical provider was more than likely very kind or pleasant, but still, something essential is missing. Something you can’t quite name.  


I know this experience from both sides. I have been the patient in the room, navigating a medical system that, despite its best intentions, often could not meet me where I actually was. And I have been the practitioner, working within that same system, feeling the structural pull against the kind of presence I knew my patients and clients needed. That double vantage point is what makes this conversation feel so urgent to me, and so worth having.


For me, and for many of the people I have spoken with about AI, the reflex upon reading this finding is alarm: proof that we are surrendering something precious to the machines. But I want to suggest a different frame. This finding is not primarily about AI. It is a mirror held up to what has gone missing from the human encounter itself, and what it reflects is worth looking at clearly. In an earlier piece I argued that the upgrade AI actually requires of us is not a new skill set but for us, as humans, to become more coherent organisms. The empathy data is one specific instance of that larger truth, and a particularly clinical one.


Let's be honest about why AI feels empathic. The responses are longer, they acknowledge feelings before jumping to solutions, and they do not glance at a clock. In a system where providers spend nearly two hours on documentation for every one hour with patients, even the simulation of being heard registers in the body as relief. When you are perpetually thirsty, even a photograph of water looks appealing.


But the turn that matters most, and the one lost in most headlines about this research, is this: what feels like empathy in text is not what empathy actually does in the body. Every study in the Howcroft meta-analysis evaluated language, words rated by observers for warmth and acknowledgment, and none of them measured what happened inside the patient. None of them tracked heart rate, cortisol, or autonomic state, the physiological events that determine whether a person can actually heal. That is not a minor methodological gap. It is the entire question.


Stephen Porges's Polyvagal Theory gives us the language for what real clinical empathy does. The human nervous system is constantly scanning the environment for safety or danger through a process Porges called neuroception, a detection system that operates below conscious awareness, registering cues before the thinking mind has any opinion about them. When a regulated, attuned clinician enters the room, their voice tone, facial expression, and the steadiness of their presence transmit information directly to the patient's autonomic nervous system. A clinician whose own ventral vagal complex is online can activate the patient's, shifting them out of threat response and into the physiological state where disclosure, trust, and healing become possible. Safety is not only a feeling. It is a biological transmission between two living systems.


And the transmission is wider than the autonomic alone. What moves between a regulated clinician and a patient registers across the whole of a person, the physical system settling, the mental field clearing, the emotional system lowering its guard, the soul-level recognition of being met rather than processed, and underneath all of it the quieter sense of being held inside a larger coherence than oneself. Polyvagal theory names the physiological floor beautifully. The clinical encounter lives on every floor at once.


The clinical outcomes data bears this out in ways that are genuinely difficult to overstate. A 2024 cohort study in JAMA Network Open followed 1,470 adults with chronic low back pain across 12 months and found that patients of highly empathic physicians had significantly better outcomes in pain intensity, function, and quality of life, outcomes that exceeded those of opioid therapy and lumbar spine surgery. What produced those results was not word choice. It was what happened between two nervous systems in the same room. A 2008 study on autonomic failure drove the mechanism deeper still, finding that patients with disrupted autonomic function showed measurably impaired empathy capacity, confirming that empathy is not a social attitude or a communication skill but a physiological event that requires a functioning, regulated nervous system in the person offering it.


This is what makes physician burnout not simply a workforce problem but a clinical crisis hiding in plain sight. AMA data shows that more than four in ten providers report experiencing at least one significant symptom of burnout, and a burned-out provider is not simply tired. They are a dysregulated nervous system attempting to deliver care, and a dysregulated nervous system cannot transmit what it cannot access. The biological cue of safety that a patient's system needs to shift out of threat response is simply not available in that encounter, and this is not a moral failure on the part of the physician. It is the predictable outcome of a system that has stripped away the conditions that make genuine healing presence possible.


Most medical providers entered medicine because they care deeply, and I say that as someone who has worked alongside them and felt the weight of what the system asks of them. Patients, biologically wired across millions of years of evolution to seek co-regulation from other humans, are left looking for it wherever they can find it, including in a chatbot. The tragedy is not that AI is seductive. The tragedy is that the system has created such a deficit of genuine regulated presence that a language model can fill the gap.


This is where bioenergetic medicine has something essential to contribute, something the conventional AI-and-empathy conversation almost entirely misses. Bioenergetic coherence is not a supplement to clinical skill. It is the substrate that makes clinical empathy physiologically possible in the first place. A practitioner whose own nervous system is regulated and present can transmit safety and create the conditions for genuine co-regulation to occur. A practitioner running on depletion cannot do this, regardless of how much they care or how skilled they are in every other dimension of their work. And a patient whose system is chronically dysregulated cannot fully receive care, even when something real is being offered. The quality of any healing encounter is always a function of the coherence both people bring into the room.


Safety is not the destination. Safety is what makes the destination possible. When two regulated nervous systems actually meet, something beyond the relief of no-longer-under-threat becomes available, the patient's own coherence coming online, the capacity to disclose what has never been said aloud, the unmistakable sense of being fully alive inside one's own life. That is the threshold AI cannot cross. It is not a question of what words can be generated. It is a question of what two living systems can do together in a room, and whether anyone in the encounter has the coherence to receive it when it happens.


This is why NESBEM exists: to train practitioners, providers, and leaders who understand that their own coherence is not incidental to their clinical effectiveness but is its very substrate. As AI grows more fluent in the language of empathy, what becomes more valuable, not less, is a present, regulated, attuned human being who can meet another living system with genuine safety, and then, together, cross into what only two coherent humans can make possible.


For clinicians and leaders reading this, the place to start is not with a new technique. It is with the honest question of whether the system you are working inside has left you with enough regulated nervous system of your own to give. Your coherence is not a professional nicety. It is the medicine.



Dr. Stephanie Shelburne is Executive Director of The New England School of Bioenergetic Medicine (NESBEM) and creator of Your Sacred Metabolism®.

