The healthcare industry has fallen in love with the idea that generative AI can solve difficult human problems simply by talking at the humans. Many health system strategy documents now include a line about deploying chatbots for mental health, treatment adherence, or behavior change, as if cognitive reframing is something that can be downloaded from the app store. I understand the optimism. Chatbots are cheap, always available, and speak with the sort of reassuring certainty that suggests they’ve never once had to navigate a prior authorization maze. But an uncomfortable truth has been emerging from research in socially assistive robotics: people enjoy talking with AI, yet that enjoyment rarely translates into measurable clinical improvement.
A recent study from the University of Southern California makes this point unmistakably. Researchers used the same large language model to power both a chatbot and a small, soft, tabletop robot. College students with anxiety spent two weeks interacting with one or the other. Both experiences were engaging. Both felt conversational. But only the students who worked with the physical robot showed a reduction in psychiatric distress. Identical words, identical model, identical cognitive behavioral prompts: entirely different outcomes. For those of us in healthcare technology, this should feel like a splash of cold water. We may be investing heavily in the wrong interface.
Why embodiment changes everything
The lesson is both surprising and incredibly obvious once you sit with it: the human brain treats physical presence differently than it treats glowing shapes on a screen. We evolved to respond to embodied beings. Our social wiring is older than language, older than tools, older than digital health strategy decks. When something occupies the same physical space we do – when it moves, breathes, gestures, or simply sits with us – our neurobiology responds in ways that flat screens cannot replicate. The medium is not just the message. In healthcare, the medium is the medicine.
It’s worth pausing here because this is the part where digital health traditionally doubles down on software and ignores human behavior. The industry has spent more than a decade churning out mental health apps, CBT bots, adherence coaches, digital therapeutics, virtual companions, and every other flavor of “behavior change delivered through a beautifully designed interface.” The promise remains the same: scale, convenience, lower costs, and anytime accessibility. The results, however, have been consistently underwhelming. Engagement falls off quickly. Adherence drops sharply. Sustained behavior change rarely appears outside the clinical trial slide deck.
Why? Because behavior change is not a content-delivery problem; it is an emotional-experience problem. People don’t struggle with anxiety or diabetes or rehab exercises because they are missing the right paragraph of text. They struggle because doing hard things alone is deeply, intrinsically difficult. The USC team captured this well: humans need accountability, presence, encouragement, and social cues to persist when something feels uncomfortable. Motivation is not downloaded; it’s co-created.
A different kind of robot
This brings us back to the robots, which are intentionally low-tech and even a little whimsical. They are soft, crocheted, lightweight creatures powered by elastic bands and small motors. They do not look like humans or dogs or Star Wars extras. They look like something a thoughtful grandparent might knit, then quietly imbue with just enough motion and expression to make you believe it’s rooting for you. Their power lies not in complexity but in physicality: they exist in the room with you. And your brain knows it.
The implications for healthcare are enormous. Imagine stroke rehabilitation where a small, safe, low-cost robot helps patients persevere through frustrating exercises. Picture pediatric therapy sessions where a playful physical presence helps children on the autism spectrum practice social engagement. Consider memory-care units where residents with dementia interact with a companion that is neither overwhelming nor sterile, but simply present. And in a home-based care model, where loneliness, anxiety, and inactivity are persistent threats, the idea of a bedside or tabletop robot that provides encouragement and structure is far more plausible than sending more apps to an already crowded smartphone.
Clinical leaders often talk about “meeting patients where they are.” Increasingly, patients are not in clinics or hospitals. They’re at home, often alone, navigating chronic illness or emotional distress with a mixture of bravery and fatigue. Chatbots can talk to them. Robots can sit with them. The difference is not subtle.
The investment problem no one wants to talk about
Of course, this is the point where traditional healthcare strategists start getting nervous. Physical embodiment sounds messy. It requires manufacturing, distribution, maintenance, and new regulatory frameworks. It doesn’t fit neatly into an IT budget line item. More importantly, it challenges the assumption that digital equals scalable and physical equals expensive. But the robots in these studies cost less than many medical devices sitting unused in supply closets. And compared with the cost of untreated anxiety, poor rehab adherence, or unnecessary readmissions, the economics start to look refreshingly rational.
There’s another tension worth addressing. Investors are pouring billions into humanoid robots: towering, high-risk, high-cost machines that promise to do everything from warehouse labor to patient transport. These innovations are impressive but have little to do with the emotional scaffolding humans need to learn, heal, or change their behavior. Meanwhile, the quiet yet clinically meaningful category of socially assistive robotics receives a fraction of the attention. This imbalance mirrors a familiar pattern in health systems: chasing shiny objects while overlooking the deceptively simple interventions that patients actually find meaningful.
Where physician leaders go from here
What would it look like for healthcare leaders to take the embodied-AI literature seriously? It would mean challenging innovation teams to widen their aperture beyond chatbots and workflow automation. It would mean recognizing that the biggest opportunity in AI-supported care may be emotional design, not just generative intelligence. It would mean building care models that combine digital intelligence with physical presence, especially for populations where motivation and loneliness are clinical barriers, not just moral ones. And it would certainly mean asking different questions: not “What can we automate?” but “What kind of presence helps people heal?”
The truth is that healthcare already understands the importance of presence. Physicians know that the 30 seconds spent sitting instead of standing changes the perceived quality of a visit. Nurses know that touch conveys trust in ways chart messages never will. Behavioral health clinicians know that therapy is not powerful because of what is said, but because of who is saying it, how they are saying it, and where the interaction occurs. Embodied AI doesn’t replace this relational wisdom. It simply extends it into new settings where humans cannot always be.
So, as health systems consider their next wave of AI investments, it may be time to temper the enthusiasm for purely digital solutions. Generative AI will absolutely transform healthcare, but it cannot overcome the biology of human social needs through text alone. Thinking otherwise is like assuming we can solve clinician burnout with pizza parties and e-learnings.
If a soft, $500 crocheted robot can outperform a multimillion-dollar chatbot in reducing anxiety, perhaps the future of AI in healthcare will not be found in the cloud or the app store, but sitting quietly on a nightstand, breathing, listening, encouraging, and offering what technology so often forgets: presence.