Every year, health system leaders review clinician engagement and burnout survey results with a familiar mix of hope and resignation. The numbers are rarely great, but they’re often “not terrible,” which can feel like a small win in an environment defined by constant pressure. A few slides later, leadership moves on, comforted by the sense that the organization at least understands where things stand. The problem is that this sense of clarity may be entirely illusory. Increasingly, the clinicians most at risk (those most burned out, most disengaged, most likely to leave) may not be responding at all.
A recent article in the Journal of Healthcare Management examining nonresponse bias in healthcare workforce surveys brings this uncomfortable reality into focus. The authors show that when leaders assume survey respondents represent the entire workforce, they are often wrong. Nonresponse is not random, and it is not benign. In fact, the people who opt out of these surveys frequently look meaningfully different from those who respond, with consequences that extend well beyond data quality.
What nonresponse actually signals
At a high level, the study demonstrates that clinicians who do not respond to engagement and well-being surveys tend to have worse objective outcomes than those who do. Nonrespondents were more likely to leave their organizations and, in some cases, showed lower productivity than their responding peers. These are not abstract findings. They map directly to the issues that keep executives up at night: retention, staffing instability, and the escalating cost of replacing experienced clinicians.
It is tempting to dismiss this as a methodological nuance, something to be handled by survey designers or analysts. That would be a mistake. Nonresponse bias is not primarily a statistical problem; it is a leadership problem. When clinicians stop responding, they are sending a signal about trust, engagement, and expectations. Often, that signal reflects a belief that participating will not lead to meaningful change.
The authors frame this dynamic using social exchange theory, which translates well into the realities of healthcare work. Clinicians engage when they believe the exchange is fair, when giving their time and perspective results in action. In organizations where surveys have historically led to little visible improvement, silence becomes a rational response. Low response rates, then, are not apathy. They are evidence of learned skepticism.
The silent cohort leaders can least afford to ignore
One of the most striking aspects of the study is who tends to go quiet. Advanced practice providers, early- and mid-career clinicians, and those not nearing retirement showed particularly concerning nonresponse patterns. These groups are not peripheral to healthcare delivery. They are often among the most mobile, the most operationally essential, and the most expensive to replace.
This leads to a counterintuitive but important insight: nonresponse itself may be a leading indicator of attrition risk. A clinician who is exhausted but still hopeful may complete a survey. A clinician who has disengaged emotionally or begun planning an exit may not. In that context, a rise in nonresponse rates is not just missing data; it is an early warning signal that organizational strain may be reaching a tipping point.
For leaders accustomed to focusing on average scores and percentile rankings, this requires a shift in perspective. Silence may tell you more about workforce stability than the answers you do receive.
Why surveys alone are no longer enough
The implications of this research extend beyond survey design. They challenge the way healthcare organizations measure clinician experience altogether. In nearly every other domain (think quality, safety, or finance), leaders rely on multiple data sources to understand complex systems. Clinician well-being, by contrast, is often reduced to a handful of survey items interpreted in isolation.
The authors suggest augmenting surveys with objective data already available within health systems, such as HR and EHR signals. Patterns in PTO usage, after-hours EHR activity, inbox burden, and turnover trends can provide essential context, especially when survey participation is low. Used appropriately, these data do not replace the clinician's voice; they help interpret its absence.
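To make the idea concrete, here is a minimal sketch of how such signals might be combined: survey response flags joined with objective workforce data, summarized by role, to surface cohorts where silence and strain co-occur. All column names (role, responded, after_hours_ehr_min, left_org) and thresholds are illustrative assumptions, not fields from the study or any particular system.

```python
# Hypothetical sketch: join survey response flags with objective workforce
# signals and flag cohorts where nonresponse and after-hours burden overlap.
import pandas as pd

# Illustrative roster: one row per clinician, merged from survey, HR, and EHR feeds.
roster = pd.DataFrame({
    "role": ["APP", "APP", "MD", "MD", "RN", "RN"],
    "responded": [0, 0, 1, 0, 1, 1],                    # completed the engagement survey?
    "after_hours_ehr_min": [95, 120, 40, 80, 30, 25],   # avg after-hours EHR minutes per day
    "left_org": [1, 0, 0, 1, 0, 0],                     # departed within 12 months
})

# Summarize each role: how often the cohort went silent, and what the objective signals show.
cohort = roster.groupby("role").agg(
    nonresponse_rate=("responded", lambda s: 1 - s.mean()),
    avg_after_hours=("after_hours_ehr_min", "mean"),
    turnover_rate=("left_org", "mean"),
)

# Flag cohorts where silence coincides with heavy after-hours work,
# the "silent strain" pattern described above. Thresholds are arbitrary examples.
cohort["watch"] = (cohort["nonresponse_rate"] > 0.5) & (cohort["avg_after_hours"] > 60)
print(cohort.sort_values("nonresponse_rate", ascending=False))
```

The point of the sketch is not the specific thresholds but the design choice: nonresponse is treated as a signal in its own right, interpreted alongside objective context rather than discarded as missing data.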
This approach aligns with broader trends in clinical informatics and operations, but it comes with risks. Without careful governance, transparency, and trust, objective data can feel like surveillance rather than support. If clinicians believe these signals will be used punitively, disengagement will deepen rather than improve.
One size does not fit all
Another key takeaway from the study is the danger of assuming uniformity across roles and career stages. Different clinicians experience work and organizational stress in fundamentally different ways. Applying the same survey instrument, on the same schedule, with the same expectations across all roles virtually guarantees uneven participation and uneven insight.
For executives, this means rethinking engagement measurement as a designed experience rather than an administrative task. Role-specific approaches, tailored communication, and visible follow-through matter. Treating surveys as static tools rather than evolving products is a recipe for declining relevance.
Listening requires more than asking
The most important lesson from this research is deceptively simple. Listening is not just about asking questions; it is about paying attention to who is speaking, who is not, and why. In a healthcare system facing worsening burnout and increasing workforce instability, ignoring silence is no longer a neutral choice.
If the clinicians most likely to leave are not responding, the problem is not that the dashboard is incomplete. The problem is that it may be actively misleading. For physician leaders and healthcare executives, learning to hear what isn’t being said may be one of the most important leadership skills in the years ahead.