Designing for Health: Dr. Adam Wright [Podcast]

Without open feedback from end users, any implementation will fall short of its full potential. The implementation of clinical decision support tools is no exception. Hands-on interaction with the intended audience helps avoid designing for a theoretical user rather than the real humans who will use the tool. Designing with people in mind without falling prey to design-by-committee is challenging, but it is possible, and it can transform decision support into something clinicians not only use but enjoy.

Vanderbilt University Medical Center’s Dr. Adam Wright speaks with Nordic Chief Medical Officer Dr. Craig Joseph and Head of Thought Leadership Dr. Jerome Pagani. He shares what it’s like to work with a self-developed electronic health record (EHR), how the history of clinical decision support shapes how he thinks about it today, and the importance of working with user feedback before, during, and after a project’s launch, as well as his thoughts on starting fast and iterating often.




In Network's Designing for Health podcast feature is available on all major podcasting platforms, including Apple Podcasts, Amazon Music, Google, iHeart, Pandora, Spotify, Stitcher, and more. Search for 'In Network' and subscribe for updates on future episodes. Like what you hear? Make sure to leave a 5-star rating and write a review to help others find the podcast.

Show Notes:

[00:00] Intros

[01:04] Dr. Adam Wright’s background

[04:46] Working with a self-developed EHR

[08:37] Principles for how people and technology can work together well 

[11:24] Working with user feedback

[15:04] What clinical decision support used to look like

[17:34] Non-interruptive decision support

[21:51] The necessity of education and engagement in choice architecture

[28:34] Starting fast and iterating often

[31:48] Sustaining projects after the excitement is over

[35:56] Things so well-designed that they bring Dr. Wright joy



Dr. Jerome Pagani: Dr. Adam Wright, thanks so much for being here today. It's nice to finally have a real doctor on the podcast.

Dr. Adam Wright: Absolutely.

Dr. Jerome Pagani: Actually you're our second. We had Dr. Archana Tedone on our last episode.

Dr. Craig Joseph: So, our record of having real doctors, which I'm translating, apparently, for myself and our listening audience, means a Ph.D. We have two consecutive real doctors. Am I understanding that correctly, Jerome?

Dr. Jerome Pagani: That is correct. Your words, not mine.

Dr. Craig Joseph: Excellent. So, Adam as a real doctor I wanted to confirm what I think I heard from you is that ever since you were a young child when you first got those letter blocks and you were pounding blocks together when you were a toddler, the letters Ph.D. were your favorite and you always knew from being a very young child that you wanted to be a researcher, an academician, is that true? Did I hear that correctly?

Dr. Adam Wright: It was actually HL7. So my first creation of these letter blocks was a valid HL7 2.4 message.

Dr. Craig Joseph: That is amazing, and that was probably before HL7 was even founded as an organization, so.

Dr. Adam Wright: Yeah, I’m not that old.

Dr. Craig Joseph: Prescient. Prescient. That is, that is amazing, so. Alright, well maybe you've always been destined for this life. Yeah, you went to Stanford for undergrad and majored in English.

Dr. Adam Wright: I did not. Math and computer science. The only true combination of majors.

Dr. Craig Joseph: Apparently, my research has not been great here. So, all right so math and computer science at Stanford and then you always knew you wanted to be a biomedical informatician, is that?

Dr. Adam Wright: I will confess that I had actually never heard of biomedical informatics. So, as you said, I studied math and computer science, and I somehow did know that I wanted to go on to graduate school and study a little bit more, and I thought for a long time, you know, should I do a Ph.D. in math or a Ph.D. in computer science? But those both seemed kind of long and hard and theoretical, so I felt like I wanted to turn to an application of some kind, and I had friends that went into finance or became actuaries, but I was really curious about applications of math and computer science in healthcare or in biology. I had actually never heard of medical informatics; I thought I was going to become an epidemiologist, and then, really largely through happenstance, I heard of the field of biomedical informatics. I met this guy, Mark Musen, who's a professor at Stanford in biomedical informatics, and he told me about the field, and I was like, wow, that's what I want to do, that is fascinating. It just struck me that compared to quantitative finance there was still some kind of low-hanging fruit, some easy problems to solve that needed hard work. It seemed like the problems were interesting. I think I found medicine and physiology and pathophysiology just to be kind of conceptually interesting. And then, you know, most important, there's kind of this pro-social aspect to it. You know, maybe if I built a good computer application for healthcare, it would help us provide better care to people, as opposed to just, you know, making more money, which was appealing to me.

Dr. Craig Joseph: That’s awesome. So you, how did you decide on OHSU for your Ph.D.?

Dr. Adam Wright: So a few reasons. I wanted to work with this guy named Dean Sittig, who I think some of you guys know, and so he was a professor there. I also had family in Portland and really like Portland, and it just seemed like an interesting place to spend a few years doing a Ph.D., so that's how I wound up there.

Dr. Craig Joseph: So awesome, and then what is that, is that a year? I don't really know how Ph.D. programs work. Is that a year and a half? How long?

Dr. Adam Wright: It varies. I was there for 3 years, which is on the faster side.

Dr. Jerome Pagani: Wow.

Dr. Adam Wright: Yeah, somewhere five to seven is more typical but I kind of made this deal with my committee, I sort of spelled out this thing that I knew I could do but they didn't think I could do. And then I managed to do it. So we had all agreed that I could graduate if I did it.

Dr. Craig Joseph: That's great. Alright. So after you got your, I guess the best degree, the real doctor degree.

Dr. Adam Wright: The real doctor. Yes

Dr. Craig Joseph: Yeah, you moved from one coast to the other coast and were at the Brigham, and one thing in our conversations that you noted earlier was that at the Brigham, you had a self-developed electronic health record to work with.

Dr. Adam Wright: Yeah, wild.

Dr. Craig Joseph: Yes, tell us about that, and I'm interested to hear how that went, because while you were there you moved to one from a major vendor, and I'd love to hear kind of like the pre- and post-analysis of ...

Dr. Adam Wright: Yeah, so this is an interesting thing that's been going on in our field of biomedical informatics, right? So many of the initial EHRs and a lot of the early research in electronic health records and decision support was done at places that developed their own EHR software. So the Brigham, a big teaching hospital in Boston, was one of the earliest adopters of especially computerized physician order entry, and at the time there was no real good commercial system that you could purchase that would do CPOE with clinical decision support, and so they had made a decision, as had many other kind of leading hospitals, to build their own electronic health record software. So that was really a fascinating time because we had sort of this notion of physicians and nurses and pharmacists and informaticians who were programmers, you know, you would go to your shift in the ED and then you would pull up the source code for the EHR and make some changes to it, and there was a lot that was good about that, right? If you have the source code to the EHR you can do whatever you want, you can make any change you want. Your kind of horizons are just limited by your imagination, and it was great. It was an exciting time. We could, you know, build new modalities of decision support. It's also challenging, right? Because anything new we did, we had to program. You know, we didn't just have the ability to kind of configure a new order set. We had to sort of pull up our source code editor and make those changes, and it really worked well for some time. But I think, you know, over time, the commercial EHR vendors were building better products. But the thing that probably really kind of tipped us over the edge, and this happened a number of places, was Meaningful Use.
So suddenly the federal government was giving financial incentives for using an EHR, and that EHR had to be certified, and so we did get our EHR at the Brigham certified, and we did at Vanderbilt here where I work now too, but increasingly almost all of our development effort was focused on meeting certification and regulatory requirements, and we could do less and less innovation. And so the problem was, if there was a new round of certification requirements, we had to build all of those just for the benefit of one hospital, whereas a place like Epic, you know, has several hundred customers, and they could sort of spread the cost of these certification requirements over many different customers. So, kind of the writing was on the wall. It was becoming increasingly clear that we were going to need to transition from a self-developed EHR to a commercial EHR. And over the past decade, so many of the leading institutions, you know, Columbia, and Partners, so Mass General and the Brigham, the Beth Israel was in the process of a transition, Vanderbilt made a transition from a self-built EHR to Epic, the VA is trying to move, with some challenges, as you guys probably know, from their self-built CPRS system to a commercial system. And there are downsides. You know, I think now we need to sort of work in some ways within the framework, we use Epic here, that Epic has given us. But I've been impressed. I was pessimistic about how much we would be able to configure or customize Epic, and I actually was amazed to discover that we have a lot of control over the system, right? We can build things using Epic’s tools. We can create extensions in code. We can even integrate external applications using web services and SMART on FHIR. So, it's been good.

Dr. Jerome Pagani: Adam, do you have some overarching principles for how people and technology should work together in a way that supports that interaction?

Dr. Adam Wright: Yeah, that's a great question, and you know, sometimes people ask, are you just a medical computer scientist, and I say no, I'm a biomedical informatician, and I actually think that one of the hallmarks of being an informatician is sort of appreciating kind of the people, process, and technology intersections that really make these things work, and I've gotten better at this over the course of my career. I remember early in my career I would develop these software tools, thinking, this is brilliant, I've applied mathematics and computer science, I've developed the perfect software, and then I would sit in my office and no one would use it, or they wouldn't use it the way that I thought they were going to, and I eventually kind of got so fed up with it that I walked over to the hospital to try to pick a fight and see if I could figure out why they weren't using my software, and there were people there, they were sick. They were bleeding. They were having surgery. Like, all kinds of wild stuff was going on in the hospital, and somehow that occasionally took precedence over using this brilliant software tool that I had developed. And so the more I learned that, the more I realized that I always did better if I spent more time in the hospital, more time talking to doctors and nurses and understanding what they did, and then trying to kind of fit a technology solution into their processes rather than kind of just having my own theory of what their process ought to be. I'll tell you one specific story. We had built this tool to help people prescribe inhalers to kids with asthma, and it was so cool. It was really accurate. We validated it. It was really evidence-based and up to the latest guidelines. And I discovered that people were not prescribing the inhalers I thought that they should use.
And so I went over to the clinic, and I realized that this doctor, she saw so many patients, she basically had this, like, entire hallway of exam rooms, and she would basically walk from kid to kid and, like, take away one inhaler, give them another inhaler, kind of switch things up. Never logged into the computer during the day. She would just write this, like, one- or two-letter code to herself with what she did to the kid, and then at the end of the day she would pull up her EHR and would document all the changes she had made to kids' inhalers over the course of the day, and then my system would pop up all these helpful suggestions, like, did you consider albuterol, what about a steroid inhaler, and it was like, these kids were long gone, man. They had gotten a new box of inhalers and they had been home for hours before she even logged into the computer. So it was my own hubris that made me think that she was going to use the computer in the exam room. No, no, not at all. And so once I figured that out, I realized that this needed a completely different approach to providing inhaler-related decision support.

Dr. Craig Joseph: Yeah, it is unfortunate when clinicians don't do what you think that they're going to do, and I've certainly been in those shoes having designed amazing workflows and then finding ways that people bypass those workflows. It's sad, but it does tell you where you stand. So, clearly, observation meeting with end users with the clinicians who are at the point of care is important.

And I would presume that when you do that, they tell you things like, hey, this would be so much better if this weren't in alphabetical order but in order with the most frequent things at the top, or this color is actually wrong, it doesn't attract my attention, and it would be great if it did. And sometimes I think those are key insights, oftentimes they're key insights, but sometimes they violate, you know, principles of design that you're trying to follow, and what they're really telling you is, hey, this would work for me much better, but that person may not represent everyone, and so how do you deal with those kinds of situations?

Dr. Adam Wright: So I think it's a great question, right? I think that some of us have this tendency to feel like, oh, there was a doctor involved in the design of this, therefore it'll be like clinically accurate and relevant and often that doctor is, you know, they didn't do residency or they're, you know, a left ankle specialist or something, and this is a primary care tool. So, I have found that involving multiple people is valuable and involving people that are actually doing the work is really valuable. We recently worked through a VTE prophylaxis or DVT prophylaxis tool that we had been developing over time and we had this sort of august group of hematologists and hospital medicine leadership and people from our quality and safety department. But what I realized, like, partway through the discussion was I actually said, like, has anyone here ever admitted a patient using Epic? Because this stuff shows up at the time of admission, and the answer was no, right? Who does that? The intern does that, maybe the nocturnist or the hospitalist does that. And so, what I found is that involving multiple people and involving, you know, people that are actually doing that work is really really important. I think though you need to be careful and this is I think part of what you're getting at, avoiding design by committee, right? You don't need to pull those people together and have them like, let's just use a whiteboard and just have them, you know, design, you know, the system by themselves. What you really want to do is kind of elicit their requirements and then apply some user design principles to design one or more prototypes and then show those prototypes, get feedback on those prototypes. We do a lot of what I think people call hallway usability testing, right? I'll mock up three different versions of a decision support tool. And I'll just get whoever is like at work today. I'll yell down the hall. 
It's like, can somebody come look at this and give me some feedback? I have found that people are much more likely to respond well to prototypes than they are to respond to, you know, a call to design the things themselves. I think it was Henry Ford who had this story where he built the Model T, and he said, “If I'd asked people what they wanted, they would have said, like, you know, a faster horse.” But he sort of had this insight that there was some alternative there.

Dr. Craig Joseph: Yeah, that's great and there, there are, you know, times where you do need a faster horse but there are other times where you need, you know, visionaries like Henry Ford or Steve Jobs to tell you what you want and then potentially to iterate, right? Because even though those folks were geniuses, there were times where they created tools that just didn't work for the masses and you had to kind of go back and I think that's one of the key things that I'm hearing from you and others who are involved in these kinds of decisions, like, there's no one answer. There's no one rule that fits them all.

Dr. Adam Wright: I agree. I mean, I fought mightily, I hung onto my BlackBerry forever, I said I'm always going to have a physical keyboard on my phone. I thought Steve Jobs was nuts to have a soft keyboard. He was right. But it took me a long time to figure that out, and if I had voted, you know, there would be an iPhone with a keyboard on it.

Dr. Jerome Pagani: So, you mentioned a little bit about how the EHR has changed over time and how your own approach has changed. What did clinical decision support used to look like?

Dr. Adam Wright: Yeah, I mean, I think it's evolved a lot, right? So some of the earliest decision support we had was table-based. So we would purchase a list of drug interactions from First Databank and we would sort of implement that, and I would say, you know, why was that the first thing we did? It's because it was the first place where there was a good knowledge base. You know, there wasn't a reliable knowledge base of all the facts of primary care or how to manage a surgical patient, but there was this database you could purchase of drug doses and drug interactions, and so I think that's where we started. And I think we needed that. I mean, obviously preventing, you know, harm from drug interactions or drug allergies is important and valuable. But I would say that, you know, over the last few years, there's obviously increasing interest in using machine learning in decision support. So, we have a lot of students especially that are trying to build predictive models and integrate those into decision support. I can tell you, though, that we have, I think, about 840 decision support tools here at Vanderbilt, and I think probably less than a dozen of them use machine learning. The rest of them use simple logic, decision trees or guidelines or flow charts. So, I wouldn't count out Boolean logic in the realm of decision support. I will say, we have gotten a little bit smarter about workflow. I have increasingly found that using defaults and intelligent presentation of information can be as or more effective than interruptive alerts. So, I think we kind of got a little carried away with interruptive alerts at some point, and this is not a new idea. If you look back to the Ten Commandments of decision support from David Bates and others, they were telling us to avoid stopping and changing directions.
But I have personally found that the kind of CDS that is focused on providing the right information at the right time, almost making it seamless to do the right thing, has been more effective. If you look at the future, I think we're increasingly seeing externalized decision support tools. So, there are standards like CDS Hooks that let you build CDS outside of the EHR and integrate it, which lets you bring in more complicated user experiences or more advanced models. I think that'll be a big focus. For me, the other thing that's been really important is just increasing the voice of the user. So, I have really tried to focus in my career on user feedback and understanding what's working well for people, and on better monitoring systems to make sure our CDS is achieving its goals. So, I think that that's the direction we're headed.
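For readers curious what an externalized CDS response looks like in practice, here is a minimal sketch. It is illustrative only, not code discussed in the episode: the response shape and field names (cards, summary, detail, indicator, source) follow the public CDS Hooks specification, while the function name and clinical text are invented for this example.

```python
# Hypothetical sketch: an external CDS service built against the CDS Hooks
# standard returns "cards" that the EHR renders, instead of the logic living
# inside the EHR itself. The clinical content below is made up.
def build_cds_hooks_response(summary, detail, indicator="info"):
    """Build a CDS Hooks card payload (indicator: info, warning, or critical)."""
    return {
        "cards": [
            {
                "summary": summary,      # short headline shown to the clinician
                "detail": detail,        # longer body text (markdown per the spec)
                "indicator": indicator,  # urgency level the EHR uses for display
                "source": {"label": "External CDS service (example)"},
            }
        ]
    }
```

Because the card travels over a web service, the EHR only needs to know how to render it; the model or guideline logic behind it can be arbitrarily sophisticated.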

Dr. Craig Joseph: I love that conversation about non-interruptive alerts. A study that I just can't get rid of was, and I'm not sure if you're familiar with this, from Children's Hospital of Philadelphia. They were trying to decrease the use of albuterol for patients with bronchiolitis, and so they did what I would have done, which is remove albuterol from the list of options in the order set. And so when you opened up the order set for bronchiolitis in the emergency department, you didn't see albuterol, and you know, you could still order it, but you had to scroll down to the bottom of the order set, put it in a box, and search for it, and it was a little bit onerous, right? It was a little bit difficult to not do the right thing, and what they found was people stopped using the order set.

Dr. Adam Wright: Yikes.

Dr. Craig Joseph: Right? Because folks said, well, this order set's clearly broken, and there's no easy way for me to give feedback, and so I'm just going to type all the stuff from that box. And the way they fixed this problem was by putting the albuterol, again, the medicine they don't want you to prescribe routinely, back in the order set with a little explainer, a little text box right above, saying, hey, you probably shouldn't use this, research has shown that it really isn't effective, and if you're going to use it, we would really appreciate you ordering a pre- and post-pulse ox so that we can help you think about evaluating its efficacy, right? And they achieved the same results of significantly decreased albuterol use by kind of pushing, to me, that was non-interruptive clinical decision support. It's just right there for you to see it. You can still do the thing. But, so you know, sometimes the logic, at least in my mind, gets turned upside down.

Dr. Adam Wright: Yeah, I agree. There's a subtle thing that I don't think people always think through, which is that there are some kinds of decision support where I'm helping you remember to do something you want to do. So, you know, you've decided that you want to give pneumococcal vaccination when it's appropriate and it's just hard to remember who's eligible, versus I'm using CDS to change your practice. You want to prescribe albuterol, you think it's great, and I'm trying to, against your will, kind of force you to not do it by harassing you or making a barrier or making it hard to do. And I really think the way you approach those two things is really important. I think you said probably the most important thing, which is CDS is not the primary tool for getting you to do something you don't want to do, right? Education, like that explainer you put in, having, you know, academic detailing or grand rounds where we talk about why we don't use albuterol, having the department chair give an explanation of why he'd prefer you don't use albuterol with bronchiolitis as much. Start with that and then build in some decision support to push people in that direction. I'll tell you a slightly more cheerful story about modifying order sets. So, this is work that David Rubins led at the Brigham.
We noticed that we were ordering much more telemetry than we wanted to order and much more telemetry than other hospitals ordered, and we discovered that the residents had created, like, a sort of customized order set that they had saved as kind of the residency admission order set, and it had telemetry checked off by default. And so the only small tweak we made was we left telemetry on the order set, but we took the checkbox away so that it was not ordered by default; you had to click something to order it. And a lot of residents actually said, like, “I thought I was supposed to order telemetry for all the patients, that the standard of care at the Brigham was every patient needed telemetry,” and that was not the standard of care and not the goal. And so we had a huge decrease in the amount of telemetry that was being ordered, to the point we were actually nervous, and so we looked to see if there were more kind of out-of-ICU cardiac arrests or if people were then ordering telemetry on the second day or something like that, and none of that was happening; people were doing a good job of ordering it when they needed to. I do think that people tend to look at these order sets sometimes as kind of normative, right? Like, if it's checked, that means I'm expected to do it, and if it's not checked, I'm not supposed to do it. So I think adding inline explanations of when you should or shouldn't do things is really helpful. I also find that people are really frustrated when, you know, something says, like, “If the patient is over 65, then order this; if they're under 65, then order that,” but it's just text, right? We should also then use some logic to check the thing that makes sense for the patient, and then if you think they're an especially spry 66-year-old or an especially dried-out-looking 50-year-old you could switch it, but defaulting them to the right one always makes people happier.

Dr. Jerome Pagani: So, I think you said something really fascinating there that I just want to touch on for a second, which is that choice architecture, which is one of the tools that we use to help engage people and encourage positive sorts of behaviors or discourage behaviors you don't want to see, is great, but it really needs to be combined with some kind of stakeholder engagement and education.

Dr. Adam Wright: Absolutely. Yeah, I think some of the most interesting work I've seen on this is from Jeff Linder, who studies respiratory infections. You guys are probably familiar with his work, but he gave a talk that I thought was so interesting. It was this idea from behavioral economics: if you take a wine shop and you organize it so you've got the red wines on the right side and the white wines on the left side, and you tell people to pick two wines, they'll pick one red and one white. But if you put the international wines on one side and the domestic wines on the other side, and you tell people to pick two wines, they'll get one international, one domestic. So he knew that people wanted to quote-unquote help patients that had upper respiratory infections, and the way they helped them was by prescribing inappropriate antibiotics, and so he created this order set that had so many medicines, right? Here are all your antitussive choices, here are all of your decongestant choices, here are all of your, you know, head pain choices, and here are all of your expectorant choices, and then you can really just load up the cart, you go through and click this and this and this and this, and I'd give you so many prescriptions, I'm really taking your cold seriously or something. But what I didn't do was prescribe you an antibiotic, and I thought it was just a beautiful example of choice architecture.

Dr. Craig Joseph: I'm trying to think of something witty to say, and I'm at a loss.

Dr. Jerome Pagani: We might be here a while. 

Dr. Craig Joseph: Yeah, sorry about that. One thing that I hear a lot of people discussing when it comes to design and clinical decision support is the idea that you're never really done. And I think a lot of healthcare systems and hospitals kind of have that attitude of, they get all those people in a room that you described earlier, all the smart people, some of whom actually write orders in the electronic health record and take care of patients. And they come up with the plan, and they institute the plan, and they watch it work for six months based on criteria that they've developed before they instituted the plan, and it's terrific.

Dr. Adam Wright: Yep.

Dr. Craig Joseph: And they think their job is done and the department chair says move on, move on to the next thing, and sometimes that's a problem.

Dr. Adam Wright: I think that’s a huge problem. So I'll tell you, we'll be vague here, but I've worked at a large health system, I'm familiar with a large health system, that had a policy that all the decision support had to be reviewed once a year. The CDS team approached a leader in that hospital and said, we're too busy, we can't review the CDS that often, we want to postpone or possibly switch to a two-year review period. And so that leader said, you know, well, are we still building new CDS? And they said, yeah, we have time for that, we just don't have time to maintain the existing CDS. And so that leader's response was, no, no, we don't have time to build new CDS if we don't have time to maintain the existing CDS. One of the biggest predictors of whether people are willing to accept new CDS is the experience they've had with old CDS. If you sort of let your old decision support rot, then your new CDS isn't going to work, because people become almost conditioned, when they see a new pop-up or a new alert, to just override it immediately because they know it's garbage. So, some things that I've found work well are monitoring, so having tools to look at how much your decision support fires over time and how much it's accepted over time. If you have an alert that normally fires 20 times a day and now it fires 2,000 times a day, or it fired 0 times a day for the last three days, something might be wrong and it's worth looking at that. And I actually got a grant at one point from the National Library of Medicine to do some of these monitoring things. But surprisingly, the most powerful tool that I found was actually user feedback. So it turns out that users want to tell us when CDS is working poorly and even sometimes when it's working well for them.
So here at Vanderbilt, and also at Partners in Boston and other places, we have put these little smiley faces in the corner of all of our decision support tools, a frowny face, a smiley face, and sort of a medium in-between face, and you can kind of basically upvote or downvote the CDS and leave comments. The thing that blows people's minds is, you know, I try to write back to their comments within a few hours, and so I got one over the weekend where we had this genomic decision support tool for APOL1 and it suggested a urine microalbumin screen, and the patient already had a urine microalbumin screen, and I looked at the chart and this doctor was right, we had made an error in the decision support, and so I wrote back to her within two hours and said, I'm sorry, we have a new urine microalbumin test, it's not being properly picked up by this alert, you were right, the alert was wrong, I fixed the alert, and please keep the feedback coming. And so the thing is, first off, I solved her problem and she told me about a problem, but I bet she'll be more likely to send me feedback in the future, and maybe a little more likely to read the alerts when I send them, and it's also just humanizing. I'm always amazed to learn that a lot of our users don't even realize that there are people at Vanderbilt that work on the EHR. They think that we bought Epic from Madison, Wisconsin, and it just is what it is, and the fact that there are 600 people at Vanderbilt working on it just blows their mind. And so, you know, people occasionally are grumbly in their feedback, like, I hate this, you're so dumb, like, you know, they may say a swear word or something, and then I write back to them, like, oh, hi, sounds like you're really frustrated, and they're very apologetic, but that's how you kind of win hearts and minds, I think, is by just responding to users. So, I think the secret weapon to making your CDS better is user feedback.
But I'm also a big believer in kind of reviewing guidelines over time and having good monitoring tools as well.

Dr. Craig Joseph: Yeah, so I am going to be talking with your undergraduate institution to see if we can get that math degree taken away, because what I’m hearing is that humans are important and it’s not just about the programming. The code can be great, but if you’re not getting that feedback and then acting on that feedback in a timely fashion … If you really want to change hearts and minds, and often we do, that’s the way you do it. You don’t do it by developing a new module that can do something people maybe haven’t even asked for yet.

Dr. Adam Wright: Well, don’t worry, because I am a sort of unfeeling automaton. I actually used mathematics to build what we call a sentiment analyzer: it reads the comments, finds the especially grumpy ones, and highlights them in yellow on the report. So I used mathematics to figure out what the users were really feeling. I think I should be able to keep my degree.
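A toy version of the sentiment pass Dr. Wright jokes about might look like the sketch below. The word list and threshold are invented for illustration; a production analyzer would use a trained model or a published sentiment lexicon rather than a hand-picked keyword set.

```python
# Hypothetical "grumpy" keyword list; any real system would use a proper
# sentiment model or lexicon instead of this invented set.
GRUMPY_WORDS = {"hate", "dumb", "useless", "annoying", "garbage", "wrong"}

def is_grumpy(comment, threshold=1):
    """Return True if a comment contains enough grumpy keywords to flag."""
    words = {w.strip(".,!?").lower() for w in comment.split()}
    return len(words & GRUMPY_WORDS) >= threshold

feedback = [
    "I hate this alert, it's so dumb",
    "Thank you, this helped me catch a missed screening",
]
# These are the comments a report would highlight in yellow.
flagged = [c for c in feedback if is_grumpy(c)]
```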

Dr. Craig Joseph: I am so glad you said that because I was halfway through the email.

Dr. Adam Wright: Okay, good.

Dr. Craig Joseph: So I'm going to delete that email now.

Dr. Adam Wright: Excellent.

Dr. Craig Joseph: Because, yeah, I do have powers, and I didn’t want to have to use them against you, but …

Dr. Adam Wright: Yeah, I understand.

Dr. Jerome Pagani: So Adam, how do you know when something you’ve created is ready for primetime, and what does primetime look like? Do you start at scale? Do you start with a pilot? You touched on that a little earlier, but if we could come back to it: when do you know it’s time to iterate, and whether you’re going to iterate on something or just sunset it?

Dr. Adam Wright: Yeah, so my own philosophy is maybe a little bit controversial, but I’m a big believer in getting things out as early as possible. Believe it or not, our CDS committee cares a lot about alert burden and reducing excessive firing of alerts, but I actually have a pretty low bar for putting out new decision support, as long as we convince ourselves, number one, that the thing is clinically appropriate, and number two, that we have some sponsorship from the people who are actually going to experience it. The classic example I use is when anesthesia comes and says, we want surgeons to do this thing; can you put in something new, like a hard stop, to make sure the surgeons fill out this form before we get to the operating room? We send them back and we say, go find some surgeons, get the head of surgery to come and co-present this with you, and then we’ll turn it on for you. So we need some clinical plausibility, and some agreement or consent from the alerted, or the supported, in the decision support context. And then the third and most important thing is a really clear evaluation plan. We say, we’re going to come up with this metric: we’re going to improve this outcome by at least 20%, or we’re going to improve documentation by 45%, or the alert’s acceptance rate has to be at least 50%. We come up with some actual measurements and some agreed benchmarks for those. And then, wherever possible, and I don’t always win this, I prefer to roll out new decision support in a randomized trial, and that can start small. We do a stepped-wedge thing where we pick one pod of one clinic and turn it on for them, and slowly work our way up. So I would actually say that I have a low bar for turning things on, but a high bar for evaluating them and keeping them on.
Our committee isn’t great at predicting which proposed CDS tool is actually going to work and which isn’t, but what we can do well is look at the data and then turn things off. I would say this has worked really well for us, with one exception, which is that sometimes things get this magical moniker, “regulatory,” attached to them: it’s a regulatory issue that we pull Foley catheters, it’s a regulatory issue that we go see patients who are on restraints. And it is a regulatory issue, don’t get me wrong, but we have used that as an excuse sometimes to keep decision support on that has poor evidence for its effectiveness, or even strong evidence for its ineffectiveness. So that’s the biggest boogeyman I’ve been trying to get rid of, this “it’s a regulatory thing.” If we really care about pulling Foley catheters on time and we have an alert that we know doesn’t work, we’re lying to ourselves, right? We’ve done something. What we should do is turn the alert off and say, we have nothing, y’all are on your own, and then go back to the drawing board and build something more effective. But that’s my take: start fast, evaluate quickly. We sometimes iterate multiple times in the same week. If an alert fires often enough, we can get feedback quickly. We just had an alert for atrial fibrillation that we turned off and on twice in basically a week-and-a-half-long period as we made little refinements to it and saw what was and wasn’t working. So that’s my philosophy: I think we should turn them on fast.
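The evaluation plan Dr. Wright describes, ship each tool with agreed metrics and benchmarks, then turn it off if it misses, can be sketched as a simple gate. The metric names and target numbers below are illustrative, echoing the examples from the conversation, not an actual committee policy.

```python
# Hypothetical benchmarks, modeled on the examples in the conversation:
# each new CDS tool ships with metrics and agreed targets.
BENCHMARKS = {
    "acceptance_rate": 0.50,    # alert acceptance rate of at least 50%
    "documentation_lift": 0.45, # documentation improved by at least 45%
}

def keep_alert(observed):
    """Keep the alert on only if every agreed benchmark is met."""
    return all(observed.get(metric, 0) >= target
               for metric, target in BENCHMARKS.items())

keep_alert({"acceptance_rate": 0.62, "documentation_lift": 0.50})  # meets both
keep_alert({"acceptance_rate": 0.30, "documentation_lift": 0.50})  # misses one
```

The design point is the low bar to turn on and the high bar to keep on: the gate runs after launch, against observed data, rather than as a pre-launch prediction.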

Dr. Jerome Pagani: That's great.

Dr. Craig Joseph: How do you sustain projects once the excitement’s over? And I’m going to reference something you just said, which to me seems like a brilliant idea. You have CDS that’s out there. You give people the ability to give you smiley faces, or thumbs up, thumbs down, and to write some free text, and I think in the literature those have been known as “cranky comments,” and …

Dr. Adam Wright: Yes, sir.

Dr. Craig Joseph: You said that you used math, and this is why I’m going to allow you to keep your bachelor’s degree. You use math to identify those cranky comments, to call them out, and to make it easier, and I would argue maybe even exciting, to get that weekly or monthly report for whoever’s looking at it. And that kind of keeps you going, because all the stuff I don’t really care about is not highlighted, and some of the things I really should care about are highlighted. Are there other tools or ideas that you have to keep people focused on the important things they were focused on six months ago?

Dr. Adam Wright: Yeah, this is a really interesting question, right? This is a training and teaching hospital, so we get a lot of excited trainees and fellows who come through with a specific idea, and they’re so passionate about it. They build it really diligently, it’s great, and they get people to use it, and then they graduate, or they move on to something else, or start another project. I think that energy is incredibly important, and we try to capture and harness it and point it in the right direction whenever we can. But there is also an almost slow-going kind of diligence that’s required to work on the other 840 alerts we have and make sure they’re working, and on our 2,000 order sets, to make sure that they’re good. And I have found, honestly, that there are just some people who have the temperament for it. You know, Atul Gawande wrote this piece about primary care. He’s a surgeon, and he was talking about the kind of power and beauty of going in, cutting out someone’s gland, and seeing their disease get better. And then he talked to his friend Asaf Bitton, who’s a primary care doctor and is slowly chipping away at people’s LDL and hypertension and getting them to quit smoking. Who’s really saving the patient: the surgeon who does this flashy thing, or the person who meets with the patient 12 times over five years and gets them to quit smoking and start taking an antihypertensive? So I think there are some people who are cut out to be primary care doctors and some who are cut out to be surgeons, and I have the same thing on my CDS team.
I was talking to one of my most diligent colleagues, and he said, look, over the last two years I’ve improved the acceptance rate for the medication alerts by 1%, 30 times. None of those changes was exciting or, you know, really that impactful on its own, but he just kept chipping another percent off, and now he’s gotten to 30%. It’s uncommon that I could make a single change to a decision support tool that improves it by 30%. So I think you need both kinds of people, and I actually think the people doing the slow work really benefit from feedback and data. I make a point, once a year, of looking at our firing rates and our acceptance rates, and I just say, look at what we’ve accomplished. It was slow, but we knocked these things off, and now we’re 20% better than we were a year ago. And you know, I’ve focused a lot on the cranky comments, but believe it or not, people actually click on decision support alerts and say, “Thank you, this helped me, I hadn’t thought of this, this was well-timed,” and we make sure those get right to the people who have been involved in the work. I also find that as people have been working remotely or from home, some have almost lost their connection to the clinicians, so we try to bring our team in as much as we can to round with people and talk to users. Honestly, just hearing a user say something like, I feel like Epic’s a little less annoying now than it was a year ago, that creates some excitement and energy to keep things going. So to me, that’s been the key: being able to see progress. You’re not Sisyphus. The rock isn’t just rolling back down the hill; you are slowly getting it to the top, it’s just taking you a little while.

Dr. Jerome Pagani: That was great. Adam, thanks so much. One question we like to ask everybody toward the end is to think about three things you interact with on a regular basis that are so well-designed that they bring you joy, and they can be things outside of healthcare.

Dr. Adam Wright: Absolutely, okay. So the first one is actually one that I interact with primarily with my kids. I have three kids and one more on the way, and we bought this thing that I thought was the dumbest thing I’ve ever purchased. It’s called a Nugget. It’s a play couch: basically a couple of pieces of rectangular foam and a couple of pieces of triangular foam, and you can build a couch out of it. My kids have so many toys, electronic toys and digital toys and stuff, but they just scream and fight with each other over who gets to take the Nugget and build a fort out of it. I don’t know how the Nugget people figured it out, but it’s just the perfect size and the perfect number of pieces. We bought another Nugget because there was so much fighting, so when the kids get really aggressive, Grace gets the red Nugget and Isaac gets the brown Nugget and they play alone. They have built forts. They’ve built towers. They built a ladder. Isaac built a booby trap this weekend. They’ve built a podium and given speeches. They’ve made stages to play on. It’s just genius. It’s like $300, ridiculously expensive for a piece of foam with some fabric on it, but it’s my kids’ favorite toy, and I should just throw the other toys away. Whoever designed that is a genius. The second one, since I live in the South, is the Chick-fil-A mobile app. People here really like Chick-fil-A, and so many restaurant apps kind of break down quickly, right? If you want to get a free glass of water, or you want to order on your phone but dine in, or you want no ketchup on your hamburger, they just don’t work. Chick-fil-A’s mobile app is beautiful.
Everything works correctly, it handles exceptions well, and it’s pleasant to use. The other day I got an alert that I had placed an order and had actually accumulated enough rewards to make this broccoli side dish I ordered free. It didn’t have to proactively tell me that; I probably would’ve just let my reward expire because I wasn’t paying attention, but I felt like they were helping me with that alert. I think that’s a beautiful thing. And then the third one, and I feel like these examples make me sound a little bit like a man of leisure, on a play couch, eating Chick-fil-A, is actually my hammock. I have this Brazilian hammock, and I don’t know how they made it, but it’s so comfortable. You sit in it and you just feel like you’re floating in the air. It’s exactly the right length. You get in it and you don’t even feel like it’s there; you’re just swinging in the breeze. It’s the one place I can stop thinking about things and just relax. So I would say my Nugget play couch, the Chick-fil-A mobile app, and my hammock. I should’ve picked a fitness-oriented thing or something, but those are my three.

Dr. Craig Joseph: Those are great, and as the parent of four children, I will counsel you later. We’ll do that offline, and I’ll give you advice, mostly tax advice.

Dr. Craig Joseph: Well, thank you so much. This was great. We really appreciate your learnings and your experience, and I do want to officially apologize for threatening to have one of your undergraduate degrees removed. That was a mistake, and I won’t repeat it.

Dr. Adam Wright: Thanks, Craig. It was a privilege to be here, I had a lot of fun, and I look forward to the next conversation.

Dr. Craig Joseph: Awesome! Thank you again.

