77th ASSH Annual Meeting - Back to Basics: Practic ...
IC11: Patient-Reported Data: More Than Just Research Outcomes (AM22)
Video Transcription
We'll get this thing started. So thanks to those who are here and those who are participating in this. Two of our speakers do have to leave after their talks, so for both Drs. Franco and Kamal, if you're thinking about their material and have questions, put them in the Q&A as much as you can. I'll try to catch them before they leave. We'll do a little interrupted session before we lose them if that works for the flow, which I think should be fine. The whole idea of this session is to talk about things related to patient-reported data that aren't the standard topics we're reading about in papers and publications, because I think going beyond what research has to offer is the next phase of all of this. The people on this panel are all doing really valuable work in the space of patient-reported data collection that isn't just showing up as research. This is a slide I actually stole from Chow because it's a really fun way to see the evolution of the concepts of patient-reported outcomes and patient-reported outcome measures over time, and how impactful they are not only in what we publish and in research, but now increasingly in almost daily use in a lot of places. Conceptually, paper forms are becoming outdated, right? PROMIS is a big initiative to technologify, simplify, and make it more efficient to collect patient-reported outcomes and patient-reported data. We've seen mixed results and mixed impact from that, especially in hand surgery. But from an initiative perspective, it is leading the way in making paper forms as obsolete as possible and making all of this as technology-driven as possible. Through PROMIS and other related technology initiatives, there have been improvements not only in the types of questionnaires, but also in how the questions are implemented across questionnaires.
So computerized adaptive testing and item response theory are the buzzword terms people are most familiar with. The concept is allowing each individual patient to hone in on their specific status within a series of questions, based essentially on how they answer. The axis here says difficulty because the GRE and some of those better-known tests are based on the same idea, but conceptually for the patient, it's the level of challenge or difficulty of the question for their daily use. Based on how they're answering, the test hones in to try to get close to their real score. Those ideas are implemented in a lot of the computerized adaptive testing versions. There are now a lot of proprietary companies and a lot of technology options, and a lot of the EMRs are moving to get rid of legacy questionnaires. This is from a proprietary company; it doesn't really matter which. The point is that this is becoming more user-friendly and more technology-forward. You can do it by text, you can do it via web platforms, and a lot of the EMRs are doing it now and driving it through the EMR itself. So this is all evolving in real time for us as providers. Moving beyond using it for just the outcomes piece, and instead starting to use it in care, in different varieties and options at the bedside, for quality, or for communicating with patients, is a lot of what we're going to talk about today in this panel.
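The honing-in behavior described above can be illustrated with a toy sketch. This is not the item response theory model PROMIS actually uses (PROMIS administers a calibrated item bank with maximum-information item selection); it is a deliberately simplified bisection analogue, purely to show why each answer lets the next question land closer to the patient's true level, so fewer questions are needed than on a fixed-length form.

```python
# Toy sketch of computerized adaptive testing (CAT).
# Assumption: a respondent's status is a single number on a 0-100 scale,
# and `answer(difficulty)` reports whether they can manage a task of that
# difficulty. Each item bisects the remaining interval, so n items pin
# the estimate down to a window of (100 / 2**n) points.

def adaptive_estimate(answer, low=0.0, high=100.0, items=5):
    """Estimate a latent score in [low, high] from `items` adaptive questions."""
    for _ in range(items):
        mid = (low + high) / 2.0   # ask the most informative item available
        if answer(mid):            # respondent handles it: level is at least mid
            low = mid
        else:                      # item too hard: level is below mid
            high = mid
    return (low + high) / 2.0

# Example: a respondent whose true level is 62 is located within ~3 points
# after only five questions.
estimate = adaptive_estimate(lambda difficulty: 62 >= difficulty)
```

A fixed questionnaire would need many more items to achieve the same precision across the whole 0 to 100 range, which is the efficiency argument behind CAT.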
The other evolution we're seeing is in institutional databases. Because a lot of places can now collect data more easily, freely, and rapidly, there's a shift from relying on payer databases to relying on your own institutional databases, where you can get information like transportation needs, household income, and education level. These are things you may not think of as a PRO, but they're definitely PRD, patient-reported data elements, and they massively impact utilization, cost, healthcare workflow, and outcomes. Understanding not only how to collect those data, but how to use them in the future of care, especially in a practice like hand surgery where we know how many of these things can impact outcomes, is, I think, the next phase of using these questionnaires and data collection instruments. A lot of publications are starting to make use of these approaches, using patient-reported data to understand outcomes. So depressive symptoms, psychological factors, mental health scores, pain, and self-reported pain catastrophizing all get discussed, but how they impact the outcome, or at least our reporting of outcome, is really important and still not understood as well as it needs to be. Then there's the concept of using pre-intervention data to either predict or anticipate post-intervention problems, which is a lot of the work we've been doing in our group, though certainly others are doing it as well. With the collection of these patient-reported data, I think the term outcomes probably needs to go away; it's really about patient-reported data, because preoperative data are so important, and then tracking them over time is so important. This is also a slide I stole from Chow, because she makes really good talks, but conceptually, I want to take this from being research to something more than research.
So instead of researching the outcomes piece, we're now using these data collection instruments for more active and engaging components of care, and that's, I think, what we're all going to hear about today. We have our four speakers. We'll start with Oren Franco talking about enhancing the patient experience, followed by Rob Kamal, Jen Walch, and then Chow Long. Thank you. I think the two are the same; I think I just uploaded twice. Cool, we'll do the lower one, because, you know, I don't have a good reason. Thank you. I love that last one: our research shows that we need to implement our research. Isn't that the truth? So I'm Oren Franco. I'm in the Bay Area in Northern California, and I was, I think, an early adopter of PROs. I felt there was a lot of opportunity, even through residency, to automate the process of querying how patients are doing postoperatively. I implemented a digital version of that in fellowship, which became a company called Surgisurvey, which doesn't do as much with PROs anymore, but still does a little bit. In the process of developing that, I got to see this data come through for about 25 surgeons and over 10,000 patients. As a result, I've become a little bit of a PRO pessimist, in the sense that when you're talking to surgeons every day about how to implement this, and you hear whether they're interested or not, you really identify the numerous barriers and problems to implementing these in practice. And that applies to big practices and small practices alike. I have a small practice, and I've been collecting my patient data with my system for six years now, actually seven if you count fellowship, and it's in about five other practices around the country as well. So I'm going to share what we can generate with a very light, nimble, inexpensive system, and how it benefits me and my patients. First, how it works: it's pretty simple.
It's an iPad, the $100 iPad anyone can get at any store. It links to a HIPAA-compliant Google Form; again, this is the Google you're familiar with. The form is shown here on the right side, with just very basic information. This does not integrate with the EMR, so I don't have data on patients' comorbidities, and I don't have their age, birthday, or sex unless we enter it. But we do get some general information about what type of surgery they're having, where they're having it, and some codes that go with it. And then this is the automation scheme. The enrollment, shown up here, is the active part that a human has to do; everything in green is fully automated, so you probably can't read it. Basically, once a patient gets enrolled during a pre-op visit, they get pre-op information emailed to them one day before their surgery. On the day of surgery, they get an email with information about how to manage their wound and pain control. Then they get emails asking them to complete the QuickDASH at 3 weeks, 6 weeks, 12 weeks, 24 weeks, and 52 weeks, with reminders if they haven't completed it. On the other end, for me, it generates case logs and data, and that's published online. The enrollment itself is done by my MA, not by me; this is essentially invisible to the surgeon, and we enroll all of my preoperative patients. It takes maybe 5 to 10 minutes. Jumping to the end first, the end result is this case log of all of my patients, and in this case also my partner's patients, that simply tells me who the patient is, where I did the surgery, what the surgery was, and their codes, everything I just showed you, and calculates the QuickDASH score.
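The automated timeline described above amounts to a fixed set of dated events keyed off one input, the surgery date. The sketch below is a hypothetical reconstruction of that schedule, not Surgisurvey's actual code; the function and field names are made up for illustration, and only the time points (day before, day of, and 3/6/12/24/52 weeks) come from the talk.

```python
# Hedged sketch of the follow-up schedule from the talk: after the MA
# enrolls a patient at the pre-op visit, everything below fires on its own.
from datetime import date, timedelta

FOLLOWUP_WEEKS = [3, 6, 12, 24, 52]  # QuickDASH requests after surgery

def build_schedule(surgery_date):
    """Return (send_date, message) pairs for one enrolled patient."""
    events = [
        (surgery_date - timedelta(days=1), "pre-op instructions"),
        (surgery_date, "wound care and pain control info"),
    ]
    for weeks in FOLLOWUP_WEEKS:
        events.append((surgery_date + timedelta(weeks=weeks),
                       f"QuickDASH request ({weeks} wk)"))
    return events

# Example: a surgery on 2022-09-01 yields 7 scheduled emails,
# from the day before surgery out to the 52-week follow-up.
schedule = build_schedule(date(2022, 9, 1))
```

In a real deployment each event would also carry up-to-three reminder sends, which is the part that drives the response rates reported later in the talk.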
As a side note, this was extraordinarily useful for me during my boards case collection, and I encourage all residents and fellows to use it, because it was great. As I said, patients get an email preoperatively, and we talk a lot about the pain management recommendations. We actually have a little video that they can watch from our website. I'm surprised at how many patients click that link. I feel like if I were a patient, I wouldn't, but they come in and say, oh, I saw that video, it really helped. Then there are links to handouts; a lot of these, by the way, are the templated handouts that the ASSH creates, about anesthesia, pain management, splinting, and cast care. And we say, do not respond to this email; email isn't really the best form of medical care, but call our office if you have questions. We have some info here showing that the open rate is extraordinarily high, and the click rate is pretty high, too: more than a quarter of patients are actually clicking on one of these links. The other nice thing, just for me as an aside, is that I get an email at the end of every surgery day with the patient, the surgery, and the phone number. So I call my patients on my way home from work, which is easy, and I'm sure a lot of people do that, but it saves me ten steps of having to go into the EMR and write it down myself, or even having my assistant do it. So it's nice: when this data exists, you can automate these simple tasks. In terms of the actual database, what kind of responses do we get? To put it in context again, this is only email, not text, and there are up to three reminders at every time point; once they respond, they no longer get a reminder. So you can see in this chart, and the data haven't changed much, about a 50% response rate at 3 weeks, about 43% at 6 weeks, and then 37%, 33%, and 34% at the later time points.
So we collect full one-year data on about a third of patients, which is pretty good considering this requires zero human effort. Obviously, we could boost those numbers by putting in more effort and more cost, but this is what we get out of it. We also ask patients if they've returned to work. This chart is for all diagnoses, but what's great is that you can break it down by diagnosis, age, gender, where they live, what kind of job they have; whatever data is in the system, you can always split the data that way. I found a lot of this eye-opening. When I said I'm a PRO pessimist, the truth is that you don't need every single hand surgeon to collect their data to see these results. I'm showing them right here, and I guarantee you these don't really change much around the country: a pretty reasonable percentage of patients don't go back to work after hand surgery. This is true for trigger finger, for carpal tunnel, and for de Quervain's, the procedures we just consider gimmes, right? It's like, oh, we'll just release your trigger finger, it's no problem, you'll be back to full function in three to six weeks. Whatever version of that you tell patients, it's simply not true, because there is a real percentage of patients who don't go back to work. Now, granted, maybe they have other things going on, and maybe they were going to retire anyway, but it's important to know that, and I didn't know it until we were collecting the data. Then we asked about patient satisfaction. There could be a whole ICL on what counts as patient satisfaction, but we use: would you repeat surgery? Would you do it again? And again, we were surprised at how many patients would not repeat surgery.
So this is a paper that has now been published, on dissatisfaction after highly successful hand surgeries: the trigger fingers and de Quervain's and ganglions and carpal tunnels that we do every single day. We tell patients they're 99% successful. They're not. They're about 56 to 94% successful, depending on the procedure. Cubital tunnel release: universally around 50%. Now, I'm sure everyone's thinking, no, no, no, trust me, more than 50% of my patients are happy after that surgery. You're wrong. 50% of patients would not repeat that surgery after one year. And that's 25 surgeons around the country, all doing it different ways, different styles, but with consistent results. We didn't know that because we didn't have the data, but now we do. One of the ways this really helps me and my patients in the office, other than just telling them what I know and what I discuss with them, is showing them curves. I create these recovery paths for patients because we all get the question, what's the recovery time, as if it's a number. You know, you'll recover on November 18th. Okay, great. We all know that's a lie. It's a curve. So I show them these curves. I've grouped them here by diagnosis, but I actually have them grouped as fractures, arthritis, or soft tissue procedures. And I show them: at three weeks, you're going to have about 50% of the pain from the surgery, and at six weeks, it's going to be about 30%. I point out that at six months you're going to be nearly recovered, but you'll still get a little better for six more months after that. That helps them understand. And then the green line, which is the LRTI, I make a special note to say this is the only procedure I routinely do where you will be worse three weeks after surgery than you were before surgery, and that's usually also true at six weeks after surgery. And then you will get better.
The other thing I point out is that the green line never touches that dashed purple line, and the dashed purple line is roughly the population-average normal for the QuickDASH. I say this is not a curative procedure; it's going to make you better, but it's not going to get you all the way there. That really helps set expectations. This is data we all know intuitively, but it's nice to see it in a chart: Work Comp patients don't do as well as commercial insurance patients. We know that. But now it's not opinion, it's not bias, it's not you just being a jerk, telling your Work Comp patient that they're going to have a longer recovery than your other patients. I just point to the chart and say, I can't explain this, because I can't. I don't know why this is true, but it's true that you're going to do worse than if this weren't under Work Comp, and I have the data to show it. The red bars are Work Comp, and the blue bars are the commercial patients. They follow the same trajectory, but they just do worse. All of this data is on my website. If you go to my website and click on information, you can see my data in real time: how many of these cases I'm doing, and the grouped results for return to work, satisfaction, pain, and all those things. So I find it very useful for me, for my patients, for preoperative counseling, and for setting appropriate expectations. One of the really cool things that's come out of this, which maybe we'll hear more about, is using the QuickDASH as a predictor of patients who are not going to do well. This is a chart of patient satisfaction, defined as: would you repeat surgery at one year? The color is the answer to that question; if your answer was yes, definitely, that's the dark green line. And each line shows those patients' QuickDASH scores throughout the course of that year.
The people who would repeat surgery got better and better and better. The people who would most likely repeat surgery also steadily got better. But what you can see is that the people who at one year would not repeat surgery look very different on that red line than the green lines. So you can tell at three weeks who is probably not going to be very happy at one year. This is a paper we just submitted as well, where we actually established those criteria. How great would it be to get a notification that so-and-so patient you operated on three weeks ago just completed their QuickDASH, their numbers aren't looking very good, and you might want to get them back in the office? What I can't tell you is what intervention you can do to make them better, or whether there is an intervention you can do. But this is step one. So the bottom line is that I think there's value in this for any practice. It can be low cost and completely automated, with minimal enrollment time, and the results auto-populate. Those charts make themselves in real time and update to my website in real time; I don't have to do anything. The value to me is, of course, setting pre-op expectations for recovery, and post-op reassurance for patients who are recovering slowly and say, Doc, it's been six weeks, why does my thumb still hurt? I pull up the chart and say, remember, I showed you this chart, I showed you it's still going to hurt, and they go, yeah, I remember now. And this applies to both patient and surgeon. I showed you the private versus Work Comp trajectories, the expected post-op pain control, the case log, which I find useful, and the patient's phone number delivered to me on the day of surgery. So what are the challenges? I mean, you can guess them, right?
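The notification idea above reduces to a simple rule: compare the 3-week QuickDASH to a cutoff and alert the surgeon. The sketch below is hypothetical; the talk says the actual criteria are in a submitted paper, so the cutoff value here is invented for illustration and only the direction of the scale (QuickDASH 0 is best, 100 is worst) is real.

```python
# Hedged sketch of the "flag at three weeks" idea. FLAG_CUTOFF is a
# made-up threshold, NOT the criterion from the submitted paper.
FLAG_CUTOFF = 60.0  # hypothetical 3-week QuickDASH threshold (0 best, 100 worst)

def needs_followup(quickdash_3wk, cutoff=FLAG_CUTOFF):
    """True if the 3-week score suggests the patient may be dissatisfied
    at one year and is worth bringing back to the office early."""
    return quickdash_3wk >= cutoff

# Example: a patient still scoring 75 at three weeks gets flagged;
# one scoring 20 follows the typical green-line trajectory and does not.
```

A production version would run automatically when each 3-week response arrives and email the surgeon, exactly the workflow the speaker wishes for; the open question, as he notes, is what intervention to offer once a patient is flagged.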
Working with the system, getting your institution to buy in and agree to it, getting your MAs to do it, and empowering your staff to participate. It's so easy to conveniently forget these things, or the Wi-Fi was down, but you just have to stay on it. And then choosing a platform that serves all specialties, because there are a lot of choices out there, and some are perfect for upper extremity but won't help your spine surgeons, and the joint guys want to use only one particular system, but it doesn't work well for the sports guys. So that's a big challenge. But I think that's all I have to say. Thank you. Because we're going to lose you, I'm going to ask one quick question I have. Do you preempt the conversation for all your patients? Is it part of your workflow? Or do you keep the charts and the plots for that conversation for when things are starting to move toward surgery, or early in the postoperative period, to map it out? How has it become part of your workflow in the way that works best? For me, it depends on the patient. There are so many patients, as we all know, where you start pulling out charts and they just glaze right over; they have no clue what's going on. So I don't pull it out for every patient. But when they start asking questions, how fast am I going to get back to work, what's the recovery time like, what's pain control going to be like, you know, that analytic, engineering kind of patient, that's when I pull it up, yeah. All right, great. Awesome, thank you. All right, so an excellent view of using it at the bedside. Now another version of that is the clinical decision support and shared decision-making components, which we're going to learn a whole lot about. Awesome. Well, thanks so much for having me. Thanks so much for putting this session on.
So excited to talk about how we use these at the point of care, following Oren's great talk. My first question, just to level set: who is actually collecting patient-reported outcomes or data in their clinic? Okay, so a couple. And then paper forms? Anybody using paper? Paper, yeah, I still use some paper. And who brings them up during their visit? Really, you do? Okay, cool. It'll be interesting to hear what you do. What I'm going to talk about is some of our work on trying to understand what to do with the data, how to use it, and how much patients do or don't understand it. And I look forward to hearing your thoughts. So, my disclosure. This is just background, as everybody knows, on the transition to quality measurement by payers, going from measuring easy stuff like structures and processes to more complex stuff like outcomes. You see all these cool dashboards, et cetera. And where we used to use a radiograph, we're now using PROM scores, like Oren showed, in terms of how somebody feels like they're doing with their improvement. This has informed a lot of how we treat patients. Here's a 70-year-old, three months out from cast treatment of a distal radius fracture. You could say her x-rays look terrible, and her QuickDASH is pretty good, right? This is paper, again. But you see her level of disability. Her motion's pretty good. This patient had bilateral wrist fractures. She had a really close family member to help take care of her. We put bilateral casts on her; it sounds terrible, but she wanted to do it, and she did great. So there's power in understanding, one, the literature on this fracture pattern, and two, just having conversations with patients about what to expect, and also in the social aspects of taking care of patients and how those can make a difference in how somebody does. So, collecting PROMs and efficiency.
Efficiency is the ratio of useful work performed to total energy expended, right? So how much bang for the buck am I getting? First, how am I going to use PROMs, and what best supports that use? If you already accept that this is data we should be collecting, then the next step is asking, well, what do I do with this information, and how do I use it to help take care of patients? That's an obvious next step. I think there are a lot of assumptions in what we do with it now, and what I will do is question some of those assumptions during my talk. Hopefully it will trigger some questions you'll ask, and maybe some things you'll try to figure out. So first, to collect PROMs efficiently, you want to use PROMs that are meaningful to your patient. That's an assumption. We use PROMs as communication aids; I think we all believe we are pretty good at that, and I'll show some data suggesting there's not really a consensus on which PROMs to collect and how you balance that reporting with patient care. First, making PROMs patient-centered. The assumption when you collect, let's say, a QuickDASH is that the questions mean something to a patient. Now, most instruments go through rigorous psychometric testing, oftentimes in English-speaking Americans, and you might find there are populations where the questions don't mean much to patients. So that's something you have to at least start asking. Can I collect and show a patient a graph? I think that's an assumption we make, and I'll show you some data that may make you question it. Improving on a PROM is reassuring to the patient: I don't really know if that's true, but we assume it. The PROM reflects the patient's goals and values: we assume that, again. When we have a QuickDASH or something, we assume it's testing something important to them.
And the patient and surgeon agree that the PROM is the correct measuring stick to use for their care. Some assumptions. Here's just one study, and I found it interesting because of the conclusion I underlined here: we should not infer that patient reported means that the information so obtained actually reflects patients' concerns. Right, and we see that when patients vocalize very specific activities they're trying to get back to. I need to get back to surfing; so it doesn't matter what the QuickDASH score says, if they're not surfing, they're not happy. They're creating their own measuring stick for you, even though we're applying these other instruments to measure that as we take care of them. Do PROMs reflect the patient's goals and values? This is a study we did where we just interviewed patients. We showed them the QuickDASH and the patient-specific functional scale, and we asked them, you know, do these questions make sense to you, et cetera. You can see some of the quotes here: maybe it's a little vague, maybe it doesn't apply to my situation, it doesn't really measure my progress. And the bottom one is from a pretty astute person who said, you know, one person's pain is not the same as somebody else's pain, so how do they compare? This, again, is a study where we compared against the patient-specific functional scale; that's where patients write down the functions they're trying to get back to, like I want to surf, and this is my ability to surf, and you can measure that longitudinally. The therapy literature uses this a lot. And you can see, again, that at least to some degree this resonated with patients, when they could just write out exactly what they wanted to follow.
We then started asking, well, how do we use PROMs as we're measuring quality of care, during what phase of care do they become important, and how do you figure some of that out? We asked patients. We said, here's this instrument, here are these questions; when should we be collecting these, and what role should they play when we take care of you? There were some disagreements on that, and there's plenty of disagreement on using standardized questionnaires versus specific ones, but generally, patients want to discuss their progress, and it doesn't take this study to tell you that. That's the reason they come to see you: to talk about how they're doing, and whether it's normal when they're hitting some barrier, or abnormal. Is something wrong, right? That's oftentimes why patients see you. This is a great study from the cancer literature that I love to cite because it's so well done. It questions not just patients' ability to read graphs, but our ability to read some of these outputs from the instruments we collect. Essentially, they used three different types of line graphs: best/worst scaling, more/less, and normed. PROMIS is a normed one, where 50 is the normal and you're above or below that. They showed those different formats to patients, clinicians, and researchers, and you can see the percentage of right answers that patients and clinicians got, based on the type of graph they were shown. It's written as better/worse, more/less, and normed, so the normed one is the third number. And you can see that clinicians don't get it right all the time when they look at these and try to get a sense of how somebody's doing, and certainly patients don't get it right.
So the idea that you can email this, or that a patient sees it at home and can figure it out, is something you should at least question. We did another study, just submitted, that was based on that cancer study. We asked, well, how should we visually display these things? We used some tables and some graphs, histograms and line graphs, and we asked, which ones do you prefer? Not a super rigorous study, but it starts the conversation, and patients preferred the bar graph and the table. We asked them very simple questions that required some interpretation, and we normalized for their numeracy level, how well they can work with numbers, and you can see about half the patients got the questions right, but the majority wanted something visual, and they wanted a discussion. So people are asking you to interpret this for them most of the time, and to tell them what they perhaps can't understand from these graphs when they see you. We just started an RCT taking a more rigorous approach, where we have three different groups and show them either a bar graph, a line graph, or a table, and ask them the same question. We're trying to get a sense of whether, studied more rigorously, one format ends up being better than the others for patient interpretation on their own, without you there to help interpret it for them. Another assumption is that PROMs can facilitate communication. I think they can, but I don't know that we've figured out exactly how to do that just yet. Here's another study, and here's the quote: the study identified goals most important to patients with low back pain. These were varied. Most did not correspond with current clinical measures. Again, highlighting that perhaps what we're measuring isn't always what the patient in front of you is measuring for themselves.
We then said, well, we need to start figuring out how we talk about PROMs and how we make them relevant to patients when we talk about them. So we did some interviews, and we came up with this framework. If you start from the bottom at theme one and work your way up, you can get a sense of what we concluded. First, there was definitely variation in how patients believed PROMs were used: what's the purpose of me filling out all these questions? One of the things to address that is just to tell patients exactly what it is they're doing and why they're doing it. That seems so straightforward, but we're all really busy and our workflows are packed with stuff, so getting somebody to use a script to do that takes a lot of work and effort. But take the time to review the responses with the patient. There's nothing that pisses a patient off more than 10 minutes of questionnaires and no discussion about what it was, which I'm guilty of a lot, because clinic gets busy, and this isn't something I've really worked into my workflow just yet. But when patients put the time into it, they want to know what it means and what the results are, and that way you engage them in it. The second theme is that PROMs have the ability to reflect a patient's sense of self and their health experiences. If the questions resonate with what they're experiencing, then the PROM makes a lot of sense to them; when the questions don't really align with what they're going through, it doesn't work for them that well. And then PROMs can be communicated and used to dynamically affect patient health. Patients liked it when you provided some benchmark. This is an objective way of answering when patients ask, is this normal? And you're like, yes, this is normal; 90% of my patients at this time point can't pinch with their thumb yet. And they're like, okay, I feel better.
So this is kind of what they're asking for in terms of give me some reference point and where am I on that reference point. So just some guidance in terms of how we might start structuring the conversation of PROMs with patients. The next is there's always this assumption perhaps that PROMs equal goals, right? So the goal of a PROM might be that you go down a score for dysfunction, et cetera. And can those goals just be used as a PROM themselves? This was an RCT we did where we just asked patients about their goals, and just the act of asking them about their goals increased their shared decision making. So we actually changed nothing about what we did in our clinic workflow, but gave them a little sheet of paper that made them write out what their goals are. And so based on this study, in our new patient form now, the third question is name two goals for your visit today. And even if you don't look at them and talk about them, the fact of them filling it out to some degree engages them in the visit and they feel a little bit more involved in what they're doing. Another thing I think that we all kind of know but haven't measured is how much the demand side of the equation affects PROM scores, right? So we know that people that are really functionally demanding will have greater disability with, let's say, a wrist fracture, right? That's why you can't non-operatively treat a really bad wrist fracture in a 20 year old, but you can in an 80 year old, because of the demand side of that equation. So we're trying to measure that. So we just started this study where we're using this activity level scale just to get a sense of like what's your level of activity and what's your disability based on some standard of some condition you have. So I think at some point we'll be better at talking to patients more objectively about their functional demands and we can use that as a way to inform the discussion of, well, you have a wrist fracture.
Generally I talk to people like, what do you do? Are you gardening and knitting? Are you surfing and playing tennis? And these are the things you might expect with each of the treatment options. We might get a little better and more structured at those conversations so that there might be less variation in how we decide to treat some of these things. We did ask the question on PROMs: is there a consensus on what to collect, like what instruments to collect? And as everybody in this room knows, there's no consensus on what to collect. We asked a question about is there a consensus on how to talk about these things? So I introduced a lot of questions that I've had, but we wanted to just see, well, can we get people to agree upon really important steps in talking about a PROM score in clinic, and we got no consensus in this Delphi that we ran. So I think there's still a lot of opportunity here to try to figure out what to do with these numbers and how to use them to help patients. This is a common quote here, and generally it's the person in front of you, and the better we get at using these types of instruments to reflect what that person in front of you is going through, I think the better we'll get at using that data and helping us take care of patients. So conclusions: use PROMs that are meaningful to your patients. I don't have an answer in terms of what that is, but certainly it's something we have to start figuring out. I think we should use PROMs as communication tools, but we have to figure out the best structure to do that, and then incorporate PROMs that are meaningful to both you and the patients for longitudinal tracking. So thanks so much. Rob's another person that we're losing, so if anyone has questions, there is that app function or you can just raise your hand, sort of free flow in here. Rob, that was a great talk. I think if I may ask you a question even though I know you got to run.
As you're running these, when you're running these RCTs, how hard is it to recruit patients for them? Where do you run into barriers on inclusion for some of these data points? Because that I always wonder about, especially because what we're already looking at is so hard. You're talking about in clinic? Like how it works? Just all of it. The workflow? So patients get recruited before they see me, not after they see me. Sure. Because otherwise they're out. Generally, it's a ton of just, like, sell. So it's all in how you bring it. So we talk a lot amongst our team about how we make this sort of engaging for patients. So we're trying to study ways to better collect data that helps us take care of you, and generally they'll do it, but oftentimes it's pre-visit. Yeah, I think what's so incredible is that, knowing the biases and challenges of patient recruitment in general, I would have thought that you'd have almost a harder time finding what you're finding, because those are the people who are agreeing to be part of studies. So you know that the ones who aren't agreeing have it way worse. And so I feel like everything I just saw should be cut in half for the real world as far as how good it is and how hard. So that's really interesting. That's great. Any other questions before we lose Rob and Oren? All right, cool. We can keep going. Jen's gonna be talking about indicators of quality, which is a topic that has been coming up for a very long time with regards to these data elements and is still a moving target. And so we're gonna learn as much as we can. Well, thank you. Thanks Dr. Hulity for the opportunity to be here. So I'm gonna extend this discussion. And this is really a moving target. And I will just, so I don't have any specific disclosures related to this except for research. And then probably my biggest disclosure is the fact that I practice in an academic institution in the Midwest.
So I'm not a policymaker. I am not an administrator. And so my lens on these quality metrics really reflects that. But I think we all come to this conversation from different perspectives, and so I'm excited to hear what other people think about how these measures should or could be integrated into what we think about quality. So there's no doubt there's a keen attention paid to optimizing healthcare quality that permeates all of our discussions. We want to deliver the best care for patients in the most efficient way with the least amount of cost. And really the goals are to be able to distinguish the high performers, acknowledge and reward them, but also to understand who the low performers are and where the opportunities are to make changes. And then there's been more legislation in the last couple of years to extend that conversation even further, to create more transparency in the system, to allow patients as consumers to have these metrics inform their decisions about where they're gonna get care and who's gonna be delivering that care to them. So I think this is a really fascinating question in our country here: who decides what is high quality? And so in the United States there's a whole bunch of organizations; this is not a comprehensive list. And in some of these domains they overlap and some of them they don't. But they also clearly translate into the information that our patients see every time they Google us. All of our patients Google us, and many of the measures that are created here by these organizations then are translated into a five-star rating that will come up quickly as your patients try to learn more about our practices. So I think this is incredibly important for us to pay attention to as providers and be advocates and voices in this space to really inform this conversation. And just to kind of set the stage on this, so this is what it looks like from a practical standpoint.
I practice at the University of Michigan, so this is very recent data that I pulled. We are 17th in the nation overall. Full disclosure again, I'm a plastic surgeon. They didn't have metrics on plastic surgery, so I picked orthopedic surgery because it was the closest to hand surgery; there we're 18th in the nation. And you can see here, we're ranked: we're not in the high performing category, nor are we ineligible for scoring, nor not scored and not ranked. But literally this is what comes up. This is on our website. This is what our patients see. So where do these scores come from? And I apologize for the small font here, but you can see about 50% of the scores are things that we would expect: clinical outcomes, 30 day survival, measures of utilization, how long people stay, whether or not they're discharged home, volume, which often comes up in our conversations around quality. But increasingly now we're seeing measures of patient experience, and I know we've talked about that a little bit in this session. And in some of my talk I will kind of intermingle patient reported outcomes and patient reported experience, recognizing these are totally different constructs and used differently now in quality metrics, which is another kind of interesting question. But arguably their representation is just as important, if not more so, than the type of expertise we have in a hospital system and the types of services that we deliver. So I think this is increasingly coming into the conversation and I think is here to stay. And a lot of these again kind of overlap. So if you look at the hospital compare, and now it's called the care compare system. This is again what it looks like for the University of Michigan. So this is driven by CMS. And the reason why this is important is because many private insurers look to CMS to set the standards for how they're gonna reimburse a hospital. And oftentimes that is driven by these metrics.
So if the quality score goes up a little bit or it goes down a little bit, it changes the hospital's ability to negotiate and leverage for their reimbursement. So that certainly has a big impact on their bottom line. May impact whether or not somebody's in network, which then might impact whether or not a patient is gonna come see us. And then ultimately I think from a public health standpoint it really matters. Are we directing patients to the places where they can best get care, and are we identifying areas where we can improve our care? So again, if measures of patient reported outcomes and experience are folded into this conversation, it's important to know where they are, what they're doing well, and where are the opportunities for improvement. So this is kind of the question I was charged with to answer today. So can PROs be used to measure quality? I think throughout the session we've unpacked that yes, they should probably be within the conversation. Can they? I think they're arguably the most relevant outcomes to patients. They're probably the most direct link to patient-centered care. And they do provide an important complement to clinical and other healthcare utilization outcomes. And for all of us in practice, in hand surgery, if you think about this paradigm, this was developed by Drs. Birkmeyer and Dimick, published in 2004, thinking about quality metrics either by the volume of care that you provide or the risk of care that you provide. We're up here in the patient-centered outcomes. We do a lot of hand surgery, but fortunately the morbidity and mortality rates are quite low. So we're really relying on patient-centered outcomes to inform the effectiveness of the care that we deliver and then ultimately the quality of care that we deliver. But some might push back on this. Some say, well, patients don't have formal medical training. Can they really assess quality? Are outcomes the same as experience? I think they're quite different.
Patients may be anecdotally happy despite adverse events. We'll talk about that in a couple of slides. Certainly there's effects from response rate and instrument choice. And there's a lot of opportunities to grow the evidence base, particularly around the reliability, validity and accuracy of these measures as they play out in real time, just like we do in research studies. I think there's a lot of room to understand where they can actually distinguish provider performance: how much of that is related to the care that I delivered versus the system that I delivered care in. And those are really important if we're thinking about tying them back to quality metrics and ultimately reimbursement. I think there's a lot of room to understand risk adjustment. We'll talk about that in a few slides. And then certainly the feasibility of trying to implement these in a clinic, as we've heard from our other two panelists today. So trying to address that first question: can patients discern hospital care? This was a study published in the New England Journal of Medicine a decade or so ago, which I think is really interesting: the HQA scores, which largely rely on clinical outcomes, align with the HCAHPS scores, again, a measure of patient reported experience, in distinguishing the lowest and the highest quartiles of care. So perhaps they do align with current quality measures, but maybe they're also different in some ways too. So this is a study that our group did looking at about 9,000 patients undergoing general surgery procedures in our state. The predicted probability that somebody would be satisfied, or report that they didn't have any regret, differed based on whether they had a complication and whether they reported that their pain was well controlled.
So I think there's unique dimensions about these outcomes that should be considered when we're thinking about them aligning with other measures of quality. This is another study by one of my colleagues, Dr. Kyle Sheets, looking at patient perspectives of care and surgical outcomes, again, in our state. And you can see here the HCAHPS scores didn't correlate with mortality and morbidity. So perhaps they really do represent this unique domain of quality. And this gets to the feasibility question. So this was published in Health Affairs in 2016. It's a really fascinating study, and I think it's important to reflect on; we all experience this, no matter where you practice: we spend a lot of effort. They quantified this as about 15 hours per physician per week and about $15 billion per year. That's an incredible amount of money spent largely on documenting quality measures. And it trickles down both for physicians and the entire team taking care of patients. So it's really important for us to know where do these provide value, where do they provide unique information, and where are they perhaps redundant and we should be using other things. So I think in the United States we're a bit nascent in this field. But if you look back at the NHS in the UK, they started collecting patient-reported outcomes for patients receiving care through this system for a variety of different procedures, at the pre-op time point and at six months post-op. And then they started looking at how might we link this to performance. And I think Dr. Long is going to get into this in her talk too, but just like you would imagine in a research environment, when you're thinking about them for quality, it matters. And it really matters then if you're thinking about linking it to payment. So patient-reported outcomes are more likely to not be collected from certain groups, oftentimes the groups that have the least access to care or are vulnerable socially or have other reasons to experience care differently. So this is important to think about.
And then this is a really interesting study that took that one step further. They're like, okay, well, if we have everybody who completed measures pre-op and everybody who completed measures post-op, we are sort of able to understand who the high and the low performers are, but it was a little less precise. Again, in a research setting, we'll oftentimes try to impute data. Did that work? Well, yes; it gave a different estimate, but a little bit more precise one. But importantly, on the third point there, it does change the number of providers that would theoretically be eligible for a bonus, so 22% to 26%. So perhaps a small jump, but if you're in that 4%, I think it would matter. And then thinking a little bit about how do we create benchmarks, what do these variables mean? So this is a study that we did looking at the proportion of variation across a hospital system. How much does it vary by patient-reported quality of life? And what other things explain what somebody might report with respect to their quality of life after a surgical procedure? And this example is bariatric surgery. So you can see here, you know, about 40% or so of the variation in quality of life after surgery remains unexplained, again, meaning that we need additional measures to complement patient-reported outcomes to really understand hospital performance along these dimensions. And these are things like socioeconomic status, the procedure they underwent, and comorbidity resolution. And this is taking this one step further, looking at, you know, yes, patient-reported outcomes vary across hospitals, so maybe they could indicate quality. But if you start risk-adjusting for the other comorbidities or risk factors that a patient might have, that variation shrinks. So understanding how we factor that into these scores I think is going to be really important, and we'll look at that in a couple of slides about how that might play out when we look at these measures in practice.
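To make that risk-adjustment point concrete, here is a toy numerical sketch with entirely synthetic numbers (not the study's data, and a deliberately crude adjustment method): when hospitals serve patients with different comorbidity burdens, raw PRO averages exaggerate between-hospital variation, and adjusting for case mix shrinks it.

```python
import random

random.seed(0)

# Entirely synthetic: three hospitals with similar true quality but very
# different comorbidity burdens in their patient populations.
hospitals = {"A": (71, 0.2), "B": (70, 0.5), "C": (69, 0.8)}  # (true quality, P(comorbid))
PENALTY = 10  # assumed true drop in PRO score attributable to comorbidity

records = []
for name, (quality, p_comorbid) in hospitals.items():
    for _ in range(500):
        comorbid = random.random() < p_comorbid
        score = quality - (PENALTY if comorbid else 0) + random.gauss(0, 3)
        records.append((name, comorbid, score))

def mean(xs):
    return sum(xs) / len(xs)

# Raw hospital means conflate quality with case mix.
raw = {h: mean([s for n, c, s in records if n == h]) for h in hospitals}

# Crude risk adjustment: estimate the comorbidity penalty from pooled data
# and add it back for comorbid patients before averaging by hospital.
est_penalty = (mean([s for _, c, s in records if not c])
               - mean([s for _, c, s in records if c]))
adjusted = {h: mean([s + (est_penalty if c else 0) for n, c, s in records if n == h])
            for h in hospitals}

def spread(d):
    return max(d.values()) - min(d.values())

print("raw spread:", round(spread(raw), 1))            # wide: case mix dominates
print("adjusted spread:", round(spread(adjusted), 1))  # much narrower
```

Real-world risk adjustment uses regression models with many covariates, but the direction of the effect is the same as in this sketch: apparent hospital variation shrinks once patient factors are accounted for.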
So this has come under even sharper focus with recent changes to payment models, specifically the MIPS program. And I think this was published in 2016 by Michael Porter. I think no discussion of this topic would be complete without talking about this work where, you know, prior to the MIPS program rolling out, there were all of these measures to try to understand quality, but only 139 of them were outcomes, and only 32 of them were patient-reported outcomes. So I think that's where we're seeing a huge growth in the expansion of the types of measures that we're capturing now for the purposes of quality assessment. So this is what the MIPS, or Merit-based Incentive Payment System, composite score looks like now. So again, linked to payments through CMS. Quality is a huge proportion of this; it goes down a little bit over time. And so I think we're going to see over the next few years what this looks like. So again, patient-reported measures of experience and outcomes are in the quality bucket. And this is an interesting study that just came out looking at the association between patient social risk and physician performance scores in the first year of the MIPS program. So again, this is looking at the composite score, but I do think it's important to consider that patient-reported measures of experience are in this. And so for hospital systems where they're caring for patients with more social risk, the physician performance scores in that first year were a bit lower. It was kind of across the board in all of those categories, but again, something to think about: we need a lot of contextualization of these scores with where and how we practice. And this just came out in CORR earlier this year. Again, similarly, this compared orthopedic surgeon performance in MIPS within the second year; scores were a little bit lower than for other surgeons, particularly for surgeons that practiced in smaller groups and took care of patients that were higher risk. So what are the gaps?
I think there's a lot to know about what measures we should be using, the reliability and validity of those measures, how accurate they are. Can we really use them to distinguish provider performance? And are we understanding what we think we're understanding from them? How do we integrate risk adjustment into this? And how feasible are they to complete on a population-based level? I would just say from some of the statewide work that we've done, very similar numbers to what Dr. Franco showed earlier: it's about 30%. So again, something to think about with respect to the non-responders. What does the future look like, or what do I hope that it looks like? I hope we get to evidence-based benchmarks that are nuanced to our practices. I think as Dr. Ghilardi was alluding to earlier, integrating data sources: patient-reported data comes from a number of different things, anything from wearables to the electronic health record to things that they can complete on a tablet even at home. So I think seeing those being integrated will be critically important in expanding the infrastructure around this. And then more broadly, again, we have a lot of groups that are engaged in quality work, and so I think identifying emerging leadership in that space will be really important for us to come to consensus around what we should be measuring and how we should measure it. And I'll stop there. Thanks. All right. I think we have time for the full thing. So think about your questions. We'll keep going with the last talk. This is Chao Long. Hi, everyone. My name is Chao Long. Thank you so much for having me here today. So our prior panelists discussed the wide spectrum of really powerful ways that patient-reported data can be used, but all of that is predicated on us being able to collect that data to begin with.
So I'm going to be talking about patient-level barriers to providing patient-reported data: what are they, and how do we address them? So we're going to start by talking about the patient-level barriers to collecting patient-reported data. And then we're going to take a deep dive on one of those specific barriers, low literacy. And we'll end by discussing one of the solutions that we've been developing, which is a multimedia patient-reported outcome measure to address the low literacy problem. So as we heard earlier, there's a large repository of instruments that have been developed over the past two decades or so for collecting patient-reported data. And in orthopedic surgery alone, it's reported that there are 121 orthopedic-specific PROMs. In that study, they actually ran the search in 2014, so that was from eight years ago. So today in 2022, I suspect that there are far more than just 121. But the question is, are they being used outside of research? And as we've been kind of alluding to, it's really challenging to use them in clinical practice. And the National Quality Forum also reports that they're not really being used all that much. They say here, there are two major challenges to using patient-reported outcome measures and instruments for purposes of accountability and performance improvement. And the number one challenge that they list is that they are not in widespread use in clinical practice. So why not? What are the barriers to collecting and utilizing patient-reported data? We know that it's a highly complex process. We know that there are numerous stakeholders that are involved in this. And there are many frameworks out there for conceptualizing this, which is beyond the scope of this talk. But simplistically, there are four types of barriers, which some of the prior panelists already discussed.
But there are clinic-level barriers, provider-level barriers, healthcare system barriers, and, what I really want to focus on in this talk today, the patient-level barriers. When we looked in the literature, we were surprised to find that there was actually very little that was written about it. So we set out to do this study to identify what the patient-level barriers even are to completing PROMs. We had three phases for methodology. We did direct observation, and we also interviewed patients, caregivers, and clinic staff. And what we found was that there are primarily nine patient-level barriers to PROM completion. The first, and these are in no particular order, is platform design. So we found that some patients had a lot of difficulty navigating between PROMs or submitting their responses, for example. Print literacy, so patients' ability to read and understand the questions. Something that's related but distinct is health literacy, so patients' ability to understand health-related information on the PROMs. Technology literacy, so ability to use the devices that the PROMs are being administered on. Language proficiency; of course, if patients weren't proficient in English, they had challenges completing the PROMs. Physical functioning; that was pretty unique, I think, maybe for our clinic of hand patients, because we would see, for example, patients who were wearing a splint and they couldn't complete the PROMs themselves, so they had to rely on their family and friends who accompanied them to the clinic to submit the PROMs. Or if they didn't have anybody with them, we saw them balancing the instrument, or the tablet, rather, on their laps, trying to use their non-injured hand to submit the PROMs. Or there were also instances in which staff had to complete the PROMs for the patients because they simply didn't have the physical ability to do so themselves.
Vision, so a lot of patients preferred desktop computers or large tablets because they had difficulty seeing when it was just on their smartphones. And next is cognitive challenges, so patients who had dementia, for example, or learning disabilities reported a lot of difficulty completing PROMs. And finally, time: there were some patients who, when completing them in the clinic context, reported that they didn't have enough time to really complete the PROM the way that they wanted to. These nine patient-level barriers and facilitators can coexist, they can work synergistically, they can interact in different ways. And additionally, even though for this work we specifically looked at PROMs, I think that a lot of these findings are likely generalizable to other sorts of patient-reported data. So why is this important and why do we care? I think that there are two primary reasons. The first, from a researcher's standpoint, is that when these factors aren't addressed, it can threaten the validity of patient-reported data. So missing data, as we discussed earlier for the non-responders: if they're not responding, or if the form is incomplete, so some of their questions are blank, all of those things can introduce biases into our analyses and also into the conclusions that we're drawing. And it can also limit the generalizability of what we're reporting. And additionally, even when response rates are high, data validity depends on patients' ability to select responses that accurately reflect their experiences. And secondly, there are different populations that experience these barriers at disproportionate rates. So the barriers are not experienced uniformly by all patients. And as a result, the patient populations who are experiencing these barriers at higher rates have lower PROM completion rates.
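To make the nonresponse-bias point concrete, here is a small synthetic sketch (assumed numbers and an assumed response model, not data from the studies cited): if the probability of completing a PROM rises with how well a patient is doing, then the responders-only average overstates the outcome of the whole clinic population.

```python
import random

random.seed(1)

# Synthetic PRO scores for a clinic population (0-100 scale, assumed values).
scores = [random.gauss(60, 15) for _ in range(10_000)]

def responds(score):
    # Assumed response model: healthier patients are more likely to answer.
    p = (score - 20) / 80
    return random.random() < min(max(p, 0.05), 0.95)

observed = [s for s in scores if responds(s)]

true_mean = sum(scores) / len(scores)
observed_mean = sum(observed) / len(observed)

print(f"response rate: {len(observed) / len(scores):.0%}")
print(f"true mean: {true_mean:.1f}, responders-only mean: {observed_mean:.1f}")
```

The gap between the two means is exactly the kind of bias that selective nonresponse introduces, and it disappears from view if only the responders' data are ever analyzed.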
So for example, we know that there are lower PROM completion rates in patients who are older than 75, who are Hispanic or Black, or who have Medicare or Medicaid. And so this kind of transitions us to a brief discussion of health disparities as well. We know that there are inequities that are widening the health disparities in the United States, and COVID has only really highlighted some of those things. And although it's great that there's an increasing emphasis on patient-centered care, reliance on patient-reported data to understand outcomes and care quality risks becoming a systemic way of perpetuating and exacerbating disparities unless we are collecting and implementing this data in an equitable and inclusive manner. So this is a cartoon that I love because I think it does such a good job of showing the difference between equality and equity. I'm sure many of you have already seen it. And I just wanted to extend the analogy to patient-reported outcomes. So if we say that, you know, the baseball game is high-quality patient-centered care, and the fence represents those barriers to PROM completion that we were just discussing, well, then we really need to be designing PROMs that can be completed by all patients, regardless of their height, if you will, or regardless of their level of literacy, if we want to minimize those disparities. So now I want to do a deep dive on print literacy. We chose print literacy simply because of the magnitude of the problem. So the United States Department of Education reports that one-fifth of all adults have low literacy. And we know that low literacy is related to challenges with healthcare delivery. So as a result, PROMs are recommended to be written at lower reading levels. The AMA and NIH recommend that they be written at the sixth or eighth grade reading level or below, respectively.
But the question is, to what extent do PROMs actually comply with these recommendations? So this is one study that looked at the readability of commonly used PROMs in orthopedic surgery. They looked at the readability level of 59 orthopedic PROMs, and as you move to the right, readability of the PROM decreases. And essentially, they found that only 12% of orthopedic PROMs are written at the sixth grade level or below, which corresponds to what can be understood by the average adult. And they found only one instrument that was written at the fourth grade level or below. I wanted to just say a quick word about PROM administration as well. A lot of people ask, well, why don't you just have it be interviewer administered? Isn't that a great way to get around the low literacy problem? While I think that interviewer administration certainly is a great alternative, it is labor intensive and it is expensive. So it can be impractical and it can be cost prohibitive. So if we want to collect patient-reported data in a routine fashion in the clinic setting, then we need to have instruments that can be self-administered by patients of all literacy levels. One potential solution that we wanted to look at was multimedia PROMs. And we defined multimedia PROMs, or MPROMs for short, as PROMs that have audiovisual components so that they no longer rely on text to convey the meaning. This had never been done before. So we set out, we needed to explore whether this was even a good idea. And this was a roadmap for our exploration. The first step of that exploration is to develop a protocol for creating these multimedia PROMs because, again, to the best of our knowledge, these had never been created before. So we created the multimedia adaptation protocol. And this is a protocol that adapts validated text-based PROMs to their multimedia versions so that they can be self-administered by patients of any literacy level.
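As an aside on how grade-level numbers like "sixth grade" are typically produced: readability formulas such as the Flesch-Kincaid grade level score a text from its average sentence length and syllables per word. Below is a self-contained sketch using a crude syllable heuristic and two hypothetical example items (not actual PROM text, and the cited study may have used different readability formulas).

```python
import re

def count_syllables(word):
    # Crude heuristic: count groups of vowels; drop a silent trailing 'e'.
    word = word.lower()
    groups = re.findall(r"[aeiouy]+", word)
    n = len(groups)
    if word.endswith("e") and n > 1:
        n -= 1
    return max(n, 1)

def fk_grade(text):
    # Flesch-Kincaid grade = 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59
    sentences = max(len(re.findall(r"[.!?]+", text)), 1)
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (len(words) / sentences) + 11.8 * (syllables / len(words)) - 15.59

# Hypothetical items, written for contrast (not from any validated instrument):
simple = "Can you lift a heavy box? Can you open a jar?"
complex_item = ("Indicate the degree of difficulty you experience when performing "
                "recreational activities requiring considerable upper-extremity strength.")

print(round(fk_grade(simple), 1), round(fk_grade(complex_item), 1))
```

Short sentences of short words land well below the sixth-grade threshold, while a single dense clinical sentence scores far above it, which is the pattern the readability study found across orthopedic PROMs.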
This is a schematic of the protocol that we developed. You can see that there are four stages to it: forward adaptation, back adaptation, qualitative evaluation, and validation. I won't get into the specifics of this, but it is published in PLOS ONE if you're interested in reading more about this. After we had the protocol, the next step was to execute that protocol. In order to execute the protocol, we needed to pick one population. And for us, it was our hand clinic population at the Curtis National Hand Center. And we also needed to pick one instrument, one PROM to start with. So we picked the PROMIS upper extremity as our instrument. So after completing the forward and backward adaptation phases, this is the instrument that we ended up with. So I'm just going to play a demo video here. Hopefully it works so that you can see the features. Please respond to each question by selecting the best answer. If needed, click the speaker icon to hear audio or the play icon to see an image for that question. Are you able to carry a heavy object over 10 pounds or 5 kilograms? Without any difficulty. Are you able to wash your back? With much difficulty. Are you able to put on and take off a coat or jacket? Unable to do. In the interest of time, I will pause that there and then move forward. So the most direct and tangible implication of our work is that a multimedia PROMIS upper extremity expands our ability as hand surgeons and researchers to capture patient-reported outcomes in mixed literacy hand and upper extremity patient populations. So I wanted to close my talk just by returning to this cartoon again. So if this is what equality in patient-reported outcomes looks like, I posit that multimedia instruments might be an opportunity for us to shift to a more inclusive and equitable paradigm whereby all patients, regardless of their literacy level, are able to provide their data and tell us how they're doing.
And the goal of this, of course, is so that we can collect patient-reported data from low literacy patients and other underserved populations, so that all patients are able to access the elusive, high-quality, patient-centered care that we're all hoping to get to one day. So in conclusion, there are patient, clinic, provider, and healthcare system-level challenges to collecting and utilizing patient-reported data. We found nine specific patient-level barriers that need to be addressed. The multimedia format is just one solution that we're developing to address the low literacy barrier, and addressing these barriers is key to achieving equitable patient-centered care. Thank you to the ASSH for funding this work, thank you to our big multidisciplinary team, without which none of this would have been possible, and thank you for your attention. I have seen no questions in the app, but that may also be a me problem. Are there any questions from those who have been able to hang with us for the full duration? I know it's getting late. I'm going to ask Jen a question, Dr. Walji. I always get hung up on this line of publications showing us how depression, other mental health factors, or SES-related patient factors impact the way people respond to questionnaires. I don't really see that extending into the discussion around quality in a way that is clear. How do we almost risk-adjust the questionnaire score if we're going to use the questionnaire? Maybe I just haven't learned about it yet, but I wonder if you can share either your thoughts or what you know about that component of it, or whether there is even a discussion on how to do that better, because it always worries me, especially in hand surgery, with how susceptible our patient responses seem to be to some of those factors. I think it's a really interesting question, because if we design a study and submit a manuscript, we'll get 50 questions about that. 
How did you risk-adjust, and what about all these baseline things? What are the confounders you missed? What are the mediators and moderators? At least in my understanding, the assessments of these in the quality world are much more of a descriptive metric. It seems to me like in the early efforts now, where we're integrating patient-reported outcomes, it's more a measure of how many people you got to complete the instrument, which also is a little bit flawed, right? Because then we're giving more resources to the systems where more people completed the instrument rather than directing resources to the systems that are having more trouble getting people to complete the instrument. So it's not about the scores per se. If it's adherence to the measure, I think maybe we're there, but I completely agree with you that for the actual score and setting benchmarks, we need to be just as rigorous as we are when we're designing prospective research studies. And that, I think, is an evidence gap that hopefully we'll fill. Yeah. Great. It's a little unfair for me to ask Dr. Long questions because, as you saw, my name was attached to all the things that she was showing, which is a little bit of inside baseball. But if anyone has questions or wants to know more about what we're doing, please ask, grab us anywhere, send emails. It's all, I think, very cool stuff, and I hope you all found it really cool. Thanks, everybody, for your attention and for being here, and I hope you learned some stuff. Thanks.
Video Summary
Drs. Oren Franco and Rob Kamal discuss the use of patient-reported outcomes (PROs) in healthcare. They emphasize the importance of collecting meaningful PROMs that reflect patients' goals and values, and the need for clear communication with patients about the purpose of collecting PROMs and the interpretation of the data. Dr. Franco shares his experience using a digital system to collect PROMs in his practice, noting the benefits of automation and real-time data analysis and the valuable insights it has provided. Dr. Kamal focuses on the barriers to collecting patient-reported data, particularly patient-level barriers such as low literacy. He introduces multimedia PROMs as a potential solution to the low literacy barrier and highlights the importance of addressing patient-level barriers to achieve equitable patient-centered care.
Meta Tag
Session Tracks
Practice Management
Keywords
patient-reported data
PRO
healthcare
meaningful PROMs
clear communication
collecting PROMs
digital system
automation
barriers to collecting patient-reported data
equitable patient-centered care