77th ASSH Annual Meeting - Back to Basics: Practic ...
SYM07: Back to Basics: Scapholunate and Scaphoid Nonunion Advanced Collapse (AM22)
Video Transcription
Okay, we're going to get started. We actually have a few more people than I thought were going to come, so maybe more are trickling in. I also realized one way I probably should have titled this ICL was how to do research when you have no patients, or, as Chris says, other people's patients. Maybe next year, if they continue us, that's what we'll do. But the idea was that I wanted to think about doing research a little bit differently than how most people have classically done it. So this is alternative research methodologies to address questions in hand surgery. Classically, in how I grew up within medicine and hand surgery and developed as an academic surgeon, I always thought this was the only way I could do research. So I looked at it: okay, studies, especially in pediatric hand, this is how we have always done it. First case studies, then retrospective case series, and then, if you are really good, case-matched control studies. And then prospective studies, which are almost impossible in pediatric hand, though some people can do them in hand surgery. That was the only way I ever thought about it. And then I realized that's not the only way to think about it. We needed to step back, or I needed to step back, and look at this differently, from a wider view. So this is the research onion. If you look at sociology research and other ways to think about it, and going through different training programs, the more I read and the more research I looked at, even in different fields, there are a million different ways to look at research and ways to do it.
And so that's what I wanted to bring to people, so that maybe they could skip the nine years of research that I did and jump to this earlier. So our objectives are to understand the range of these alternative methodologies, to understand both the strengths and weaknesses of these differing approaches, to be able to choose the optimal research approach to address your specific questions, and to become familiar with some of the literature. Those are the charges to my speakers. So I picked people. I thought of this great idea, and then I said, oh, wait, I have really good people who can answer these questions for me. So this is our esteemed panel. I asked them to address the question of how carpal tunnel surgery benefits patients, each using their specific approach. We will hear from Dr. Dy, who will talk about a qualitative research approach to this question. Dr. Calfee will talk about patient-reported outcomes, and then Dr. Kazmers will look at it through a cost analysis. And then Dr. Jennifer Lane will look at it through an administrative database. She is our British invitee, and she unfortunately couldn't come. She's very upset about that, but she did a great pre-recorded talk, and we'll watch that. And then at the end, if we have time, we'll do some questions. Thank you very much. Okay, thank you, Dr. Wall. So Lindley asked me to talk about qualitative research. I know it's a concept that is growing in familiarity among hand surgeons, but I still think we need a little bit of a discussion about what it is, how to do it, and how to interpret it. It's certainly becoming much more common in our journals, and even a very rudimentary search on PubMed shows a bump in articles over the last five years in the use of qualitative methods in our journals.
So qualitative research is different from the standard quantitative research that Dr. Wall showed. You saw that research triangle, the classic paradigm in which we're used to thinking about our research. Qualitative research is really aimed at understanding context and process. In a lot of ways, it's a little more like taking care of patients on a day-to-day basis and doing, quote, personalized medicine: talking to people, trying to understand their particular circumstance, for example, what kind of role carpal tunnel syndrome plays in their life and how it is potentially disturbing their life. It's a more humanistic approach compared to our standard quantitative approach. So this article from JBJS is one that I cite often, mainly because it's in JBJS and it discusses the merits of qualitative research. It's a very good primer if you are looking for something to read. On the left, you can see that quantitative research is really asking how much, whereas qualitative research focuses on the what, the how, and the why. Now, we're very familiar with classic study designs and how we try to isolate and define variables, and it's super important in standard quantitative research to define your hypothesis a priori and to not change things as your study evolves, even if you're doing some interim analysis. Whereas in qualitative research, you are given the leeway to look at very general concepts, search for patterns, and then call an audible and say, okay, I want to change my interview guide because I don't think we're getting the data we were expecting, or, this is really interesting, let's dig deeper. It's almost like investigative journalism. The toolbox for quantitative methods includes surveys, questionnaires, and randomized trials.
In qualitative research, by contrast, you can observe the participant through video, you can take field notes, and you can do these really in-depth interviews, which are a little challenging as a surgeon and clinician because you're used to a very specific line of questioning to get you from point A to point B: meet the person, get a diagnosis, figure out a treatment plan. These qualitative interviews, where you're really trying to understand, are very open-ended, and that's kind of tough as a surgeon. For quantitative research, the focus is more on prediction, outcomes, and generalizability, things that have incredible value in medicine, but you miss a lot of the process and the context: how people get to a decision, how they experience a disease, how they choose a treatment. So Dr. Lane is going to talk about administrative data research. I've done a fair bit of it, as have many of you in this room, and you read these, quote, big database studies, and they have a purpose. They're not going to give you an absolute answer to, is treatment A better than treatment B? But they will tell you what's going on: how frequently carpal tunnel is affecting patients in terms of how it's coded, how much surgery is being done for carpal tunnel, what the risk factors are for having a carpal tunnel release. It's a stadium-level view. Our standard quantitative clinical research paradigm is more like looking at one section of Busch Stadium, say, quote, Big Mac Land, and trying to understand what happens in that group. And then we are banking on the fact that this group in Big Mac Land is generalizable to the rest of Busch Stadium, so whatever's happening here is probably also happening in a section directly across, behind the first-base line. Qualitative research is a little different.
What you're really trying to do here is dig in on how those people in the front row of Big Mac Land are feeling. Now, clearly their experience in the front row of Big Mac Land is going to be different from that of the people in the back row of Big Mac Land. But you don't care so much about that. You really want to understand what's going on there and see if there's something you can build upon. I think one of the biggest things about qualitative research is understanding that it is a tool. It is not the end-all, be-all. It should be part of your research toolbox when you want to answer a question. Now, clearly a good thing about qualitative research is that you don't need many patients. You do need some patients; for example, this was a good strategy for us when we were building our brachial plexus program and our research, because I didn't have five-year follow-up on my plexus patients, since I had just started in practice. But I could interview them, ask them about their experiences, and make my contributions in that manner. So, the example that Dr. Wall gave us: how does carpal tunnel surgery benefit patients? The traditional papers you've seen in our literature are retrospective case series. Eventually you work your way up to prospective cohort studies. There are some randomized controlled trials, for example splinting or bracing versus surgery versus injection for carpal tunnel. And then there are administrative data studies; there are many in our literature looking at risk factors for complications after carpal tunnel release, or who is going to go on to surgery. But a qualitative approach is really about understanding: how does carpal tunnel syndrome affect your life? There are ways you could do, for example, surveys on sleep. But instead of just a survey, you could ask the patient: how is your quality of sleep affected? Tell me more about what it feels like to be woken up by carpal tunnel syndrome.
So again, this is an example that, for many of us, yes, we truly understand. But if you have a less common condition or a less common surgery and you're trying to understand the patient experience, this is incredibly useful. And I will say that doing these interviews with my own brachial plexus patients, and reading them, has been incredibly informative for how I counsel patients. So for qualitative research, the key steps in study design start with developing an interview guide: what are you trying to understand? You have to think a bit about what you expect to find and what you hope to do at the end of it, because that will help you design your interview guide. And like I said earlier, you can make some changes after a few interviews, and that's totally acceptable in this field. The sampling approach is super important; we'll talk a little more about that. Then there is conducting your actual interviews. Again, because we are so trained as clinicians and surgeons to be deductive, to get from meeting the patient to diagnosis to treatment, it's actually a little easier to have somebody else do these interviews, but you can clearly do them yourself if you're in the right state of mind. And then there's the coding and analysis, and we'll show a little more about that. These are interview questions from the brachial plexus project that we did, trying to understand patient experiences, potential delays in treatment, the potential effect of things like their social circle on their outcome, and what they do with their time, because clearly in a plexus injury you have a lot of time between injury, surgery, and eventual recovery. And the nice part is that you can continually revise this based on your pilot interviews. So sampling is important. I think this is something that often gets glossed over. There are a lot of ways to sample in qualitative research.
For us, for the plexus work, the sampling cells were the age of the patient and whether it was a complete or partial plexus injury. You could do this for carpal tunnel syndrome: you could sample the electrodiagnostically negative, you could use CTS-6 scores, and you could obviously use age or comorbidities. In general, you want 10 to 30 participants per cell. That is a lot of patients to interview. You can probably get away with a little less, but as qualitative research gets into more reviewers' hands, they're going to look for general guidelines, so you do want to stick with what's generally out there. The ultimate rule is saturation, meaning that if you do another interview, no new themes are going to come up, and you have to be fairly certain of that; there is a lot of interim analysis as you go along. There are a lot of ways to sample. We're more used to random sampling in our quantitative research, but realistically, you can do convenience sampling, meaning whoever comes in the door with carpal tunnel syndrome is going to be in the study. Purposive sampling means, okay, I think EMG severity is important, so I'm going to purposefully sample EMG-severe cases and EMG-negative cases, or you could look at things like thenar atrophy. An extreme case is where you have substantial thenar atrophy and you're asking, what goes on with these patients? We saw some papers on that from a quantitative perspective at the meeting. Then there is the typical case: your standard patient who comes in having tried, say, six weeks of, quote, conservative treatment and failed. The strategy is really going to depend on what you're trying to answer. I'll move through these slides quickly, but you really have to set things up for conducting your interviews. You have audio recorded, and if you're looking at, for example, how somebody carries themselves after a more devastating injury, you can videotape them. You can see that in the back corner.
You have a bunch of transcripts, which are incredibly difficult to go through. It's a lot of work; I wasn't doing the interviews, but I was looking through all these transcripts. You're taking the data the interviewee has given and trying to find a way to codify it in a systematic and rigorous manner. It's not just, how do I feel about it? It's, how can I put some rigor behind this and interpret what's going on? So you take all these interview transcripts and look through them, and you try to find specific codes, words or phrases that really capture the essence of some kind of feeling. Then you put the codes together and construct a pattern, and you group those into categories and themes. You typically do this in a group process. This was obviously prior to the pandemic; we were meeting, we would discuss these larger themes and get the big themes on the board, and you can see the different categories, for example, from this plexus project. This is what the board looks like when you're done. You put up categories, and eventually you put the codes in, and as you put the codes in, it all starts to come together. It's a really big conceptual exercise, and it is really helpful for me to see it on a whiteboard. There are a lot of other platforms you can use, and you can see examples here about appearance after a plexus injury, emotion and affect, and healthcare experiences. These are all ways to group all of the data so that you have a way to analyze it. Software can be very useful and helpful: you can identify codes and then quickly search for them, and you can see the different categories those codes fall under. Qualitative work is a difficult thing to get into our standard literature.
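The codes-to-categories-to-themes step just described can be sketched in a few lines of Python. This is a toy illustration only: the excerpts, codes, and category names below are invented, not data from the plexus project.

```python
# Toy sketch of the coding step described above: short coded excerpts are
# grouped under categories, mirroring the whiteboard exercise. All codes,
# categories, and excerpts here are invented for illustration.

from collections import defaultdict

# Each transcript excerpt has been tagged with a code by the research team.
coded_excerpts = [
    ("I hide my arm in photos", "self-consciousness"),
    ("I stopped going to the gym", "activity withdrawal"),
    ("The ER never mentioned nerve surgery", "delayed referral"),
    ("Nobody told me recovery takes years", "unmet expectations"),
]

# Codebook: which category each code belongs to (built by group consensus).
code_to_category = {
    "self-consciousness": "appearance and emotion",
    "activity withdrawal": "appearance and emotion",
    "delayed referral": "healthcare experiences",
    "unmet expectations": "healthcare experiences",
}

def group_by_category(excerpts, codebook):
    """Collect excerpts under their category, like grouping on the board."""
    grouped = defaultdict(list)
    for text, code in excerpts:
        grouped[codebook[code]].append(text)
    return dict(grouped)

themes = group_by_category(coded_excerpts, code_to_category)
```

In practice this grouping is done by team consensus or in qualitative analysis software rather than a fixed lookup table; the point is only that codes are explicit, searchable labels that roll up into categories and themes.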
It's getting better, but it's still relatively uncommon in at least the orthopedic and plastic surgery journals, though it is becoming more and more accepted. We're a little behind fields like general surgery and the other health services research fields. Generalizability is not typically the aim of qualitative research, but you do have to be smart about it and understand that your reviewers are probably going to ask how this is generalizable. You can't be a qualitative purist and say, well, we're not going to be generalizable. You have to figure out a way to frame your results in a manner that you can build upon. I view qualitative research as foundational: it tells me what research questions I need to ask next, and then you combine it with other research methods in order to truly answer a question from multiple angles. So thank you for your attention this morning, and I'll take questions at the end, I think, after the entire panel is done. So thank you, Dr. Wall. Thank you, Dr. Dy, that was great. I will tell you that it has been fantastic to watch Chris's research build, and I've started using some of it with our congenital research, and hopefully that will come out soon. We'll see. Yeah, it'll come out soon. All right, there we go. All right, good morning. So I had the easier task here, talking about patient-reported outcomes, which is something that all the faculty here, and probably most of the audience, have used and are familiar with. This may not be an entirely new research approach so much as something to use in your research as a way to answer questions. So what do patient-reported outcomes do? They are really important because they bring the patient perspective into the equation, a little different from my just saying, hey, they can move their hand fine, they're good. What do they think? Do they think they're better? Do they tell us they're better?
So it's an attempt to measure what matters a bit more, but I'll tell you that none of them are perfect, and we'll go through a few of them this morning. There are ceiling and floor effects, where these standard surveys we use for patient-reported outcomes may not capture improvement, or may not capture someone's function, when they're very high-functioning or very impaired. And they require patient input, which is a whole other problem. I guess it's the same with qualitative interviews: when you have to ask patients things, people are different, right? Here's my one patient after his thumb CMC surgery, where he's not supposed to be using the thumb, but this brace has lawnmower cuts in it and such, and I'm like, it looks like you've been using the thumb a lot. And then you get your other patient who brings their therapy bunny to the baseball game, you know? These people are very different, and you're going to have to consider that. So when you're picking patient-reported outcomes, I would suggest that you be a bit picky. There are a lot out there. There are a lot that would work for any condition, like carpal tunnel; you could put a host of them into your paper, and they'd all pass the test of, hey, have they been used before? But really pick one as your primary measure at the beginning of your study and say, that's going to be our primary outcome, because you might get slightly discrepant results, and you don't want to be at the end saying, we don't know what to say about this. So whether it's the DASH, the PRWE, the Boston Carpal Tunnel Questionnaire, or PROMIS, they are all basically like this: they ask simple questions, you know, can you open a jar? Can you write? Can you turn a key? But the details do matter, and there are some very subtle differences that really do make you want to pick one or the other. And here's just a quick example.
So I like distal radius fractures. They've been written about a lot, and people have used every one of these patient-reported outcomes when talking about how their patients did. Here's the thing: if you look at the scoring, or you just look it up in a paper, they're all about zero to 100. They all seem pretty comparable. However, the Michigan Hand Questionnaire, and I was asking this to a crowd yesterday, has anybody used it? No? So here's the good thing: the Michigan Hand Questionnaire is actually the only one of these surveys that gives you scoring for your right hand and your left hand. So if you want to compare injured versus uninjured, this is the only one that does it. It also gives you a bunch of subscales, like aesthetics, function at work, and function at play. The problem is, it's 66 questions. That's a booklet. People get tired of filling out booklets, and then you get tired transcribing them. The DASH, or now the QuickDASH, which has taken a 30-question survey down to 11, the challenge there is that it measures bilateral upper extremity function. It asks, can you reach for this thing? It doesn't matter if your right hand is hurt or your left hand is hurt; can you reach for it? So it's maybe a little less specific, and you're not going to be able to tease out some subtle differences. The Patient-Rated Wrist Evaluation: I'll tell you, the questions on the PRWE look about identical to the DASH. They're really, really close. However, for this one, function is half the score and pain is half the score, and they get added to a total of zero to 100. If you don't want pain to overwhelm what you're finding, this isn't the one you want to use. If pain is what you want to find out about, then hey, use this one, because it's going to get at it a lot better than the QuickDASH.
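As a rough sketch of the half-pain, half-function weighting just described: the published PRWE has 5 pain items and 10 function items, each rated 0 to 10, and the function sum is halved so that each subscale contributes 0 to 50. The patient responses below are hypothetical.

```python
# Sketch of the PRWE scoring logic described above: pain is half the total
# and function is the other half, summing to 0-100. Item counts follow the
# published instrument; the responses below are hypothetical.

def prwe_score(pain_items, function_items):
    """Pain subscale: 5 items x 0-10 = 0-50.
    Function subscale: 10 items x 0-10, divided by 2 = 0-50.
    Total: 0-100, so pain is weighted equally with function."""
    assert len(pain_items) == 5 and len(function_items) == 10
    pain = sum(pain_items)              # 0-50
    function = sum(function_items) / 2  # 0-50
    return pain + function

# A hypothetical patient with moderate pain and mild functional limits:
score = prwe_score([6, 5, 7, 4, 6], [2, 3, 1, 2, 2, 3, 1, 2, 2, 2])
# pain = 28, function = 20 / 2 = 10, total = 38
```

The equal weighting is exactly why, as the talk notes, pain can drive half of a PRWE change even when function barely moves.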
And then you've got PROMIS, which we'll talk a bit more about, but it's more of a general measure, either overall function or upper extremity function, definitely not specific to whether you broke your wrist or have carpal tunnel syndrome. Okay, so in the end, although in research we always say, collect more data, collect more data, you've got to find a compromise, right? More burden on the patient, and on you, ends up producing less useful data. You also have to collect enough that you don't get to the end and go, oh, we really forgot to ask, you know, was this a revision carpal tunnel release or a primary? You can't forget the key things. So you want to find your way to the middle here, and just realize that when you choose your patient-reported outcome, a measure that's more general and not disease-specific lets you make comparisons to how other groups or other disease processes do when you treat them, versus something that's more disease-specific. For carpal tunnel, if you use the Boston Carpal Tunnel Questionnaire, you're probably going to have a really nicely responsive survey, but you may not be able to comment at all on how your treatment value for carpal tunnel compares to the treatment value for, say, shoulder arthritis. So we use PROMIS at WashU a lot. That's become our go-to patient-reported outcome, again, not because it's perfect, but because it's easy, it's readily administered, and it goes right into the electronic record. So here's my little example. The thing about PROMIS is not that the questions are better, but that being computer adaptive really allows you to shorten the survey.
So if you have a standard survey with 10 questions you want to give somebody, let's say about hand function, you can either spread your questions across the whole range, where one question is, hey, I can move my hand, and the final question is, I could paint a masterpiece, or you can choose all your questions in a narrow range of function, like I can paint for an hour, I can paint for two hours, I can paint for three hours. Either way, you've got limitations in your precision and in the range of what you can detect. Versus anything that's computer adaptive, like PROMIS: in physical function, for instance, there are 126 questions in the item bank. The computer picks question one, who knows what it is, but when you give your answer, if the computer thinks you're at about a certain level of function, the next question it picks is from the questions in a closer range, and the next one even closer. So in a few questions, it can get you a very precise answer. The scoring, like the others, is pretty standard, and 50 is the middle. Let's see, now, this is the thing I learned yesterday, how to get this to play, so we're going to find out. Let me see if we can get this to work, if the internet works this morning; we hear there are problems. This is just a quick one-minute example of how we deliver this patient-reported outcome at our center, and if there's no sound, I'll narrate it. Hi, I'm Ryan Calfee, we're here to see Dr. Lindley Wall today. Okay, let me get you checked in. That's key. All right, it looks like they're going to want you to take this assessment. There are about 20 questions in all; they just want to know how you've been feeling for the past seven days. All right, so what you're going to do is just lightly tap, and then it's going to go to the next question, and it'll say thank you when you're done, okay? Okay. Thank you so much. Thank you.
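The adaptive item-selection idea described a moment ago can be caricatured in a few lines. Real PROMIS CATs use item response theory to pick items and estimate T-scores; the item bank, difficulty values, and update rule below are invented purely to show the mechanism of homing in on a respondent's level.

```python
# Deliberately simplified sketch of a computer-adaptive test: after each
# answer, pick the unasked item whose difficulty is closest to the current
# ability estimate. This is NOT the PROMIS algorithm, just the intuition.

def run_cat(item_difficulties, answer_fn, n_questions=4, ability=0.0):
    """Administer n_questions adaptively and return the ability estimate."""
    remaining = dict(item_difficulties)
    for step in range(n_questions):
        # Select the unasked item closest to the current estimate.
        item = min(remaining, key=lambda k: abs(remaining[k] - ability))
        difficulty = remaining.pop(item)
        endorsed = answer_fn(item, difficulty)
        # Crude update: move toward harder items if endorsed, easier if not,
        # with smaller steps as more information accumulates.
        step_size = 1.0 / (step + 1)
        ability += step_size if endorsed else -step_size
    return ability

# Hypothetical 6-item bank (difficulty on an arbitrary scale) and a patient
# who can do everything up to difficulty 1.
bank = {"move hand": -2, "turn key": -1, "open jar": 0,
        "paint 1 hr": 1, "paint 2 hr": 2, "paint masterpiece": 3}
estimate = run_cat(bank, lambda item, d: d <= 1)
```

After just four questions, the estimate has converged near the patient's true level of about 1, which is the whole point of the adaptive approach: precision across the full range without a 126-item booklet.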
Happy patients love filling out surveys. The nice thing is, if it's in your waiting room, it's easy for patients to do while they're waiting for you. And this is just something we do since we collect patient-reported outcomes on mental health: if people score high on certain things, a little button appears so the MA can print out a little sheet that we wrote up, asking patients, hey, you look pretty depressed or pretty anxious today; can we help? So that's the way we do it, and I think it works really well for us. Again, it's not the only way to do it. Now let's see if I can actually get this to advance again. Nice. So, concluding on this one: if we wanted to use patient-reported outcomes as the approach to answer a research question about carpal tunnel, it depends on the hypothesis a little, but like I was telling you, you can fit your patient-reported outcome to your question. If we wanted to do something with idiopathic carpal tunnel and say carpal tunnel release can improve Boston Carpal Tunnel Questionnaire symptom scores at five days, hey, you can use that one. As a hint, there have been a few research projects showing that PROMIS doesn't change at five days after carpal tunnel release, so it wouldn't be your best choice. You can ask questions like, in somebody with advanced carpal tunnel, say based on nerve tests, so you have your criteria for who you're going to study, carpal tunnel release will improve upper extremity scores, and you can say to a clinically relevant magnitude, by three months. So the more specific you can be, the more you can lead yourself to answering a nice question. Or you could say, in mild carpal tunnel, carpal tunnel release will improve Patient-Rated Wrist Evaluation pain scores by six weeks. So I would just recommend that you fashion your questions to specifically what you want to look at.
I like this one: in employed patients with carpal tunnel syndrome, time off work following carpal tunnel release is associated with depression. Probably true. But again, think about your research question. Don't base your question on the outcome measure you have; choose your outcome measure to answer what you want to ask, I guess is the way I would think about it. So, final thoughts: other than doing qualitative research, I would recommend trying to incorporate patient-reported outcomes if you're doing clinical studies. Be thoughtful about the one you choose. And if you're going to pick one, pick one as your primary outcome, explain why you did it, and make sure you're powered for your statistics. Thanks. Thank you. I think Utah, and then WashU, really gravitated towards using PROMIS. As young investigators at these institutions and others, having PROMIS has allowed a lot of our students and trainees to do a ton of research, which has been a great way for them to move forward, and also for some of the young faculty. I have some technical difficulties down there. Oh, good. Where they're not uploading, but I wonder. Let me click it one more time. Oh, yeah, because you're not in it. Yeah, perfect. They said it's just a... Okay, Dr. Kazmers is going to talk about a cost analysis approach. He's going to directly upload from his FOSTRA. Apologies, there are apparently some technical issues with the internet connection in the speaker-ready room. We'll get it going, though. Yeah, thanks, Dr. Wall, for having me here. I appreciate the opportunity to go over the question at hand in the context of a cost analysis study format, and we'll provide a few examples. All right, so the cost analysis approach to the question, how does carpal tunnel surgery benefit patients; let's go over some basics here. We've all been asked by patients, how much is this carpal tunnel release going to cost?
And sadly, it's hard to answer, and the reasons are severalfold. The definition of cost is not straightforward in health care; we'll go over some basic definitions, which will hopefully clarify things. And in general, there's just a lack of transparency on how much things cost, how much you're reimbursed or paid, and all of the above. So let's get into some definitions, as the definition of cost is context dependent. Often, and most pertinently, we think of things in terms of societal cost. It's the lump sum of everything: the surgeon, anesthesia, and facility payments, plus the indirect costs related to the patient's lost wages from being out of work for care and recovery. However, there are a few other ways to look at cost. If you're a health care system, you're going to care more about the amount you actually spent in supplies and materials to deliver care. If you're the payer, cost is the amount you pay out, and you might not care as much about what the patient paid out of pocket. But from the patient perspective, lost wages and out-of-pocket costs are what those folks think of as the definition of cost. So it really varies depending on who's looking at the question. The main ways costs have been studied in the context of carpal tunnel release in the recent literature are twofold. One is a focus on factors related to surgical encounter costs. This does not necessarily include preoperative testing or postoperative care; it focuses on the surgical encounter itself. And number two, the most rigorous way to do economic analysis in health care is the formal cost-effectiveness or cost-utility analysis, which incorporates the cost of care related to complications and everything that follows over a period of time, and also factors in the outcome of each patient. So we'll go over a couple of examples. The first example is from the surgical encounter perspective.
There have been numerous studies showing that endoscopic release costs more than open, and that use of the operating room costs more than the procedure room. There have been a lot of these in the past five to ten years, but we'll just go over this one briefly. So in this study that we did, we looked at direct costs of care, from a health care system standpoint, and we used WALANT, open release in the office setting, as the reference group with a relative cost of one. You can see that if you move that into the operating room with local only, the direct cost goes up about sixfold. If you involve the anesthesia team to do a Bier block or sedation, we're looking at a ten- or elevenfold increase in direct costs. And any form of endoscopic release might run 12 to 17 times more than doing the procedure under WALANT. We also looked at payments, because that's another perspective the payer and the health care system are interested in, and those results paralleled these. So what can you conclude from one of these cost studies that focuses on the surgical encounter only? You might surmise that the value of WALANT exceeds that of doing an endoscopic release or an open release in the OR, because the benefit, open or endoscopic, has historically been shown to be about the same over time in a bunch of studies. So the top of the value equation is the same, but you have variability in the cost, so you're making some assumptions there. The downside of this approach is that different facilities will likely have different payment or reimbursement rates and different contracts, so materials will cost different amounts from place to place, which can limit generalizability. And this type of study also does not incorporate how the patients in the study did. Did any of them have a complication that required further treatment and further cost? So that's the downside.
The upside is it's relatively straightforward, easy to do, and easy to understand. But if you're looking to bundle all those concepts together, you can do the second study format, which is the cost-effectiveness or cost-utility analysis. This allows you to basically compare bang for your buck amongst multiple treatment options or treatment strategies, say for carpal tunnel. And it does so by calculating the cost of treatment, including the index treatment and costs related to addressing complications or follow-up care, and then you divide that by the incremental effect or outcome, and you basically get cost per additional unit of benefit for each of your treatment strategies, and you can organize them in order of cost effectiveness to get some good information there. So in this context, cost may be assessed from two different perspectives, and well-done studies typically look at both of these perspectives. The first is the societal perspective. This includes the direct cost of all of the care, plus the indirect costs, usually lost wages, lost productivity, et cetera. And then the second perspective is that of the healthcare system or hospital system, and that drops the indirect costs that the patients suffer and looks only at the direct cost of care provision. For the effectiveness part, there are a few ways to do it, such as, you know, years of life saved, but in orthopedics, typically we're looking at quality-adjusted life years, or QALYs. What is that? It basically reflects the quality of life and the quantity of life together, and these are based upon the patient's health status, which can change over time. So in a cost-effectiveness study, if you have two treatments that lead to the same end, but one gets there faster, the one that gets there faster will probably have better QALYs, or more effectiveness, than the one with a slower recovery. So that's important to note if the trajectory of recovery is important to your study.
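The "cost per additional unit of benefit" idea above is usually expressed as an incremental cost-effectiveness ratio (ICER). A minimal sketch, with entirely invented costs and QALYs rather than the study's actual inputs:

```python
# Incremental cost-effectiveness ratio: extra cost divided by extra QALYs.
# All numbers below are hypothetical, not taken from the study discussed.
def icer(cost_ref, qaly_ref, cost_alt, qaly_alt):
    """Cost per additional QALY of the alternative vs. the reference strategy."""
    return (cost_alt - cost_ref) / (qaly_alt - qaly_ref)

# Hypothetical: the alternative costs $2,000 more and adds 0.05 QALYs.
value = icer(cost_ref=3000.0, qaly_ref=0.90, cost_alt=5000.0, qaly_alt=0.95)
# value is about $40,000 per additional QALY; you then compare that against a
# willingness-to-pay threshold to judge whether the extra benefit is worth it.
```

If the alternative is both cheaper and more effective, the ratio is no longer the interesting quantity: the alternative simply dominates, which is the "slam dunk" case mentioned later in the talk.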
Clearly, the cost-effectiveness analysis is just meant for this kind of carpal tunnel scenario, where open release has some pros and endoscopic release has some pros, and it's not like one treatment or the other is clearly the obvious winner. It's unclear from this list of, you know, pros and cons between the two, so this is a perfect thing to study. That's what we did in this recent project, a formal cost-utility analysis looking at basically three common treatment strategies. I realize some people do endoscopic release in the procedure room, but we didn't look at that. So we looked at open release in the OR and the procedure room, as well as endoscopic release in the OR. This type of model allows you to determine the effect of a variety of different complications pertinent to the surgery, and then if, you know, further treatments are needed, that has implications for the patient's effectiveness as well as overall treatment cost. And then clearly, when you do a carpal tunnel release, symptoms sometimes resolve or fail to resolve. You can have recurrence or persistence, and the model allows for revision in those instances, again with implications on function and effectiveness and cost. Sorry for the busy slide. Typically, the first result that's reported for this type of study is a base case analysis. This takes all of the input parameters from the literature at face value. You don't vary them; it's just the point estimate, and then you run the model based on that. Basically, this messy slide just points to the fact that open release in the procedure room had the lowest cost, and it was the most cost effective from the societal perspective. And then when we look at the healthcare or hospital system perspective, we see the same thing. Next, you typically vary the input parameters beyond what you get in the base case analysis, and that's done with a probabilistic sensitivity analysis.
Basically, the multiple input parameters that you have in the model that have a known variance or distribution are drawn from a distribution, and the model is run multiple times, so you can get slightly different cost and slightly different effectiveness based on the variability of everything in the model. When this was done here, open release in the procedure room remained the most cost effective, and specifically, over 50% of the iterations of the model showed that this was the dominant treatment strategy from both perspectives. Dominant means lower cost and better outcome. That's kind of the slam dunk category there. You can also do one-way sensitivity analyses as well. These can be really important to assess the robustness of your model, and basically you do this by artificially adjusting the input parameters to excessive levels that you would never see clinically. And basically this is helpful if there are certain parameters that are not well elucidated in the literature, or for certain key parameters in your model, when you want to show that small changes don't flip your result. When that was done in this context, you could lower the surgical cost of endoscopic release to free, zero, which is never going to happen, and the study results don't change. Similarly, you can reduce the days out of work to zero days after endoscopic, again, not plausible, and that didn't flip the result. So in conclusion from this type of a study, based on the sensitivity analysis, we conclude that the model is robust to the inputs, the results don't change based on mild fluctuations or even great fluctuations, and this allows us to conclude that open release in the procedure room is able to minimize costs, both from the hospital perspective and from societal perspectives, while providing favorable outcomes compared to the other options.
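A probabilistic sensitivity analysis like the one described can be mocked up as a Monte Carlo loop. Every distribution and number below is invented purely to show the mechanics of "draw inputs, rerun the model, count how often one strategy dominates"; none of it comes from the study's actual inputs.

```python
# Toy probabilistic sensitivity analysis in the spirit described above:
# draw each uncertain input from a distribution, rerun the model, and count
# how often one strategy is dominant (lower cost AND better outcome).
import random

random.seed(0)  # fixed seed so the sketch is reproducible
n_iter = 1000
dominant = 0
for _ in range(n_iter):
    cost_open = random.gauss(3000, 300)    # hypothetical cost draws ($)
    cost_endo = random.gauss(5000, 500)
    qaly_open = random.gauss(0.95, 0.01)   # hypothetical effectiveness draws
    qaly_endo = random.gauss(0.94, 0.01)
    if cost_open < cost_endo and qaly_open > qaly_endo:
        dominant += 1

share = dominant / n_iter  # fraction of iterations where open release dominates
```

Reporting the share of iterations in which a strategy dominates, as the study did with its "over 50%" figure, summarizes the whole joint distribution of cost and effectiveness rather than a single base-case point estimate.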
Limitations of this methodology in general include, oftentimes, the lack of robust input parameters in the literature for the different things that we want to model. There's a lack of consensus on the willingness to pay, what society is willing to pay for one quality-adjusted life year, and you can debate about that. And then oftentimes the direct costs are extracted from Medicare payments, and lost wages are used as a surrogate using the U.S. median wage. So again, for high or low earners you could maybe reach a different conclusion, or if you're using different cost sources, that could change the results a little bit as well. To wrap up here on economic analysis, there are limitations in general in this realm: there's always what benefits the surgeon the most, what benefits the patient the most, and what benefits society the most. Sometimes those can overlap, but sometimes they don't. And then there's taking these results and changing your practice, or figuring out how to apply this, and there's a lot of inertia in changing practice, and in changing a patient's mind too, as patients have been shown to not really be influenced by societal costs when making these decisions for carpal tunnel release. They kind of want what's best for them, and then society is kind of an afterthought, so you have to reconcile that. Similarly, patients prioritize different things, and it's, you know, a patient-to-patient discussion. Each one's different. Some may prioritize reducing out-of-pocket costs, and others are willing to pay more for something that they perceive might allow them to recover faster. So that's kind of cost analysis in a nutshell, and I appreciate your attention. Thank you. Yeah, that's a good question. I think, you know, starting out, it's definitely easier to do the cost study on the surgical encounter itself. The cost-utility studies, those are some of the more tedious, lengthy, and difficult studies of the handful of things that I've done.
So I think don't bite off more than you can chew, so start simple, and then it's just harder to find costs, right? Like some people use charges, some people use payments. I think if you kind of stick to something mainstream like Medicare payments, comparing multiple surgeries or treatment algorithms, that's probably the way to start. Okay, thank you, and congratulations on your productivity with that. So this is Dr. Lane from the UK. Hi there. I'm Jenny Lane. I'm an academic orthopedic registrar at the University of Oxford, and it's a great pleasure for me to talk to you today about administrative database research. I'm just really sorry I'm not with you, but hopefully you can still get a flavor of what we've been up to here in Oxford that will help you with your research question and your planning of those unique or different types of methodologies. Right, so let's get going. I've got no disclosures. I think wherever we go in life now, we're surrounded by data, whether it's social media, our choice at the supermarket, or scrolling on the internet, everything we do is collected. And whilst this can be a great cause for concern, it can also be of huge benefit to us as surgeons in order to better understand the role of surgery in the treatment that we give. When we think of the research question in this session, administrative data gives us the opportunity to undertake research that draws on the everyday life experience of all patients interacting within a particular healthcare system or setting, and that can add value onto the more focused clinical research that occurs in other studies. But what is it? To me, administrative data is defined as a byproduct of medical care in a digitalized world. It can also be called routinely collected data or real-world data, to give it an expanded term.
And it can come in various different forms, defined by the care setting in which it's generated, the method of data capture, whether that's an electronic healthcare record or an insurance or claims system, or by the healthcare system that it's representing. And it can represent a smaller population in one region, or it can represent a nationalized healthcare system or insurance data set. And if we think of some examples, this can be from electronic healthcare records in one institution, or it can be a cut from Medicare or Medicaid, or it could be a whole data set such as what we have in the UK, where we have primary care data generated from general practice as the gatekeepers to secondary care within the NHS, and that's captured within the CPRD data set. Or it can be a collaboration with multiple data sources provided by a data provider such as PearlDiver, or a consultancy firm such as IQVIA, or within exciting innovations where data sources can work together to generate federated network analyses, such as the OHDSI community that I'm part of. As with all research, there are strengths and weaknesses to using routinely collected data. The main benefit is that it can be used to identify real-world practice and contains patients who may be outside of inclusion criteria for trials, such as those who have multimorbidity or are at extremes of age. And you can have a longer follow-up to identify outcomes that may not be financially possible to explore in other research studies, and you can investigate factors that you can't randomize on, either because they're demographic factors or because they're associated with treatments that already have established efficacy, which makes it unethical to randomize against them. But you always have to remember that the data was not generated with research in mind. And you have to be very clear, and it's very important to think about the assumptions that you're making when you're looking at a particular data set.
It can be based on coding systems that only identify certain conditions, which may have biases, or it may only include certain procedures based upon where it's generated. Sometimes this can lead to problems in surgical epidemiology, as it can only identify certain surgeries and conditions, or it may not have sufficient granularity of indication, which creates the potential for confounding or inappropriate associations to be made. The overall outcome, as we want to investigate in this study, can only be a proxy outcome, such as complications or revision surgeries, because there are very few routinely collected data sets that currently include patient-reported outcome measures. I thought I'd walk you through an example study I've undertaken looking at outcomes following carpal tunnel surgery in the NHS in England. I did that using a data set called Hospital Episode Statistics, which includes all admitted patient care in England, including day cases. The data extract spans 19 years, and prior to undertaking the study, I led a validation study to explore the coding used in this data set, to ensure it represented true cases and that my case definition was as appropriate as it could be. The data includes demographic details around social deprivation, as well as ethnicity and comorbidities, and it links by NHS number to all other episodes of care. So providing the episode was undertaken in the public healthcare system, it will identify revision surgery or complications that are treated in any NHS hospital in England. Overall, there were over 855,000 surgeries undertaken during the time period, and around 29,000 revision surgeries were identified, with a serious adverse event rate within the data set of less than 0.1%. What was really interesting was to look at the incidence of surgery in both men and women, and the age at which they underwent surgery.
So the red line here shows the trend in women and the blue line in men. And what you can see is that there was a peak in the incidence of surgery undertaken in women around the age of the menopause, and in both genders after 65. All surgery in the NHS here is free at the point of care, so it's interesting to see a trend of increasing surgery after retirement age. It's difficult to know what drives that, but one wonders whether it's related to social factors rather than disease etiology itself. This plot shows the benefit of the longitudinal nature of data in a routinely collected data source. As you can see from the trend in surgical incidence over the 19 years, there's an increase and then a leveling off in the number of surgeries undertaken. And here I've compared it to what we found when we looked in the same data set at surgeries for basal thumb osteoarthritis. And whilst the incidence, on the y-axis on the right-hand side, is much smaller than that for carpal tunnel decompression surgeries, what you can see is that there is just an increasing trend in surgeries undertaken over the time period. And it's likely that this is due to the political rationing of carpal tunnel surgeries that we've seen in the NHS during this period, which has not been seen for basal thumb osteoarthritis. And talking about basal thumb, well, that's a story for another day. If we go back to our main study here, looking at the outcome from carpal tunnel decompression surgery, we can see the risk of revision surgery over time shown in this Kaplan-Meier plot. And you can see that the overall risk of revision was low, at around 3%, with most revisions occurring within the first year and little revision after this, but with rates higher for men, the plot in blue, than for women, in red. We then investigated the risk factors associated with undergoing revision surgery.
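The revision-risk curve just described comes from the Kaplan-Meier method, which handles patients who leave the data (censoring) before any revision occurs. Here is a bare-bones sketch of the estimator; the follow-up times and event flags are made up, with True marking a revision and False a patient censored without one.

```python
# Bare-bones Kaplan-Meier estimator, the method behind the revision-risk
# plot described above. Times and flags below are invented for illustration.
def kaplan_meier(times, events):
    """Return (time, survival) pairs at each time where an event occurred."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    surv = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        at_t = [e for tt, e in data if tt == t]  # everyone leaving at time t
        n_events = sum(at_t)                     # revisions at time t
        if n_events:
            surv *= (n_at_risk - n_events) / n_at_risk
            curve.append((t, surv))
        n_at_risk -= len(at_t)                   # events and censored both leave
        i += len(at_t)
    return curve

curve = kaplan_meier(
    times=[1, 2, 2, 3, 4, 5],                        # years of follow-up
    events=[True, False, True, False, False, False], # True = revision surgery
)
# Survival drops at t=1 and t=2; later patients are censored without revision.
```

One minus the final survival value gives the cumulative revision risk, which is how a plot like the one described arrives at a figure such as "around 3%".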
And this forest plot shows the results of the multivariable Cox regression analysis. Here, what we found is that, adjusted for all other factors, there was an increased risk of revision surgery in those who had a higher level of overall comorbidity, as defined here by their Charlson Comorbidity Index, which is a little bit like the Elixhauser, and also for deprivation. So those who were in more deprived groups in society had an increased risk of revision. This may relate to an interaction with response to surgery for those who have greater levels of comorbidity, to undergoing surgery late, or potentially to recurrence due to occupation, as is suggested by the trend toward increased risk of revision surgery in the greater deprivation groups. But here is where we see a limitation in routinely collected data, because fields like occupation and indication for surgery are not included in the data set. So we can only consider the potential reasons for the trends being seen. This study was published last year in Lancet Rheumatology, and it's free to access. So if you scan the QR code, you'll be able to look at the paper and have a look for yourself and think about the things that we found. It's been a great honor presenting to you today, and I'm just so sad not to be with you, given all the restrictions that happened due to the pandemic. But I hope that in future years, I'll be taking your questions in person. I'll be tweeting right now, so do reach out. That's my email if you're interested in things in future. I hope to see you soon. Okay, that was incredible. I would like our speakers to come up, and then I know everyone has burning questions. The first question I want to kind of throw out there is, I don't know if everyone saw this on my first slide, that every single one of these people has extra letters after their MD. So do you guys think that, or what did you benefit from, sort of seeking out a master's or, as Jen got, a PhD to help do research?
Is that necessary? Is that something you recommend? What did you get from that extra education that has helped, just sort of talking to people who are doing very successful research, and what are your thoughts on extra education for this? Full disclosure, you also have letters after your MD. Yeah. So for me, I did my MPH when I was in medical school, and that's what drove my interest. I was really interested in health disparities, still am, and I did a lot of administrative data work based on the training I got there. I think that I needed additional training in qualitative research. It was the perfect building block for a career development award, for a K award, so I used that as a training component for the K. So I didn't get a degree in this kind of work. I think that doing the additional training helped me round out my skillset, and I like being able to pivot between all the different methods that were described earlier. I would say that, so I did a little extra training after becoming faculty, and it was really at the urging of Dr. Gelberman, who was our chair at the time, and that had to do with him saying that you need to find a niche, and kind of find a spot that you can really claim, and if you just want to do clinical research, which isn't basic science research, which is what he really liked, and still likes, then you needed to do something to kind of show that you were serious about it, so that's why I did it. Now, the benefit afterwards, I think, has been really helpful to me. I'm not a statistician. I can't do the higher level things that they can do, but I think it's allowed me to speak their language a little bit better, and interact with them, and understand what I'm asking them to do, and what they're giving back to me. And honestly, in academics, it can go a long way towards just helping you out with what you want to produce, and it's helped me immensely in my career with the Hand Society and such, but that was all unforeseen.
It just kind of worked out, so I was really fortunate. Yeah, and then I did a master's in engineering, but that was way before medical school. I don't think it's mandatory to have kind of extra credentials, but it definitely has helped me learn the language of the statisticians, and to be able to do some basic stuff, and then to be able to communicate better with the people who do the higher level stuff, so that's my take on it. Just the thought process that you get from the engineering background is helpful, I think. I'll open it up to the floor. Any questions? Yes. Thank you. Dr. Calfee, could you talk about whether or not you interrupted your practice to do your master's, and secondly, I'd like to hear your comment on informed consent for your outcomes surveys. Is that just a routine thing, or do you, knowing that you're a researcher? Got it, great question. So I did not really interrupt my practice, but it did curtail what I was doing. So I did the master's over three years, after having been on faculty for a year or so. So I wasn't fully busy, and the courses were largely in the late afternoon, say 4 to 6 p.m., at our institution. So I did have to end clinics early, I did have to end the OR early to be able to do that, and I did two or three courses a semester, spread out. So yes, there was definitely a sacrifice involved, although in all honesty, again, when you start, you're not usually booked all the way through the day. But that's how I incorporated it. So I'd say it's an easy, or at least a feasible, thing for people to do early in practice. In terms of the consent, so for our patient-reported outcomes, we do not have a separate consent for that. The scores are visible as soon as they turn in the iPad, and oftentimes I look at them when I walk in the room. So we consider that part of routine clinical care, and because we've considered it that way, we do have to get specific IRBs every time we want to pull it and use it for a project.
So that's how we've approached it. Other people may be able to get a research consent upfront, and then treat that as an umbrella IRB type of thing, and then just have free access to things. But we've gone with the specific IRBs each time afterwards. How does Utah do it, the same way we do, or? Yeah, seeing that video, you guys do it in a pretty slick fashion. We're going through a transition right now, and we might have to talk after to get your thoughts. But the video's on YouTube, you can pull it up. But we've typically done it with iPads, same as Dr. Calfee and the other panelists. Standard of care, kind of a pathway there with an iPad, the QuickDASH, and some of the PROMIS instruments, and some other miscellaneous questions. And IRB-wise, you do an IRB per each study? Yeah, we have an overarching umbrella IRB to do anything retrospective with that data, but we have to fill out a short form and get each one approved. And I'll just say that, although this is with PROMIS, we used to always do the same thing with the QuickDASH, but that was only for new patients. And again, they just checked it off, and then it got scanned into the record to be looked at later. But it's been our same approach throughout. Yes? Thank you for your time. That was a great talk. For those of you using the PROMIS scores, have you encountered any issues in practice with interpreting the normalized score? Any issues? So, I haven't had any specific issue. What do you mean by interpreting the normalized score? So, my understanding, and I may be wrong, is that 50 represents the normal American citizen. Correct. So, when you issue it to a disease-specific population, that may be less or greater. Sure, so I guess I haven't viewed it as an issue, but you're right.
The 50 is this imaginary person, if you could picture the perfect meld of the US in terms of all of its census breakdown of sex and race and everything else, that's where you kind of get to the 50. But we've just looked mainly at kind of where people start and where they end after treatment. I'll tell you that if you look across our entire department, it's amazing how consistent things are. It seems to me like people present to orthopedic providers with a pain score of about 60 and a function score of about 40. They're about a standard deviation worse on each of those. And maybe it's a little bit worse for our spine colleagues, but when I look across our department, it's just about like clockwork for most people. I do have, oh, sorry. I just wanted each of you to point out, based on the topic that you presented on, what resource did you have to find to do that type of research? Like, what did you need money for, or did you need money? What resource did you not have when you started that you needed for that type of research? I needed a mentor and I needed to take a course. It happened that the mentor was one who taught the course as part of a master's program. And who was your mentor? In your department? Yeah, so it was a mentor at the medical school, not in the department. So somebody who's a very specific methodology mentor, and she teaches a qualitative research course in one of the master's programs at Wash U. So it was easy to build that in, to enroll in that course, take the course, but then also have some separate mentoring. I would say for the patient-reported outcomes, our limitation was the cost of getting PROMIS implemented. My biggest suggestion if you just want to use patient-reported outcomes is start easy. If there's no money to be used, paper forms or the QuickDASH are probably my second choice.
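Circling back to the normalized-score question from a moment ago: the T-score convention (population mean 50, standard deviation 10) is just a linear rescaling of a standardized trait estimate. The reference mean and SD defaults below are placeholders standing in for whatever an instrument's actual calibration specifies; this sketch does not reproduce any real PROMIS scoring table.

```python
# The T-score convention: mean 50, SD 10 relative to the reference population.
# ref_mean and ref_sd are hypothetical stand-ins for a real calibration.
def t_score(theta, ref_mean=0.0, ref_sd=1.0):
    """Map a standardized trait estimate onto the T-metric (mean 50, SD 10)."""
    z = (theta - ref_mean) / ref_sd
    return 50.0 + 10.0 * z

# A patient one reference standard deviation worse than average scores 60,
# matching the "about a standard deviation worse" pattern noted above.
worse_by_one_sd = t_score(1.0)  # 60.0
```

This is why a disease-specific clinic population centering near 60 on pain or 40 on function is immediately interpretable: each 10 points is one standard deviation of the reference population.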
But if you're at any sort of academic center with access to REDCap, which is typically free, you can use the PROMIS instruments through REDCap. You just won't get them into your electronic health record, but you can make a research database, and that doesn't usually cost anything. And for my part, on the cost analysis side, the main thing that I needed was some high-level statistical help, you know, some funding to kind of float that. Even the simpler project was using some regression models that I'm not able to do on my software. And then the cost-effectiveness analysis, it's very, very time intensive. You have to know how to program TreeAge and do all sorts of things that I don't know how to do. So I think, like any research, finding the right collaborator to make it work, that's always key, and that was no different with these. If I could add one other thing, I think when you think about doing research, and if you want to do it and you're at an academic center, especially if you're young, it's hard, but you have to have protected time. That's the other thing you don't necessarily have in every situation. You've got to have like a day a week. This can't all be nights and weekends, Saturday and Sunday. If you want to be productive, you've got to talk to your chair and say, look, I need this time, it's going to have to be a few less RVUs. That's a Dr. Gelberman philosophy for everyone. I'll say also, for what Nick was commenting on for the cost analysis, a medical librarian is incredibly helpful, because you have to really look at the literature, and that's an important person to have on your team. Sure. So, I think that's a great way to go. The Hand Society, we had been working, I had been chairing a task force trying to see if we could establish some registry efforts. Those are not going to happen, unfortunately. It ended up being cost prohibitive.
Our society is not big enough and doesn't have enough funding to really run a registry. And we did a lot of looking at other specialty societies and talking to folks about who we could partner with. That being said, there are some grassroots efforts that have done great things with registries. So, Dr. Wall and Dr. Goldfarb have done a really nice job starting from basically no funding to build the CoULD registry, which is now, probably like most registries, seven years old and very productive, but it takes a while. If you're interested in pursuing stuff like that, though, the Hand Society is interested in supporting it. They're just not going to be able to run it. So, things that are happening: there's a multicenter group trying to look at SL ligament injuries, headed out of HSS, with about 18 centers now. And the Hand Society, I don't know if it's been announced yet at this meeting or if it's today, but they're going to have a round of funding, $100,000 three-year grants, for people trying to start their own registry and multicenter efforts. If you can get together a team that's interested and motivated and can put together an application, there may be a way to get some funding for that. Will they retroactively support, like, seven-year-long registries? There is not going to be anything retroactive. If you could come up with a reason why you need it for another three years. But it's something that the Hand Society is interested in, and something that I agree with you would have some real value, but I think it's going to have to be sort of initiated by individuals at certain institutions. Chris, you're trying to start a similar brachial plexus registry, correct? Yeah, so we have a prospective multicenter cohort for brachial plexus. Don't underestimate the amount of bureaucracy that it takes to get through something like this. So, we started with an AFSH grant for three centers. It took a year to get three centers through their individual IRBs.
And then we pivoted towards NIH funding, and when we got the NIH funding, they required a single-site IRB, so that took an additional almost a year to happen. So, it takes a long time. There are tons of benefits from doing it. It's a lot harder than people think. One more question, and then we'll probably close, but everyone's available. So, I don't know what Nick is collecting exactly. I think we're pretty similar. We do PROMIS upper extremity function, physical function, and then we do pain interference, anxiety, and depression. Those last three are kind of our mental health side of things. In terms of adjusting the range, we haven't done any adjusting. I mean, there are limitations in the range that's there. There are certainly some kind of cutoffs. You don't quite get to 100 or down to zero. But we also don't do depression in kids. We do peer relations instead. There was a little bit of an issue in the beginning. Yep. Oh, no. The scoring all just happens automatically, so really there's no process of you scoring it or anything like that, and it automatically puts it on that normalized 50 mean, with the standard deviation up or down by 10 points. Nick, do you guys do the same surveys or the same assessments? Yeah, basically the same. We dropped the depression CAT just because they wanted us to reduce the question number a little bit, but otherwise the same. Thank you all very much. Appreciate your attendance. Thank you guys so much. That was so nice of you guys to show up this morning. Thank you.
Video Summary
The video transcript discusses alternative research methodologies in hand surgery, specifically focusing on carpal tunnel surgery. The first speaker discusses the use of qualitative research to understand patients' experiences with carpal tunnel syndrome. The second speaker talks about patient-reported outcomes (PROs) and how they can provide insights into the benefits of surgery. The third speaker discusses cost analysis and the need for transparency in healthcare costs. The panel discussion also covers topics such as administrative data analysis, registry studies, and the importance of collaboration and mentorship in research. Overall, the transcript emphasizes the importance of considering different research methods to gain a comprehensive understanding of the benefits of carpal tunnel surgery.
Meta Tag
Session Tracks
Arthritis
Session Tracks
Wrist
Speaker
Arnold-Peter C. Weiss, MD
Speaker
David G. Dennison, MD
Speaker
Kevin J. Renfree, MD
Speaker
Thomas W. Wright, MD
Keywords
alternative research methodologies
hand surgery
carpal tunnel surgery
qualitative research
patient-reported outcomes
surgery benefits
cost analysis
healthcare costs
administrative data analysis
collaboration