Research Spotlight

Artificial intelligence could help medical students get better feedback

Vital Signs » Summer 2022
[Illustration: brain transforming into digital squares]

Studies have shown that doctors can be more effective when they ask better questions and let patients do more of the talking. But, too often, doctors fall back on the practice of educating and advising patients when the situation calls for better listening.

Researchers at BSOM are now using the latest in artificial intelligence technology—a computer application they call ReadMI—to provide better feedback to medical students and residents on their use of motivational interviewing (MI) when talking with patients. The researchers say their work “has the potential to transform MI training” and improve the quality of health care.

MI is a patient-centered style of conversation in which physicians speak less and ask more open-ended questions designed to strengthen patients’ motivation and commitment to make healthy lifestyle changes. Physicians using MI guide patients toward taking charge of improving their health.

Paul J. Hershberger, Ph.D., a BSOM professor of family medicine, noted that the U.S. “spends more on health care, but we actually have poorer outcomes than most developed countries, particularly in chronic disease morbidity. That suggests we should pay more and better attention to patients’ responsibility in their health care.”

Motivational interviewing doesn’t preclude doctors from educating and advising patients, but it puts more emphasis on leading patients to express their health concerns and make plans for improvement.

In addition to open-ended questions, MI emphasizes the use of reflective statements designed to elicit a response, such as “You’re very concerned about the possibility of developing diabetes,” followed by a pause for a patient response. Another MI technique is asking patients to use a scale of 0–10 to rate importance, readiness, or confidence about change.

Hershberger said MI has been shown to be effective in motivating patients since its introduction in 1983, but “it’s never been widely implemented. Telling patients what to do seems easier and gets modeled in the clinical setting.”

MI training can be time-consuming, Hershberger explained, because it involves transcribing and evaluating interviews to quantify doctors’ use of MI skills.

That’s where ReadMI comes in. BSOM researchers, in partnership with the Wright State College of Engineering and Computer Science, developed the training tool to automate the process of transcribing and evaluating MI interviews. ReadMI stands for Real-time Assessment of Dialogue in Motivational Interviewing.

ReadMI uses the low-cost, highly accurate Google Cloud Speech service to transcribe interviews, then analyzes the transcripts to evaluate how well doctors are using the MI approach. ReadMI “produces a spectrum of metrics for MI skills evaluation, including the number of open- and closed-ended questions asked, provider versus patient conversation time, number of reflective statements, and use of a change ruler (the 0–10 scale)...eliminating the need for time-consuming reviews of recorded training sessions,” according to the scholars’ research.
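To make the idea concrete, here is a minimal sketch, in Python, of how a transcript could be scored for some of the metrics the researchers describe: open- versus closed-ended question counts, change-ruler use, and provider-versus-patient speaking time. This is not ReadMI's actual implementation; the keyword lists and pattern matching are illustrative assumptions, and harder metrics such as reflective-statement detection are omitted.

```python
# Illustrative sketch of MI-style transcript scoring (not ReadMI's code).
# The question-starter keyword lists and the change-ruler pattern are
# simplified assumptions for demonstration purposes.
import re

OPEN_STARTERS = ("what", "how", "why", "tell me", "describe")

def score_transcript(turns):
    """turns: list of (speaker, utterance, seconds) tuples."""
    metrics = {"open_questions": 0, "closed_questions": 0,
               "change_ruler": 0, "provider_seconds": 0.0,
               "patient_seconds": 0.0}
    for speaker, text, seconds in turns:
        key = "provider_seconds" if speaker == "provider" else "patient_seconds"
        metrics[key] += seconds
        if speaker != "provider":
            continue  # only the provider's utterances are scored
        lowered = text.strip().lower()
        if lowered.endswith("?"):
            if lowered.startswith(OPEN_STARTERS):
                metrics["open_questions"] += 1
            else:
                metrics["closed_questions"] += 1
        # Detect use of the 0-10 "change ruler" scale
        if re.search(r"scale of 0\s*(?:to|-)\s*10", lowered):
            metrics["change_ruler"] += 1
    return metrics

turns = [
    ("provider", "How ready are you to quit, on a scale of 0 to 10?", 5.0),
    ("patient", "Maybe a six. I know I should.", 4.0),
    ("provider", "Do you smoke every day?", 2.0),
]
print(score_transcript(turns))
```

A real system would work from speech-recognition output with per-word timestamps rather than hand-labeled turns, and would need far more robust utterance classification than keyword matching.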

Automating the transcription and some of the analysis of doctor-patient conversations greatly speeds the process and frees faculty to evaluate the spirit of the conversation, which can’t be graded by a machine, Hershberger said.

The initial BSOM research in 2019 used ReadMI to analyze 48 role-play conversations between simulated patients and residents in family medicine and internal medicine. The simulated patients presented prepared scenarios to residents, including a patient requesting more opioid pain medication and a member of the clergy using marijuana to cope with stress.

The conversations were transcribed and analyzed by ReadMI. Five human MI training facilitators also read the transcripts and rated the physician utterances. Overall, there was moderate agreement of 33.3 percent between ReadMI and the human raters, but ReadMI was more than 90 percent accurate in producing transcripts, discerning open-ended and closed-ended questions, and isolating physician- and patient-speaking time. It was weaker in recognizing reflective statements.

“A significant, negative correlation was found between physician-speaking time and the number of open-ended questions asked,” according to the study. “That is, the more time a physician spends talking, the fewer open-ended questions they ask.” The same inverse relationship with physician-speaking time held for other MI metrics.

A forthcoming paper based on a randomized, controlled trial of 120 medical students will report that students asked significantly more open-ended questions if ReadMI was employed in interview feedback, Hershberger said.

Yong Pei, Ph.D., LexisNexis Ohio Eminent Scholar and professor of computer science at Wright State, helped to develop ReadMI. He said ReadMI was especially accurate in online interviews, which makes it a good fit for telehealth visits.

In the future, project leaders Hershberger, Pei, and Dean A. Bricker, M.D., a BSOM associate professor of internal medicine, hope to see ReadMI become even more automated, accurate, and user-friendly for faculty. They also hope for real-time analysis so doctors can see their MI metrics on screen during consultations.

BSOM introduces motivational interviewing during students’ first two years, and provides training in the family medicine clerkship and in several residency programs.—Tom Byerline

Last edited on 06/06/2022.