What you see is what you say
FIU and Baptist have together found a way to improve the accuracy and speed of reading radiology scans.
Some diseases and conditions hide inside our bodies: calcified arteries that could cause a heart attack, infections, slight fractures in a bone or even cancerous tumors.
Special imaging, like x-rays, MRIs or CT scans, makes them visible. But what’s revealed in those bluish-tinted black and white pictures takes a highly discerning eye to decode.
Radiologists, like detectives looking for clues to close a case, know what to search for in those images of our brains, lungs, arteries, bones and muscles. They are specialists at spotting what should not be there, and they create detailed reports that help doctors confirm a diagnosis and guide treatment.
For Dr. Ricardo Cury, chair of radiology at both FIU and Baptist Health, it’s not uncommon to examine anywhere from 20 images for simple cases like x-rays to 200 for more complex MRIs. That can add up to an average of 50 to 60 reports daily.
Advancements in technology, like AI-powered speech-to-text tools, help radiologists dictate and complete these important reports for the patient’s healthcare team. But the software sometimes introduces mistakes into the transcription, and those errors can have ripple effects.
“Every error is significant and can lead to inaccuracies in interpretation of findings,” Cury explains. “Our goal is accurate, high quality and highly specialized interpretations. More accurate reports benefit patients.”
That’s why Cury, as part of a collaborative team of researchers from FIU’s Herbert Wertheim College of Medicine and Baptist Health, investigated the accuracy and effectiveness of a new reporting style that is more concise but still structured and easy to read. It uses voice commands for dictation but focuses only on documenting the abnormal findings radiologists are trained to spot and doctors need to know about. Basically, what they see (that shouldn't be there) is what they say.
The results, published in the European Journal of Radiology, showed the new style outperformed the checklist-style reporting approach commonly used by radiologists.
It not only improved radiologists’ focus on the diagnostic images but also cut down on inaccuracies and dictation time — a win both for patients whose treatment plans hinge on accuracy and for radiologists with heavy workloads.
“I saw firsthand how this new dictation process significantly enhanced radiologists’ focus on the diagnostic images,” said Mona Roshan, an FIU third-year medical student and the study’s first author who worked under the guidance of Cury, her mentor. “The eye-tracking software validated our results, because it allowed us to create heatmaps showing that radiologists were directing their attention to the imaging. This is better for the patient, as a heightened focus on the images means that radiologists are dedicating more time to interpreting potential abnormal findings, ultimately improving patient care.”
Cury, who has worked as a radiologist for over two decades, knows the standard checklist reporting style well. He describes it as a sort of ping-pong game of attention between two separate screens.
One displays the diagnostic images; the other shows the spoken findings being transcribed into written words that populate different parts of the report. What’s most laborious and distracting is also having to act as self-editor, correcting mistakes that occur when the voice software “hears” something different from what was actually said.
“With the method we’re proposing, you’re 100% focused on the images and you’re going through the checklist in your mind, what you’re trained for,” Cury said. “The beauty is you’re only dictating the abnormal findings and only in the end when you’re finished interpreting the findings, you can check the report for accuracy.”
Researchers shadowed experienced, board-certified radiologists who wore eye-tracking goggles while reading a variety of x-rays, MRIs and CT scans. In total, the radiologists reviewed more than 150 studies — 76 with the new dictation style and 81 with the standard checklist. With the new style, average dictation time dropped by approximately 50% without affecting total interpretation or examination time.
The team says this research is a starting point for future work. As conversational and generative AI continue to evolve, the new reporting style could further cut reporting times, improve the overall efficiency of radiology reporting and increase accuracy.
For medical students, like Roshan, who are about to enter the field, the research is also exciting for other reasons.
“I loved that this research provided me the opportunity to learn more about radiology in general,” she said. “I can’t wait to put this reporting style into practice in the future, because I saw the impact it can have.”