Large language models can flag missed diagnoses in radiologist notes 


A HIMSS25 session will show how a large language model (LLM) can monitor radiologist notes to help protect patients from medical errors and ensure they receive their recommended follow-up appointments.

Medical error, including wrong or delayed diagnosis, is one of the top preventable causes of death in the United States. Missed opportunities for diagnosis (MOD) are particularly common in diagnostic imaging, where incidental findings require additional evaluation to complete the assessment for a potential pathology. Parkland Health, a major safety-net public health system, refers to this as delayed imaging surveillance.

At Parkland, 1.7% of all CT and MRI studies involve such incidental findings, according to the Dallas-based system.

“This session will offer a clear view of what this kind of program makes possible in a high-volume, safety-net healthcare setting where resources can be limited,” said Alex Treacher, Ph.D., senior data and applied scientist at the Parkland Center for Clinical Innovation (PCCI).

Treacher and others from Parkland are speaking at the HIMSS25 session “Creating a Large Language Model to Catalog Important Radiologist Recommendations” being held Wednesday, March 5, from 3:15 to 4:15 p.m. in the Venetian | Level 5 | Palazzo O in Las Vegas. 

Parkland researchers have developed a large language model that identifies and flags delayed surveillance recommendations from radiologists’ interpretations. The LLM has been integrated into Parkland’s electronic health record, enabling centralized management and navigation of these cases. Results demonstrate 95% accuracy in identifying imaging that requires follow-up based on physician notes and 85% accuracy in determining the appropriate timing for follow-up. 
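Parkland has not published its model or prompts, so as a purely hypothetical illustration, the sketch below uses simple pattern matching as a stand-in for the LLM to show the two outputs the article describes: whether a radiology note recommends follow-up imaging, and the recommended interval. All function and pattern names here are invented for illustration.

```python
import re

# Hypothetical stand-in for Parkland's LLM: rule-based patterns that
# flag follow-up recommendations in a radiology note. The real system
# uses a large language model integrated with the EHR; this only
# illustrates the shape of the task, not the actual method.
FOLLOWUP_PATTERNS = [
    r"recommend(?:ed)?\s+follow[- ]?up",
    r"follow[- ]?up\s+(?:CT|MRI|imaging|study)",
    r"surveillance\s+imaging",
]

# Captures timing phrases such as "in 6 months" or "within 3 weeks".
INTERVAL_PATTERN = re.compile(
    r"(?:in|within)\s+(\d+)\s*(day|week|month|year)s?",
    re.IGNORECASE,
)

def flag_followup(note: str) -> dict:
    """Return whether a note recommends follow-up imaging, and the timing."""
    needs_followup = any(
        re.search(p, note, re.IGNORECASE) for p in FOLLOWUP_PATTERNS
    )
    interval = None
    if needs_followup:
        match = INTERVAL_PATTERN.search(note)
        if match:
            interval = f"{match.group(1)} {match.group(2).lower()}s"
    return {"needs_followup": needs_followup, "interval": interval}

note = ("8 mm pulmonary nodule in the right upper lobe. "
        "Recommend follow-up CT in 6 months to assess stability.")
print(flag_followup(note))  # {'needs_followup': True, 'interval': '6 months'}
```

In the production system described in the article, the detection step is performed by the LLM and the flagged cases flow into a centralized EHR worklist for navigation.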

“The large language model is outperforming manual review, which can be cumbersome, time-consuming and more error-prone, and in our experiment we found 98.1% accuracy for the LLM’s detection of follow-ups,” Treacher said.

Speakers at the session include Treacher; Albert Karam, vice president of data strategy and analytics at PCCI; and Brett Moran, chief health officer at Parkland Health.

Email the writer: SMorse@himss.org


