July 1, 2024

ChatGPT’s Precision in Analyzing Medical Charts: A Game-Changer for Clinical Research and Decision-Making

UT Southwestern Medical Center researchers have discovered that ChatGPT, an artificial intelligence (AI) chatbot, can extract essential data from physicians’ clinical notes for research purposes with a high degree of accuracy. This study, published in npj Digital Medicine, could revolutionize clinical research and lead to advanced computerized clinical decision-making aids.

The researchers, led by Yang Xie, Ph.D., Professor in the Peter O’Donnell Jr. School of Public Health and the Lyda Hill Department of Bioinformatics at UT Southwestern, found that ChatGPT could transform unstructured clinical notes into structured, research-ready data. This development paves the way for AI to derive meaningful information from free-text records, improve clinical decision-making, and ultimately enhance patient outcomes.

Dr. Xie, who is also the Associate Dean of Data Sciences at UT Southwestern Medical School, Director of the Quantitative Biomedical Research Center, and a member of the Harold C. Simmons Comprehensive Cancer Center, explained that clinical notes are a rich source of information but are typically written in free text. Extracting structured data from these notes usually requires a trained medical professional to read and annotate them, which is a time-consuming and resource-intensive process that can introduce human bias.

To explore whether ChatGPT could convert clinical notes into structured data, Dr. Xie and her team had it analyze more than 700 sets of pathology notes from lung cancer patients to identify the major features of primary tumors, lymph node involvement, and cancer stage and subtype. ChatGPT achieved an average accuracy of 89% in making these determinations, as judged by human readers. The human readers’ evaluation took several weeks of full-time work, whereas refining the data extraction with ChatGPT took only a few days. ChatGPT’s accuracy was also significantly better than that of the traditional natural language processing methods tested for the same task.
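To make the workflow concrete, here is a minimal sketch of this kind of extraction call, written with the openai Python client. The model name, prompt wording, JSON field names, and note text are illustrative assumptions for this article, not the study’s actual materials.

```python
# Minimal sketch of structured data extraction from a pathology note.
# Model name, prompt, and note text are illustrative, not the study's own.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

pathology_note = (
    "Right upper lobe lobectomy: 3.2 cm invasive adenocarcinoma. "
    "2 of 11 hilar lymph nodes positive for metastatic carcinoma."
)

prompt = (
    "From the pathology note below, report the primary tumor size, "
    "lymph node involvement, and histologic subtype as JSON with keys "
    "'tumor_size_cm', 'positive_nodes', and 'subtype'.\n\n"
    f"Note: {pathology_note}"
)

response = client.chat.completions.create(
    model="gpt-4",  # illustrative; the study's exact model/version may differ
    messages=[{"role": "user", "content": prompt}],
    temperature=0,  # deterministic output suits structured extraction
)
print(response.choices[0].message.content)
```

In practice, the returned JSON would then be validated and written to a research database alongside the note identifier.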

To test the applicability of this approach to other diseases, Dr. Xie and her colleagues used ChatGPT to extract cancer grade and margin status from 191 clinical notes on osteosarcoma patients at Children’s Health. ChatGPT returned the information with nearly 99% accuracy for grade and 100% accuracy for margin status.

Dr. Xie emphasized that the results were heavily influenced by the prompts given to ChatGPT for each task, a practice known as prompt engineering. Providing multiple answer options to choose from, giving examples of appropriate responses, and directing ChatGPT to rely on evidence in the note to draw its conclusions all improved performance, as illustrated in the sketch below.
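The following hypothetical prompt template illustrates those three techniques: constrained answer options, a worked example, and an instruction to ground the answer in quoted evidence. The wording is an assumption made for illustration, not the study’s actual prompt.

```python
# Hypothetical prompt template combining three prompt-engineering techniques
# mentioned in the article: fixed answer options, a worked example, and an
# instruction to cite supporting evidence from the note.
EXTRACTION_PROMPT = """\
You are extracting cancer stage from a pathology note.

Answer with exactly one of: I, II, III, IV, or "not stated".

Example:
Note: "pT2N1M0 adenocarcinoma..." -> Stage: II

First quote the sentence from the note that supports your answer,
then give the stage. If no sentence supports an answer, reply "not stated".

Note: {note_text}
"""

# Fill in a note before sending the prompt to the model.
print(EXTRACTION_PROMPT.format(note_text="pT1aN0 squamous cell carcinoma..."))
```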

Using ChatGPT or other large language models to extract structured data from clinical notes could not only accelerate clinical research but also aid clinical trial enrollment by matching patients’ information to clinical trial protocols. However, Dr. Xie stressed that ChatGPT would not replace the need for human physicians.
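As a toy illustration of that matching step, the sketch below compares structured fields (of the kind an LLM might extract) against a trial’s eligibility criteria. The field names, criteria, and values are hypothetical, not drawn from the study.

```python
# Toy sketch: match extracted patient fields against hypothetical trial
# eligibility criteria. All names and thresholds are illustrative.
def matches_trial(patient: dict, criteria: dict) -> bool:
    """Return True only if every eligibility criterion is satisfied."""
    checks = [
        patient.get("subtype") in criteria["allowed_subtypes"],
        patient.get("stage") in criteria["allowed_stages"],
        patient.get("positive_nodes", 0) <= criteria["max_positive_nodes"],
    ]
    return all(checks)

patient = {"subtype": "adenocarcinoma", "stage": "II", "positive_nodes": 2}
trial = {
    "allowed_subtypes": {"adenocarcinoma", "squamous"},
    "allowed_stages": {"II", "III"},
    "max_positive_nodes": 3,
}
print(matches_trial(patient, trial))  # True: this patient meets all criteria
```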

“This technology is an extremely promising way to save time and effort, but we should always use it with caution,” Dr. Xie said. “Rigorous and continuous evaluation is essential.”
