NEJM AI’s Post


Screening participants in clinical trials is an error-prone and labor-intensive process that requires significant time and resources. Large language models (LLMs) such as GPT-4 present an opportunity to enhance the screening process with advanced natural language processing.

A new study by Ozan Unlu, MD, et al. evaluates the utility of a Retrieval-Augmented Generation (RAG)–enabled GPT-4 system to improve the accuracy, efficiency, and reliability of screening for a trial involving patients with symptomatic heart failure.

The ongoing Co-Operative Program for Implementation of Optimal Therapy in Heart Failure (COPILOT-HF) trial identifies potential participants through electronic health record (EHR) queries followed by manual reviews by trained but nonlicensed study staff. To determine patient eligibility for the COPILOT-HF study that is not identifiable by structured EHR queries, the authors developed RAG-Enabled Clinical Trial Infrastructure for Inclusion Exclusion Review (RECTIFIER), a clinical note–based, question-answering system powered by RAG and GPT-4.

RECTIFIER performed better than the study staff in determining symptomatic heart failure, with an accuracy of 97.9% versus 91.7% and an MCC of 0.924 versus 0.721, respectively. Overall, the sensitivity and specificity for determining patient eligibility were 92.3% and 93.9% with RECTIFIER, respectively, and 90.1% and 83.6% with the study staff. With RECTIFIER, the single-question approach to determining eligibility resulted in an average cost of 11 cents per patient, and the combined-question approach resulted in an average cost of 2 cents per patient.

LLM-based solutions such as RECTIFIER can significantly enhance clinical trial screening performance and reduce costs by automating the screening process. However, integrating such technologies requires careful consideration of potential hazards and should include safeguards such as final clinician review.

Read the full study results by Ozan Unlu, MD, et al.: https://nejm.ai/4etSvvy

#ArtificialIntelligence #AIinMedicine

  • Figure 2. Comparison of Positive Predictive Value (Precision) and Sensitivity (Recall) of RECTIFIER versus Study Staff.
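For readers curious how a note-based RAG screening step like the one described above can be wired together, here is a minimal, hypothetical sketch. It is not the authors' RECTIFIER code: the model names, chunking, retrieval method, and prompt wording are all assumptions. The idea is simply to embed clinical-note chunks, retrieve the passages most relevant to an eligibility question, and ask GPT-4 to answer from that retrieved context only.

```python
# Hypothetical RAG-style eligibility check (illustrative only, not RECTIFIER):
# embed note chunks, retrieve the most relevant ones, ask GPT-4 a yes/no question.
import numpy as np
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def embed(texts: list[str]) -> np.ndarray:
    """Return one embedding vector per input text."""
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])


def answer_criterion(note_chunks: list[str], question: str, top_k: int = 5) -> str:
    """Retrieve the chunks most similar to the question and ask GPT-4 to answer."""
    chunk_vecs = embed(note_chunks)
    q_vec = embed([question])[0]
    # Cosine similarity between the question and each note chunk.
    sims = chunk_vecs @ q_vec / (
        np.linalg.norm(chunk_vecs, axis=1) * np.linalg.norm(q_vec)
    )
    context = "\n\n".join(note_chunks[i] for i in np.argsort(sims)[::-1][:top_k])
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {
                "role": "system",
                "content": "Answer the screening question using only the supplied "
                           "notes. Reply with 'Yes', 'No', or 'Insufficient information'.",
            },
            {"role": "user", "content": f"Notes:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return resp.choices[0].message.content


# Example single-criterion query (wording is illustrative, not the trial's prompt):
# answer_criterion(chunks, "Does this patient have symptomatic heart failure?")
```

A combined prompt that covers several criteria in one call would need fewer GPT-4 requests per chart than asking each criterion separately, which is consistent with the lower per-patient cost the post quotes for the combined-question approach.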
James Barry, MD, MBA

Physician Leader | Neonatal Critical Care | Quality Improvement | Patient Safety | AI in Healthcare | Co-Founder NeoMIND-AI and Clinical Leaders Group

1mo

NEJM AI and Ozan Unlu, MD: any comments about demographic/racial/ethnic bias with this RECTIFIER model? In our history of clinical trials, lack of enrollment of a diverse study population has been (and remains) a significant problem. Will this LLM solution improve or only worsen that problem?

Leslie Lenert, MD, MS, FACP, FACMI

Healthcare Data Ecosystems for research, population health and public health applications

1mo

While the data are great, have a look at the RAG workflow to understand the impact. It's not that complex, but it is a pipeline approach to reach high-precision identification of potential participants. I think almost any institution with HIPAA-compliant GPT-4 access (i.e., anyone on Azure) could replicate this.
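As a rough illustration of that replication point, the same kind of eligibility query against an institution's Azure OpenAI deployment might look like the sketch below. The endpoint, deployment name, API version, and prompt wording are placeholders, not details from the paper.

```python
# Minimal sketch of an eligibility query against a HIPAA-eligible Azure OpenAI
# deployment; endpoint, key, deployment name, and API version are placeholders.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],  # e.g. https://<resource>.openai.azure.com
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",  # assumed version; check what your resource supports
)

resp = client.chat.completions.create(
    model="gpt-4-screening",  # your Azure deployment name, not the base model name
    messages=[
        {
            "role": "system",
            "content": "Answer the screening question from the supplied notes with Yes or No.",
        },
        {
            "role": "user",
            "content": "Notes: ...\n\nQuestion: Does this patient have symptomatic heart failure?",
        },
    ],
)
print(resp.choices[0].message.content)
</parameter>
```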

Nasim Eftekhari

Exec. Director, Applied AI and Data Science at City of Hope

1mo

RAG seems to work in this application!
