Large language models (LLMs) have shown significant promise in medical research, medical education, and clinical tasks. While acknowledging their capabilities, we face the challenge of striking a balance between defining and holding ethical boundaries and driving innovation in LLM technology for medicine. In a new review, the authors propose a framework, grounded in four bioethical principles, to promote the responsible use of LLMs. This model calls for responsible application of LLMs by three parties — the patient, the clinician, and the systems that govern the LLM itself — and suggests potential approaches to mitigating the risks of LLMs in medicine. This approach allows us to use LLMs ethically, equitably, and effectively in medicine. Read the Review Article “Medical Ethics of Large Language Models in Medicine” by Jasmine Chiat Ling Ong, PharmD, et al.: https://nejm.ai/3VL2vcJ #ArtificialIntelligence #AIinMedicine
So glad to see we are looking at #ethical principles and frameworks around #AI tool innovations. A must-read paper — thanks, NEJM AI.
This is very informative and thorough in explaining what AI can do in patient care.
Very informative
NEJM AI 💯🌎
Nice review Daniel Ting and team! #ResponsibleAI
Very helpful!
Well said!
Very sound and valid points. Even before asking the right questions about using LLMs, we have to consider privacy: patients are the owners of their own medical records; clinicians are the guardians of patients' medical records; and the systems that govern LLMs must be compliant and secure, ensuring privacy even with de-identified patient records. Thank you, NEJM AI, for ensuring such ethical considerations and a framework allowing the use of LLMs equitably, ethically, and effectively.