As nations design regulatory frameworks for medical AI, research and pilot projects are urgently needed to harness AI as a tool to enhance today’s regulatory and ethical oversight processes. Under pressure to regulate AI, policy makers may think it expedient to repurpose existing regulatory institutions to tackle the novel challenges AI presents. However, the profusion of new AI applications in biomedicine — combined with the scope, scale, complexity, and pace of innovation — threatens to overwhelm human regulators, diminishing public trust and inviting backlash.

An article by Barbara J. Evans, PhD, JD, LLM, and Azra Bihorac, MD, MS, FCCM, FASN, explores the challenge of protecting privacy while ensuring access to large, inclusive data resources to fuel safe, effective, and equitable medical AI. Informed consent for data use, as conceived in the 1970s, seems dead, and it cannot ensure strong privacy protection in today’s large-scale data environments. Informed consent has an ongoing role but must evolve to nurture privacy, equity, and trust. It is crucial to develop and test alternative solutions, including those using AI itself, to help human regulators oversee safe, ethical use of biomedical AI and give people a voice in co-creating privacy standards that might make them comfortable contributing their data.

Biomedical AI demands AI-powered oversight processes that let ethicists and regulators hear directly and at scale from the public they are trying to protect. Nations are not yet investing in AI tools to enhance human oversight of AI. Without such investments, there is a rush toward a future in which AI assists everyone except regulators and bioethicists, leaving them behind.

Read the Policy Corner article “Co-creating Consent for Data Use — AI-Powered Ethics for Biomedical AI” by Barbara J. Evans, PhD, JD, LLM, and Azra Bihorac, MD, MS, FCCM, FASN: https://nejm.ai/3KMxwqg #ArtificialIntelligence #AIinMedicine
NEJM AI’s Post
More Relevant Posts
📈+🤖 AI HAS SCALED WAY FASTER THAN GENOMICS

This puts in perspective how far behind the curve we are in thinking through the ethical implications of #AI. ⭐️ Congrats to the team for this important paper. Azra Bihorac, congrats on leading the charge for responsible AI at the University of Florida. #gogators!
In its 2024 report “Ethics and Governance of Artificial Intelligence for Health: Guidance on Large Multi-Modal Models,” the World Health Organization assists Member States in mapping the benefits and challenges associated with the use of large multi-modal models (LMMs) for health and in developing policies and practices for their appropriate development, provision, and use. These principles aim to mitigate risks and maximize benefits across various applications of LMMs in healthcare, such as diagnostic assistance, medical education, and scientific research. Multiple challenges, including bias, cybersecurity, and societal impacts, must be addressed. The governance framework proposed in the report spans the AI value chain — from development and provision to deployment — emphasizing the roles of developers, providers, and governments in managing risks and upholding ethical standards. This report builds on the WHO’s 2021 guidance on the ethics and governance of AI in healthcare, which outlined six consensus principles to guide governments, developers, and providers using AI in healthcare, and reflects insights from several AI experts.
With the Specialist combines medical expertise with technological innovation. Trust in a solution created by doctors, for doctors, and revolutionize your approach to neurological care.
The World Health Organization (WHO) is issuing new guidance on the ethical use and governance of Large Multi-Modal Models (LMMs), a rapidly growing type of generative artificial intelligence (AI) with applications in healthcare. The guidance offers over 40 recommendations for governments, technology companies, and healthcare providers to ensure responsible LMM use, addressing potential benefits and risks. LMMs, capable of processing diverse data inputs like text, videos, and images, mimic human communication and perform tasks not explicitly programmed.

Despite their potential to enhance healthcare, the WHO emphasizes the need for transparent information and policies to manage LMM design, development, and use. The guidance identifies five broad applications of LMMs in health, including diagnosis, patient-guided use, clerical tasks, medical education, and scientific research. Risks such as producing inaccurate or biased information, as well as broader issues like accessibility and affordability, are outlined.

Key recommendations involve stakeholder engagement in all stages of LMM development and deployment, emphasizing governments’ role in setting ethical standards, investing in infrastructure, and enforcing regulations. Developers are urged to involve diverse stakeholders in design and ensure LMMs perform well-defined tasks accurately and reliably. The WHO emphasizes the importance of collaborative efforts to regulate AI technologies, ensuring their safe and ethical use in healthcare. #aiinhealthcare #worldhealthorganisation #aiethics

With the Specialist combines artificial intelligence with a board certified neurologist to enhance neurological care in the primary care setting. #withthespecialist #neurology

https://lnkd.in/gdriTGvj https://lnkd.in/gExrwg2P
Scaling Growth & Operations Expert | Patient Care Quality & Experience Innovator | Leading through Faith
Large Language Models (LLMs) can offer a variety of advantages in the healthcare industry, from enhanced data analysis to decision support. However, they also raise serious ethical concerns, such as the spread of inaccurate medical information, privacy breaches, and bias. The reality is that integrating AI into healthcare is an advantageous but incredibly complex task. To succeed, we must improve transparency, ensure equity, and clearly define ethical guidelines. #AI #Healthcare #AIEthics
Researchers call for ethical guidance on use of AI in healthcare
news-medical.net
Co-Founder, Chief Operating Officer at Censia | Talent Intelligence | Data Aggregation | AI | Ethical Data | Mentorship
The integration of AI in healthcare poses a series of ethical questions (data privacy, informed consent, and algorithmic bias) that similarly resonate within the AI recruitment space. For instance, it raises the question of whether we are protecting the confidentiality and autonomy of candidate data with the same stringent practices we advocate for in patient care. Whether it’s recruitment, healthcare, or any other industry, the success of AI hinges not on the sophistication of the technology itself, but on our commitment to transparent, ethical, and equitable practices. #AI #ethics #dataprivacy
Ethical Considerations in AI-Driven Healthcare
news-medical.net
The potential of AI in healthcare seems limitless, from enhancing diagnostics to personalizing treatment plans. However, with great innovation comes great responsibility. AI's integration comes with ethical challenges, including securing patient data, addressing algorithmic bias, ensuring informed decision-making, and developing comprehensive ethical frameworks. What do you think are the most pressing ethical issues in AI-driven healthcare? #AIinhealthcare #ethicalconsiderations
Ethical Considerations in AI-Driven Healthcare
news-medical.net
Health, AI and the Law | Assistant Dean & Assistant Professor of Law, HBKU Law | Adjunct Assistant Professor of Medical Ethics in Clinical Medicine, Weill Cornell Medicine - Qatar
The AI Act will fail to build trust in healthcare 👩⚕️ What's the solution? 🤺

I'm sharing a pre-print of my new article accepted in the Journal of Law, Innovation and Technology for 2025 📑

I argue that the EU talks a big game 🏏 about trust and trustworthiness in AI, but it fails to deliver for healthcare:
1️⃣ Trust is barely mentioned in the provisions of the Act
2️⃣ The AI Act creates a risk classification system for regulating AI that doesn't address trust in healthcare
3️⃣ The AI Act is built on top of already flawed regulatory systems that lack trust

The solution? Not a fantastic one, but at least it's something:
🌌 The AI Act creates space for voluntary best practices and standards
💡 I propose creating an AI Bill of Rights for healthcare in that space

An AI Bill of Rights for healthcare 📕 could help set standards on:
🔷 Informed consent and explainability
🔷 Data biases, privacy, and security
🔷 Accountability for AI decisions
🔷 AI accuracy

Setting standards in these areas would be a step toward building trust in AI in healthcare.

I am eager for any feedback on this paper. I will update it before it goes into final print next year. Feel free to comment, message, or email me 💬

I am very grateful to Professors Anniek de Ruijter, Barbara Prainsack, and Tamara Hervey for their time and feedback in guiding an earlier draft of this paper.

#ai #trust #trustworthiness #aiact #aia #EU #health #healthcare #medicine #medical #billofrights #law #ethics #bioethics #risk #patients #explainability #informedconsent #consent #data #dataprotection #dataprivacy #datasecurity #accountability #liability #accuracy #blackbox

Taylor & Francis Group Center for Open Science Hamad Bin Khalifa University

https://lnkd.in/dM3RpvbC
The European Union’s Artificial Intelligence Act and Trust: Towards an AI Bill of Rights in Healthcare?
osf.io
With the Specialist combines medical expertise with technological innovation. Trust in a solution created by doctors, for doctors, and revolutionize your approach to neurological care.
The integration of artificial intelligence (AI) in healthcare demands a strong commitment to ethical governance. As AI transforms patient care and administrative efficiency, there's a need for balance between its benefits and ethical development. Despite the potential paradigm shift in healthcare, the rapid evolution of AI raises complex ethical dilemmas, emphasizing the importance of responsible AI development.

AI's multifaceted role in healthcare includes diagnostic precision, tailored treatment plans, improved financial experiences, and better patient outcomes. However, the lack of trust in AI technologies within healthcare settings, with over 60% of patients expressing skepticism, underscores concerns about data privacy, biases, and transparency.

To address these challenges, the Responsible AI Institute introduces the RAISE Benchmarks, evaluating corporate AI policies, addressing AI hallucinations, and ensuring alignment across the supply chain. Navigating regulatory frameworks, aligning with standards, educating professionals, and fostering leadership commitment to ethical AI principles are essential for building trust and accountability in the evolving era of AI in healthcare. The RAISE Benchmarks provide a practical framework to balance benefits, mitigate risks, and build trust in the ethical integration of AI in healthcare. #aiinhealthcare #aiethics

With the Specialist combines artificial intelligence with a board certified neurologist to enhance neurological care in the primary care setting. It’s a tool for primary care providers to help their neurological patients. #withthespecialist #neurology

https://lnkd.in/gtjEdC3v https://lnkd.in/gExrwg2P
Author Post: Ethical AI in Healthcare: A Focus on Responsibility, Trust, and Safety
forbes.com
AI Systems, Prompt Design and Engineering Expert | Partner at Forest Hill Labs | Enhancing Healthcare Technology through Prompt Engineering | AI Strategy Consultant | Google Cloud Innovator | Google Trusted Tester
The Imperative of Ethical AI in Healthcare: Navigating the Future Together

In the realm of healthcare, the intersection of artificial intelligence and ethical considerations is not just a matter of compliance—it’s a foundational pillar for the future of medical innovation and patient care. As we delve into the intricate dance of technology and ethics, it becomes clear that the development and deployment of AI technologies must be guided by more than just the potential for advancement. They must be anchored in the bedrock of ethical responsibility.

Today, we present a unique visual exploration of how AI applications across diagnosis, treatment, monitoring, discovery, and records management align with the ethical considerations that are paramount to their success and societal acceptance.

As we navigate this landscape, it becomes evident that the path forward is not one of avoidance but engagement. Engaging with these ethical dimensions allows us to craft AI systems that not only enhance patient outcomes but do so while safeguarding the fundamental rights and dignities of all involved. It’s about creating a future where technology and humanity converge for the greater good, guided by ethical frameworks that mirror the societal norms which have long steered human behavior.

This journey is not one we take alone. It requires the collective effort of developers, policymakers, healthcare professionals, and society at large. I invite you to join this critical conversation, to share your insights and to help shape the future of ethical AI in healthcare. It’s a journey that promises not just to redefine our technological horizons but to reaffirm our commitment to the values that define us as a society.
🔗 Dive Deeper into the Conversation: Join Us: https://lnkd.in/gvGkGZqY #EthicalAI #HealthcareInnovation #AIinHealthcare #FutureofHealthcare #TechnologyEthics Forest Hill Labs: A Health Market Intelligence, Policy, and Thought Leadership Consultancy Robert Horne Dylan Reid (Moskowitz)
Clinical Studies Team Lead at Ada Health | Physician
Stephen Gilbert