Instant recap: Top healthcare AI stories of the past few weeks

Click here to receive AI in Healthcare newsletters in your inbox and stay informed about healthcare's biggest stories.


From the editor 

Is generative AI more hype than substance?  

No. It’s about equal parts of each. This is only an opinion, but it’s surely a defensible one. On one hand, AI of any stripe is not going to take over the world. On the other, it’s already changing our world. In healthcare specifically, GenAI can help diagnose disease in patients who haven’t even seen a doctor. And for evidence of its potential to complete even tricky administrative tasks, look no further than the class actions against Humana, Cigna and UnitedHealth. All stand accused of using algorithms to concoct plausible but false reasons for denying claims. And if AI were mostly hype, would 85% of healthcare leaders across 14 countries commit big portions of their respective budgets to it? OK, don’t answer that. Until further notice, let it stand as a rhetorical question—one submitted to provoke thought, not to end the conversation.  

Dave Pearson

Editor, AIin.Healthcare


Knowledge workers and the orgs that employ them tend to advance toward full AI adoption in five distinct stages. In order, these are skepticism, activation, experimentation, scaling and maturity. More than half of knowledge workers are already using generative AI to do their jobs, so it’s important to understand these stages. Toward that end, market researchers surveyed more than 5,000 members of the knowledge workforce. The team shaped the responses into a series of questions companies can ask themselves so they can proceed purposefully. Examples: “What issues are top-of-mind for employees regarding AI?” “How do employees ‘collaborate’ with AI?” “How is AI effectiveness and value measured in your organization?” Read AIin.Healthcare’s summary coverage: 5 questions to guide the AI voyage from skepticism to maturity across the enterprise

It’s likely to be quite a while before state and federal governments get around to governing AI in any meaningful way. In the meantime, hospitals and health systems would do well to govern AI themselves. At a baseline, these efforts in “self-regulation” should focus on making sure AI investments are strategically deployed, risks are weighed against benefits and opportunities are rigorously explored. The tips come from UC Davis Health in partnership with the healthcare division of Manatt, a law and professional-services firm based in Los Angeles. Drilling down into specifics, the duo’s list of crucial to-dos includes developing a prioritization process, bringing the right experts to the table, inventorying AI tools already in use and taking a “user-centered design approach.” More: 5 first steps toward do-it-yourself AI governance

An eye-opening 85% of healthcare leaders across 14 far-flung countries are already investing (29%) or plan to invest (56%) in generative AI within the next three years. However, significant cross-country differences persist around how quickly healthcare leaders plan to invest in generative AI. And the variances are consistent with overall differences in speed of adoption of AI for clinical decision support. Philips researchers made the findings while compiling data for the company’s latest Future Health Index report. Among their key conclusions: “Healthcare organizations have a wealth of data but a poverty of insights.” Respondents represented not only the usual participants in the English-speaking world but also Brazil, China, India, Indonesia, Italy, Japan, the Netherlands, Poland, Saudi Arabia and Singapore. More: Healthcare leaders worldwide counting on AI to close ‘critical gaps’ in patient care  

To mitigate risk over time, conduct life-cycle planning for all AI models you plan to use. So advises the FDA’s Digital Health Center of Excellence. In a June 17 blog post, DHCE director Troy Tazbaz suggests that taking the long view can help make sure data suitability, collection and quality “match the intent and risk profile of the AI model” from conception to, presumably, replacement. Tazbaz further notes that quality-assurance measures can positively impact clinical outcomes, and shared responsibility can help ensure success. “These efforts, combined with FDA activities relating to AI-enabled devices,” he maintains, “may lead to a world in which AI in healthcare settings is safe, clinically useful and aligned with patient safety and improvement in clinical outcomes.” More: FDA official: Let’s work together to make healthcare AI work for everyone 

What do you get when you bring together patient advocates, technology developers, clinicians and data scientists? If you bring them together to hammer out a detailed framework on the responsible use of healthcare AI, you might get the CHAI Assurance Standards Guide. CHAI stands for the Coalition for Health AI. The nonprofit group released a first draft of the guide June 26 and has opened it for public comment and refinement. The purpose of the framework—which includes companion checklists of stakeholder to-do’s—is to offer “actionable guidance on ethics and quality assurance” for everyone involved in designing, developing, using and/or monitoring AI in healthcare. More: Coalition for Health AI publishes stakeholder guide, proposes 6-stage AI lifecycle

Improve population health. Reduce healthcare costs. Enhance the patient experience. These imperatives make up the Triple Aim, proposed by the Institute for Healthcare Improvement in 2008 and professed by many if not most healthcare leaders ever since. In 2014 a fourth must-do item gained traction: Maximize job satisfaction for healthcare workers. Today the Quadruple Aim stands as a de facto mission statement for provider orgs of all shapes and sizes. Which raises a newly pressing question: How might generative AI affect U.S. healthcare’s pursuit of its Big Aim by whatever name? Top minds at Microsoft’s AI for Good Lab take up the inquiry in a paper published June 18 by Frontiers in Artificial Intelligence. More: Elusive quadruple aim revisited for the generative AI era

Transparency is not a nicety in AI-enabled medical devices. It’s an essential attribute, and an eminently achievable one, according to new guidance jointly promoted by the FDA, Health Canada and the U.K.’s Medicines and Healthcare products Regulatory Agency. In a document published June 13, the agencies defend their position on transparency’s indispensability from various angles. For starters, they write, transparency “builds fluency and efficiency in the use of MLMDs,” or machine learning-enabled medical devices. Moreover, transparency “can foster trust and confidence in the technology. It encourages adoption of and access to beneficial technologies.” More: How to attain and sustain transparency in medical devices outfitted with AI

The American College of Physicians believes that the development, testing and use of AI in healthcare must be aligned with principles of medical ethics. One might think this should go without saying, but ACP sees the value in codifying the basics so they don’t get lost en route between the present and the future. The group itemized and expounded on its AI views in a paper published by its flagship journal, Annals of Internal Medicine. Another important ACP conviction: Maintaining the patient–physician relationship requires care. “AI should be implemented in ways that do not harm or interfere with this relationship,” the group writes, “but instead enhance and promote the therapeutic alliance between patient and physician.” More: 5 views on AI in healthcare from the American College of Physicians 

Healthcare AI startups are strong where they’re nimble. That’s a hot takeaway from market researchers at Silicon Valley Bank. In a recent report, the team says this attribute is a differentiator because massive adoption and margins are not as crucial for startups as they are for established big tech. Meanwhile “the bottom-up nature of startups makes them suitable for working closely with physicians.” The bank points out that the past five years have been boom times for AI startups courting venture investors in healthcare. This year alone is expected to see more than $11 billion of venture capital invested in healthcare AI—up from $7.2 billion in 2023. And one of every four dollars chasing ROI in the sector goes to companies with AI in their wheelhouse. More: US healthcare is flush with venture investments in AI 

--

Want more? When you finish with our top stories linked above, catch up with quick news bites from the past month or so in AIin.Healthcare’s Industry Watcher’s Digest.

