Harnessing the ‘Once-in-a-Generation’ Advantages of Large-Scale Models for Secure AI in Healthcare

Following Google, OpenAI, and 13 other AI companies, major healthcare organizations have now signed the Biden-Harris Administration’s voluntary commitments for the safe, secure, and trustworthy development and use of artificial intelligence. Announced on December 14, the commitments outline steps for harnessing the potential benefits of large-scale AI models in healthcare while managing their risks and safeguarding patients’ sensitive health information.

In total, 28 healthcare organizations, including providers and payers that use AI technologies, have signed the commitments. Notable signatories include CVS Health, Stanford Health, Boston Children’s Hospital, UC San Diego Health, UC Davis Health, and WellSpan Health.

The commitments aim to channel breakthrough AI models into transformative healthcare improvements. Even before the advent of ChatGPT and other generative AI, AI’s potential role in healthcare, from early disease diagnosis to the discovery of new treatments, was widely discussed. However, concerns about the safety and reliability of AI systems in clinical settings persist. In a GE Healthcare survey, 55% of clinicians said AI technology is not yet ready for medical use, and 58% said they do not trust AI data. Among clinicians with more than 16 years of experience, skepticism rose to 67%.

With these voluntary commitments, the 28 healthcare organizations aim to address that skepticism and develop AI that delivers more coordinated care, better patient experiences, and reduced clinician burnout. They see AI as a unique opportunity to accelerate improvements across the healthcare system, particularly in early cancer detection and prevention, as emphasized in the Biden Administration’s call to action.

To build trust among clinicians and healthcare workers, the organizations have pledged to align their AI projects with the fair, appropriate, valid, effective, and safe (FAVES) principles outlined by the U.S. Department of Health and Human Services (HHS), ensuring their solutions are effective and unbiased in real-world use. They also plan to increase transparency by informing users whenever they receive content that is AI-generated and has not been reviewed by a human. In addition, they will implement a risk management framework that tracks AI-powered applications and accounts for potential risks and mitigation steps.
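As a rough illustration of the transparency pledge, the sketch below shows one way an application might attach a disclosure to AI-generated content that has not been human-reviewed. The `GeneratedContent` record and `with_disclosure` helper are hypothetical names invented for this example; the commitments describe goals, not any particular implementation.

```python
from dataclasses import dataclass

@dataclass
class GeneratedContent:
    """A piece of model output with provenance and review status.
    (Hypothetical structure for illustration only.)"""
    text: str
    ai_generated: bool
    human_reviewed: bool

def with_disclosure(content: GeneratedContent) -> str:
    """Prepend a disclosure when AI-generated content reaches a user
    without human review, in the spirit of the transparency commitment."""
    if content.ai_generated and not content.human_reviewed:
        notice = ("[Notice: this content was generated by AI and has "
                  "not been reviewed by a clinician.]\n")
        return notice + content.text
    return content.text

# Example: a draft patient message that skipped human review
draft = GeneratedContent(
    text="Your lab results are within the normal range.",
    ai_generated=True,
    human_reviewed=False,
)
print(with_disclosure(draft))
```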

The organizations will also establish policies and controls for applications built on AI models, including data acquisition and management. As part of their governance practices, they will maintain an inventory of all applications that use frontier models and set up a risk management framework that defines roles and responsibilities for approving AI use.
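For illustration, here is a minimal sketch of what such an application inventory and approval workflow could look like in code. All names (`AIApplication`, `register`, `approve`, the default approver role) are assumptions made for this example, not terms from the commitments.

```python
from dataclasses import dataclass, field
from enum import Enum

class Status(Enum):
    PENDING = "pending"
    APPROVED = "approved"
    REJECTED = "rejected"

@dataclass
class AIApplication:
    """One entry in the inventory of applications using frontier models."""
    name: str
    model: str                   # identifier of the underlying model
    use_case: str
    risks: list[str] = field(default_factory=list)
    mitigations: list[str] = field(default_factory=list)
    approver_role: str = "AI governance committee"  # who may sign off
    status: Status = Status.PENDING

registry: dict[str, AIApplication] = {}

def register(app: AIApplication) -> None:
    """Track the application; it stays PENDING until formally approved."""
    registry[app.name] = app

def approve(name: str, approved_by: str) -> None:
    """Record the approval decision and who made it."""
    registry[name].status = Status.APPROVED
    registry[name].approver_role = approved_by
```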

Beyond existing implementations, the organizations will continue R&D on health-centric AI innovations with appropriate safeguards, prototyping in non-production environments with test data to maintain privacy compliance. Continuous monitoring, backed by human oversight or dedicated AI evaluation tools, will help ensure applications respond fairly and accurately.
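As a hedged sketch of what continuous monitoring might involve, the snippet below audits a sampled batch of model answers against references and escalates to human review when accuracy drops below a threshold. The function name, the threshold, and the exact-match check are illustrative assumptions; real monitoring would also cover fairness metrics and feed dedicated evaluation tooling.

```python
def audit_accuracy(samples: list[tuple[str, str]],
                   threshold: float = 0.95) -> float:
    """Compute accuracy over sampled (model_answer, reference) pairs
    and flag the batch for human review if it falls below threshold."""
    if not samples:
        return 0.0
    correct = sum(1 for answer, reference in samples if answer == reference)
    accuracy = correct / len(samples)
    if accuracy < threshold:
        # In a real deployment this would open a ticket or page a reviewer.
        print(f"Accuracy {accuracy:.1%} below {threshold:.0%}; "
              f"flagging batch for human review.")
    return accuracy

# Example audit over a tiny sampled batch
audit_accuracy([("positive", "positive"), ("negative", "positive")])
```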

Lastly, the organizations will address issues associated with open-source technology and train their workforces on the safe and effective development and use of AI applications.
