Working group debuts trustworthiness standard for AI in healthcare

Several issues with AI, such as the black-box problem, can “make people suspicious” of the technology when applied to healthcare, according to the Consumer Technology Association (CTA), which has issued a standard identifying core requirements and baselines for AI solutions to be deemed trustworthy.

“AI is providing solutions, from diagnosing diseases to advanced remote care options, for some of healthcare’s most pressing challenges,” Gary Shapiro, president and CEO of CTA, says in a statement. “As the U.S. healthcare system faces clinician shortages, chronic conditions and a deadly pandemic, it’s critical patients and health care professionals trust how these tools are developed and their intended uses.”

The standard covers three areas of trustworthiness: human trust, technical trust and regulatory trust. The first deals with trust between AI developers and users, according to the standard. Developers should make clear what the system can and cannot do, how well it performs and put its abilities into context.

Technical trust relates to how AI systems work: that bias is minimized, data is secure and private, HIPAA requirements are followed, and the source data used to train AI systems is sound. Regulatory trust encompasses topics of interest to regulators, from data privacy laws to FDA and FTC regulations, state medical boards and other legal requirements.

The standard outlines numerous requirements model developers must follow to achieve trustworthiness at each level. It was created by more than 60 organizations convened by CTA and is ANSI accredited. CTA created an initiative on AI in healthcare, and membership in its Artificial Intelligence in Health Care working group has doubled to 64 organizations and member companies over the last two years. The standard is the second in a series focused on implementing medical and healthcare solutions built on AI, according to a press release.

“Establishing these pillars of trust represents a step forward in the use of AI in healthcare,” Pat Baird, regulatory head of global software standards at Philips and co-chair of the working group, says in a statement. “AI can help caregivers spend less time with computers and more time with patients. To get there, we realized that different approaches are needed to gain the trust of different populations, and AI-enabled solutions need to benefit our customers, patients and society as a whole.”

Amy Baxter
