5 ways US regulators could go further to beat back bias in medical AI

Not every piece of AI-bearing software on the healthcare market is subject to FDA approval. But the agency could do more to police regulated products for algorithmic biases that may affect clinical outcomes in vulnerable subpopulations.

Or, as The Pew Charitable Trusts put it in commentary posted Oct. 7:

“Despite a growing awareness of these risks, FDA’s review of AI products does not explicitly consider health equity. The agency could do so under its current authorities and has a range of regulatory tools to help ensure AI-enabled products do not reproduce or amplify existing inequalities within the healthcare system.”

From there Pew healthcare products director Liz Richardson lays out steps the agency could take to correct course.

Richardson says the FDA can and should:

1. Help mitigate the risks of bias by routinely analyzing the data submitted by AI software developers. Such analyses should include checks on the representation of demographic subgroups, including those defined by sex, age, race and ethnicity.

“This would help gauge how the product performed in those populations and whether there were differences in effectiveness or safety based on these characteristics,” Richardson writes. (A rough sketch of what such a subgroup check could look like in practice follows this list.)

2. Choose to reject a product’s application if the agency determines, based on the subgroup analysis, that the risks of approval outweigh the benefits. FDA currently encourages but cannot require software developers to submit subpopulation-specific data as part of their device applications, Richardson points out. Five years ago the agency released guidance on gathering and submitting this kind of data, including how best to report disparities in subgroup outcomes.

“However, it is not clear how often this data is submitted,” Richardson comments, “and public disclosure of this information remains limited.”

3. Require healthcare AI developers to disclose gaps in the diversity of the data used for algorithm training, testing and validation. Clear labeling about potential disparities in product performance could help promote health equity, Richardson suggests.

“This would alert potential users that the product could be inaccurate for some patient populations and may lead to disparities in care or outcomes,” she writes. “Providers can then take steps to mitigate that risk or avoid using the product.”

4. Develop guidance directing its product-review divisions to consider health equity as part of their analyses of AI-enabled devices. There is precedent for this, Richardson notes.

“In June, FDA’s Office of Women’s Health and the Office of Minority Health and Health Equity (OMHHE) took an encouraging step by launching the Enhance Equity Initiative, which aims to improve diversity in the clinical data that the agency uses to inform its decisions and to incorporate a broader range of voices in the regulatory process.”

5. Diversify the range of voices at the table to support more equitable policies through FDA’s ongoing Patient Science and Engagement Initiative. The Center for Devices and Radiological Health, which is responsible for approving medical devices, can spearhead this kind of effort, Richardson suggests.

“Following an October 2020 public meeting on AI and machine learning in medical devices, FDA published an updated action plan that emphasized the need for increased transparency and building public trust for these products,” Richardson writes. “The agency committed to considering patient and stakeholder input as it works to advance AI oversight, adding that continued public engagement is crucial for the success of such products.”
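Circling back to the first recommendation: as a rough illustration of what a subgroup performance check might look like in practice, the Python sketch below computes sensitivity and specificity for each demographic group in a set of model predictions and flags groups that trail the overall figure. The column names, the synthetic data and the 5-percentage-point threshold are illustrative assumptions, not anything specified in Pew’s commentary or FDA guidance.

```python
# Rough sketch of a subgroup performance check for a binary classifier.
# Column names, synthetic data and the 5-point threshold are illustrative
# assumptions, not requirements from Pew's commentary or FDA guidance.
import pandas as pd

def subgroup_performance(df: pd.DataFrame, group_col: str) -> pd.DataFrame:
    """Sensitivity, specificity and sample size for each demographic subgroup."""
    rows = []
    for group, sub in df.groupby(group_col):
        tp = ((sub.y_true == 1) & (sub.y_pred == 1)).sum()
        fn = ((sub.y_true == 1) & (sub.y_pred == 0)).sum()
        tn = ((sub.y_true == 0) & (sub.y_pred == 0)).sum()
        fp = ((sub.y_true == 0) & (sub.y_pred == 1)).sum()
        rows.append({
            group_col: group,
            "n": len(sub),
            "sensitivity": tp / (tp + fn) if (tp + fn) else float("nan"),
            "specificity": tn / (tn + fp) if (tn + fp) else float("nan"),
        })
    return pd.DataFrame(rows)

# Hypothetical submission data: true labels, model predictions and one demographic column.
data = pd.DataFrame({
    "y_true":         [1, 0, 1, 1, 0, 0, 1, 0, 1, 0],
    "y_pred":         [1, 0, 0, 1, 0, 1, 1, 0, 0, 0],
    "race_ethnicity": ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"],
})

by_group = subgroup_performance(data, "race_ethnicity")
overall = subgroup_performance(data.assign(overall="all"), "overall")["sensitivity"].iloc[0]

# Flag subgroups whose sensitivity trails the overall figure by more than 5 percentage points.
flagged = by_group[by_group["sensitivity"] < overall - 0.05]
print(by_group)
print(flagged)
```

In a regulatory review, the same kind of tabulation could be run on the subpopulation-specific data Richardson wants developers to submit, with any flagged gaps feeding into labeling or approval decisions.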

Richardson’s closing argument:

“AI can help patients from historically underserved populations by lowering costs and increasing efficiency in an overburdened health system. But the potential for bias must be considered when developing and reviewing these devices to ensure that the opposite does not occur. By analyzing subpopulation-specific data, calling out potential disparities on product labels and pushing internally for the prioritization of equity in its review process, FDA can prevent potentially biased products from entering the market and help ensure that all patients receive the high-quality care they deserve.”

Read the whole thing.

Dave Pearson

