VIDEO: Understanding biases in healthcare AI


Julius Bogdan, vice president and general manager of the Healthcare Information and Management Systems Society (HIMSS) Digital Health Advisory Team for North America, explains how to validate artificial intelligence (AI) algorithms being implemented across healthcare. 

Validation and testing of all algorithms are needed to eliminate biases in the data used to train the AI. Bogdan also said it is important to test the algorithm against the health system's own data to confirm it performs well on that system's patient population. One example of a bias that might be unintentionally built into an AI is a training dataset that includes only white males over the age of 60 who live in more affluent areas. The training data needs a balance of ethnicities, races, sexes and socio-economic groups to improve the accuracy of the AI and ensure it works across all types of patient populations. 

"The algorithm is just a mathematical equation and is not inherently biased, but what you need to understand on the provider or vendor side is the dataset you are working with," Bogdan said. "What are the biases inherent in that dataset? You need to understand the statistics of your data, such as what percentage of your data is male/female, different ethnicities, and all the different characteristics of your data so you can understand what biases are inherent in that data. Then you can work to address those biases."  
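The kind of dataset audit Bogdan describes, checking what percentage of the data falls into each demographic group, can be sketched in a few lines of Python. This is a minimal illustration, not part of any specific HIMSS tool; the records and field names are hypothetical.

```python
from collections import Counter

# Hypothetical training records; the fields and values are illustrative only.
records = [
    {"sex": "M", "ethnicity": "white", "age": 67},
    {"sex": "F", "ethnicity": "black", "age": 45},
    {"sex": "M", "ethnicity": "white", "age": 72},
    {"sex": "F", "ethnicity": "hispanic", "age": 58},
]

def distribution(records, field):
    """Return each value's share of the dataset for one demographic field."""
    counts = Counter(r[field] for r in records)
    total = sum(counts.values())
    return {value: count / total for value, count in counts.items()}

# Report the demographic makeup of the training data.
for field in ("sex", "ethnicity"):
    print(field, distribution(records, field))
```

A skewed result here, for example 90% of records from one group, is exactly the kind of inherent bias Bogdan says must be understood before the model is trained or deployed.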

As long as you understand the biases, he said, programmers can solve for them mathematically, discount these issues in the algorithm, or augment the data. Bogdan said you need to fix any biases detected, because anything that is overlaid on that algorithm is going to be amplified.
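One common way to "solve for bias mathematically," as Bogdan puts it, is to reweight training samples so under-represented groups count more. The sketch below uses simple inverse-frequency weights; it is one illustrative technique among several, not a method Bogdan specifically endorses.

```python
from collections import Counter

def inverse_frequency_weights(groups):
    """Assign each sample a weight inversely proportional to its group's
    frequency, so minority groups contribute equally in aggregate."""
    counts = Counter(groups)
    n = len(groups)
    k = len(counts)
    return [n / (k * counts[g]) for g in groups]

# Hypothetical group labels for four training samples.
groups = ["white", "white", "white", "black"]
weights = inverse_frequency_weights(groups)
# The single under-represented sample gets a larger weight (2.0) than
# each of the three majority samples (~0.67); total weight stays at 4.
```

Most training libraries accept per-sample weights of this form, so the correction can be applied without discarding any data.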

"We are seeing this not only in healthcare but a ton of other industries," Bogdan said. "So addressing the data quality is really important."

For example, in human resources AI, Bogdan said it has been found that some algorithm-based screening processes for job candidates were unintentionally biased in favor of white candidates because the datasets used to train the algorithms consisted mostly of white job applicants.

In healthcare, he explained, it is important to validate whether an AI algorithm is appropriate for a hospital or health system's patient population.

"AI is not a one-size fits all solution in terms of the algorithm, and it is not just a one-time implementation. You need to have a life-cycle management plan in place for these algorithms, because as your data changes over time, you need to re-evaluate the outcome of the algorithm and adjust the algorithm as necessary," Bogdan said.
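The life-cycle management Bogdan describes, re-evaluating an algorithm as the data changes, can be approximated by comparing the current patient population's demographics against the population the algorithm was validated on. The drift check below, using total variation distance and a hypothetical threshold, is a simplified sketch of one way to trigger that re-evaluation.

```python
def total_variation(p, q):
    """Total variation distance between two categorical distributions,
    given as dicts mapping category -> proportion."""
    keys = set(p) | set(q)
    return 0.5 * sum(abs(p.get(k, 0.0) - q.get(k, 0.0)) for k in keys)

def needs_reevaluation(baseline, current, threshold=0.1):
    """Flag the algorithm for re-validation when the live population has
    drifted too far from the data it was validated on. The threshold is
    an illustrative assumption, not a clinical standard."""
    return total_variation(baseline, current) > threshold

# Hypothetical sex distributions: validation-time vs. current patients.
baseline = {"male": 0.48, "female": 0.52}
current = {"male": 0.70, "female": 0.30}
flag = needs_reevaluation(baseline, current)
```

In practice this check would run on a schedule across every monitored demographic field, with a flag routing the algorithm back to the validation team.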

This is part of a five-part series of interviews with Bogdan on various aspects of AI in healthcare. Here are the other videos in the series:

VIDEO: 9 key areas where AI is being implemented in healthcare

VIDEO: How hospital IT teams should manage implementation of AI algorithms

VIDEO: AI can help prevent clinician burnout

VIDEO: Use of AI to address health equity and health consumerization


Dave Fornell has covered healthcare for more than 17 years, with a focus in cardiology and radiology. Fornell is a 5-time winner of a Jesse H. Neal Award, the most prestigious editorial honors in the field of specialized journalism. The wins included best technical content, best use of social media and best COVID-19 coverage. Fornell was also a three-time Neal finalist for best range of work by a single author. He produces more than 100 editorial videos each year, most of them interviews with key opinion leaders in medicine. He also writes technical articles, covers key trends, conducts video hospital site visits, and is very involved with social media. E-mail: dfornell@innovatehealthcare.com
