MIT researchers work to ‘debias’ AI

Researchers with MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) are working to develop an algorithm that can automatically “debias” the training data behind AI models, addressing a problem that has plagued the technology amid its growing prevalence in the medical field.

“The development and deployment of fair and unbiased AI systems is crucial to prevent unintended discrimination and to ensure the long-term acceptance of these algorithms,” wrote MIT doctoral student Alexander Amini and colleagues in their paper. “We envision that the proposed approach will serve as an additional tool to promote systematic, algorithmic fairness of modern AI systems.”

The team’s ongoing work was presented last month at the Conference on Artificial Intelligence, Ethics and Society in Hawaii. According to the paper, the team’s deep-learning algorithm can simultaneously learn a desired task and the underlying latent structure of the training data, meaning the algorithm can examine a dataset, learn what’s hidden inside it and automatically resample it to be more fair, without programmer intervention.
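
To give a concrete picture of what learning a task and the latent structure at the same time can look like, the sketch below shows one network trained on a single objective that sums a supervised task loss with a VAE loss (reconstruction plus KL divergence). It is an illustrative Python/NumPy mock-up under those assumptions; the function name and the weighting scheme are chosen here for illustration and are not the paper’s exact formulation.

```python
import numpy as np

def joint_debiasing_loss(y_true, y_pred, x, x_recon, mu, logvar, c=1.0):
    """Illustrative combined objective (not the authors' code): a
    supervised task loss plus a VAE term, so one network learns the
    task and the latent structure of the data simultaneously."""
    eps = 1e-12
    # Supervised task loss: binary cross-entropy for, e.g., face detection.
    task = -np.mean(y_true * np.log(y_pred + eps)
                    + (1 - y_true) * np.log(1 - y_pred + eps))
    # VAE reconstruction loss: how well the decoder rebuilds the input.
    recon = np.mean((x - x_recon) ** 2)
    # KL divergence pulling the encoder's q(z|x) toward a standard normal prior.
    kl = -0.5 * np.mean(1 + logvar - mu ** 2 - np.exp(logvar))
    # c balances task accuracy against latent-structure learning (assumed).
    return task + c * (recon + kl)
```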

Learning the latent structure allows researchers to uncover hidden or implicit biases within the training data, the authors explained.

“Our algorithm, which is built on top of a variational autoencoder (VAE), is capable of identifying underrepresented examples in the training dataset and subsequently increases the probability at which the learning algorithm samples these data points,” Amini et al. wrote.
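
The paper describes the full procedure; as a rough illustration of that resampling step, here is a minimal sketch in plain Python/NumPy. The function name, the per-dimension histogram density estimate and the smoothing parameter are illustrative assumptions rather than the authors’ code: each training example gets a sampling weight inversely proportional to the estimated density of its latent representation, so underrepresented examples are drawn more often.

```python
import numpy as np

def debias_sampling_weights(latents, bins=10, alpha=0.01):
    """Hypothetical helper: upweight examples whose latent codes sit in
    sparse (underrepresented) regions of the latent space.

    latents : (N, D) array of latent means from a trained VAE encoder.
    bins    : histogram bins per latent dimension (density estimate).
    alpha   : smoothing so common examples keep a nonzero weight.
    """
    n, d = latents.shape
    density = np.ones(n)
    # Approximate the joint latent density as a product of per-dimension
    # histogram densities (a simplification chosen for this sketch).
    for j in range(d):
        hist, edges = np.histogram(latents[:, j], bins=bins, density=True)
        idx = np.clip(np.digitize(latents[:, j], edges[1:-1]), 0, bins - 1)
        density *= hist[idx] + 1e-12
    weights = 1.0 / (density + alpha)   # rare latent regions -> larger weight
    return weights / weights.sum()      # normalize into sampling probabilities

# Example: draw a minibatch so underrepresented examples appear more often.
rng = np.random.default_rng(0)
z = rng.normal(size=(1000, 8))                # stand-in for VAE latent codes
p = debias_sampling_weights(z)
batch_idx = rng.choice(len(z), size=32, p=p)  # adaptively resampled batch
```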

When the algorithm was applied to a facial-detection dataset, the model showed increased classification accuracy and decreased categorical bias across race and gender, cutting categorical bias by more than 60 percent.

“Facial classification in particular is a technology that’s often seen as ‘solved,’ even as it’s become clear that the datasets being used often aren’t properly vetted,” Amini said in a prepared statement. “Rectifying these issues is especially important as we start to see these kinds of algorithms being used in security, law enforcement and other domains.”

With the recent surge in AI-based products, ensuring AI algorithms don’t perpetuate clinical biases has been widely discussed across the industry. Several entities have made efforts to address the issue, including the American Medical Association and the University of Guelph in Ontario, Canada.

""
