3 ways to solve the bias problem in AI

As AI systems leave laboratories and are implemented in real-world settings, bias will continue to be “an increasingly widespread problem.” So how can it be solved?

Researchers at IBM are currently working on automated bias-detection algorithms to combat the problem, but the solution may not lie in AI alone. According to a report published in Forbes, the problem likely runs deeper: societal bias may be the real issue.

Across healthcare, bias is a well-documented issue in medicine.

Rumman Chowdhury, PhD, artificial intelligence lead at Accenture, noted that societal bias can still throw a wrench into situations where the data and algorithms themselves are clean. She listed three steps organizations can take to minimize the impact of societal bias.

  1. Examine the algorithms and ensure they are not coded in a way that extends bias.
  2. Consider whether AI can help reduce the risk of biased data, similar to what IBM is trying to accomplish (a rough sketch of one such check follows this list).
  3. Regulate AI and design the proper parameters for it to operate within, teaching algorithms which data is valid, valuable and ethical to learn from.
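
The second step points toward automated checks of the kind IBM is building. As a rough illustration only, and not a description of IBM’s actual tooling, the Python sketch below computes a simple demographic parity gap: the difference in positive-prediction rates between patient groups. The column names, sample data and review threshold are all assumptions made for this example.

```python
import pandas as pd

def demographic_parity_gap(df: pd.DataFrame, group_col: str, pred_col: str) -> float:
    """Return the gap between the highest and lowest positive-prediction
    rates across groups. A large gap can flag possible bias for human review."""
    rates = df.groupby(group_col)[pred_col].mean()
    return float(rates.max() - rates.min())

# Hypothetical example: model predictions for a treatment recommendation,
# broken out by a demographic attribute. Columns and values are illustrative only.
predictions = pd.DataFrame({
    "sex": ["F", "F", "M", "M", "F", "M"],
    "recommended": [1, 0, 1, 1, 0, 1],
})

gap = demographic_parity_gap(predictions, group_col="sex", pred_col="recommended")
if gap > 0.2:  # an arbitrary review threshold, not a clinical standard
    print(f"Possible bias: positive-rate gap of {gap:.2f} across groups")
else:
    print(f"Positive-rate gap of {gap:.2f} is within the review threshold")
```

A check like this only flags a dataset or model for human review; Chowdhury’s third step, setting the parameters an algorithm operates within, determines how such a flag is acted on.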

The full report is available from Forbes.

""

