AI needs more diverse training data to avoid bias
AI has the potential to disrupt the healthcare industry and improve patient outcomes through faster diagnosis and more accurate, targeted treatment. But the way AI algorithms are trained needs improvement, according to Naga Rayapati, founder and CEO/CTO of online marketplace GoGetter, who made the case in an article for Forbes.
Specifically, the data that AI systems are trained on needs to be more diverse to keep bias out of the resulting algorithms. In healthcare, that bias can inadvertently harm patients through discrimination.
“AI companies have a moral obligation to their customers, and to themselves, to actively address data bias,” Rayapati wrote.
Failing to address bias in the AI space could have detrimental consequences, including possible rejection of the technology and sub-par products. Bias could also carry legal implications in the future, according to Rayapati.
While machine learning systems are not biased in themselves, the data used to build their algorithms can carry built-in bias. For example, an AI system used to assist with sentencing guidelines disproportionately recommended stricter sentences for minorities, Rayapati wrote. To keep bias out of AI, the problem must be addressed when the data is collected and curated. Above all, the data must be diverse.
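The kind of check Rayapati describes can begin at the data-curation stage. Below is a minimal sketch, not drawn from the article, of how a team might audit a dataset for underrepresented groups before training a model; the pandas DataFrame, the hypothetical "ethnicity" column, and the 5% threshold are all illustrative assumptions.

```python
# Minimal sketch of a representation audit run before model training.
# The column name and threshold are illustrative, not prescriptive.
import pandas as pd

def audit_representation(df: pd.DataFrame, group_col: str, min_share: float = 0.05):
    """Return each group's share of the data and flag groups below min_share."""
    shares = df[group_col].value_counts(normalize=True)
    underrepresented = shares[shares < min_share]
    return shares, underrepresented

if __name__ == "__main__":
    # Toy records standing in for a real training dataset.
    records = pd.DataFrame({
        "ethnicity": ["A"] * 90 + ["B"] * 8 + ["C"] * 2,
        "outcome":   [1] * 50 + [0] * 50,
    })
    shares, flagged = audit_representation(records, "ethnicity")
    print("Group shares:\n", shares)
    if not flagged.empty:
        print("Underrepresented groups (consider collecting more data):\n", flagged)
```

A check like this only surfaces gaps in representation; deciding how to fill them, by collecting or curating more diverse data, is the step Rayapati argues companies are obligated to take.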