AI has a ‘diversity disaster’: 5 things to know

The field of AI is currently at the center of a far-reaching diversity crisis, according to a report issued this April.

The report, authored by three women at the AI Now Institute at New York University, is the culmination of a yearlong pilot study into the scale of AI’s “diversity disaster.” Sarah Myers West and her co-authors said the study drew on current research and literature reviews to examine the intersection of gender, race and power in the field.

“Given decades of concern and investment to redress the imbalances, the current state of the field is alarming,” the authors wrote. “The diversity problem is not just about women. It’s about gender, race and, most fundamentally, about power. It affects how AI companies work, what products get built, who they are designed to serve and who benefits from their development.”

This is what Myers West and her colleagues found about the state of AI in 2019:

1. The numbers aren’t adding up.

The authors said recent studies have found that just 18% of authors at leading AI conferences are women and that more than 80% of AI professors are men. Even at major companies like Facebook and Google, women comprise just 15% and 10% of the AI research staff, respectively. Black researchers are also underrepresented, making up just 2.5% of Google’s workforce and 4% of Facebook’s and Microsoft’s, and no public data exist on transgender workers or other gender minorities.

Myers West and her co-authors suggested firms publish their compensation levels, including bonuses and equity, across all roles, races and genders, and work to end pay and opportunity inequality. They also recommended companies publish detailed harassment and discrimination transparency reports.

2. AI innovators need to admit there’s a problem.

Myers West et al. said the AI sector needs a “profound shift in how it addresses the current diversity crisis,” since research has shown that bias in AI systems reflects historical patterns of discrimination and the field has failed to acknowledge the gravity of the problem. The authors suggested AI leaders admit that existing diversification efforts have failed so the field can move forward with more feasible solutions.

“The field of research on bias and fairness needs to go beyond technical debiasing to include a wider social analysis of how AI is used in context,” they wrote. “This necessitates including a wider range of disciplinary expertise.”

3. Focusing on women alone won’t solve the issue.

The authors said the majority of AI studies, like many medical studies, still assume gender is binary, assigning individuals as male or female based on physical appearance and stereotypes. That framing disregards other gender identities, and even if the field focuses on growing the number of women in the tech industry, those efforts will likely prioritize white women.

Myers West and her colleagues said it’s important to increase the number of people of color, women and other underrepresented groups in senior leadership positions across all departments of an AI company.

4. Efforts to fix the AI “pipeline” haven’t succeeded, and they likely won’t.

Myers West et al. referenced “decades” of pipeline studies assessing the flow of diverse job candidates from school to industry, noting that despite all that research, the field has made no substantial progress on diversity.

“The focus on the pipeline has not addressed deeper issues with workplace cultures, power asymmetries, harassment, exclusionary hiring practices, unfair compensation and tokenization that are causing people to leave or avoid working in the AI sector altogether,” they wrote.

The authors suggested AI companies change their hiring practices to maximize diversity by recruiting beyond elite universities, ensuring a more equitable focus on underrepresented groups and creating more pathways for contractors, temps and vendors to become full-time employees. They also said leaders should commit to more transparency around hiring and promotion practices.

5. Commercial tools for classifying and predicting race and gender are damaging.

A host of existing AI algorithms claim to predict criminality based on facial features, assess worker competence by scrutinizing “micro-expressions” and detect sexuality from headshots, but the authors said those tools simply perpetuate stereotypes. Such systems replicate patterns of racial and gender bias that have been present in science for hundreds of years, deepening and justifying historical inequality.

The authors said the most effective thing AI companies could do right now is increase their level of transparency.

“Remedying bias in AI systems is almost impossible when these systems are opaque,” they wrote. “Transparency is essential and begins with tracking and publicizing where AI systems are used and for what purpose.”

""


