AI’s black box could help or hinder the anti-COVID cause

A generation gap may be emerging among the computer scientists and software developers rushing to weaponize AI for the fight against COVID.

Seasoned stakeholders see little reason to hesitate: it’s best to let AI be AI, black boxes and all. Humans, some on this side argue, can’t retrace every muscle twitch of every step in their decision-making odysseys. Isn’t the whole point of developing artificial neural networks to get them to do what biological ones can do?

And as long as the unexplained rationale produces sensible results, why insist on a blow-by-blow account of the dispassionate computations?

Their younger peers are inclined to be more wary, concerned that biased data and variables might yield algorithms just as discriminatory as people, even when programmers’ intentions are undeniably good.

And at a time when our society is doing so much to root out inequitable distributions of access and resources, it’s imperative to counterbalance AI’s cold calculations with human powers of empathy.

Isn’t it? 

The Israel-based writer, filmmaker and game designer Boaz Lavie gives the gathering conundrum a careful look in an essay running in the Israeli newspaper Haaretz.

To capture the older set’s mindset, Lavie considers the outspoken opinions of Geoffrey Hinton, the cognitive psychologist and computer scientist known as the “godfather of deep learning.”

“[T]he ardor with which Hinton defends the black box’s lack of transparency goes beyond reasons of principle,” Lavie writes. “It derives also from his desire to somehow preserve these algorithms within the realms of engineers and mathematicians—and not social scientists or lawmakers.”

Meanwhile younger scientists “see something rather different in these algorithms, not having been present at the creation, as it were,” Lavie observes. “From their point of view, these tools are meant to work alongside us, but not be subservient to us, as was the case in the 19th and 20th centuries. Accordingly, human standards must be applied to them.”

En route to working through the piece’s central question—it’s headlined “Algorithms Can Help Fight COVID-19. But at What Cost?”—Lavie walks the reader through a worthwhile primer tracing AI’s decades-long travels to these pandemic times.

It’s not clear the divide he describes shakes out along generational lines. In any case, his observations are intriguing. Click here to read the whole thing.

Dave Pearson

Dave P. has worked in journalism, marketing and public relations for more than 30 years, frequently concentrating on hospitals, healthcare technology and Catholic communications. He has also specialized in fundraising communications, ghostwriting for CEOs of local, national and global charities, nonprofits and foundations.

Around the web

The tirzepatide shortage that began in 2022 has been resolved. Drug companies distributing compounded versions of the popular drug now have two to three more months to move their remaining supply.

The 24 members of the House Task Force on AI—12 reps from each party—have posted a 253-page report detailing their bipartisan vision for encouraging innovation while minimizing risks. 

Merck sent Hansoh Pharma, a Chinese biopharmaceutical company, an upfront payment of $112 million to license a new investigational GLP-1 receptor agonist. There could be many more payments to come if certain milestones are met.