Too much too soon? Elon Musk, other tech leaders urge pause on AI development to review ethics

More than 1,100 industry leaders have banded together in an open letter urging a pause on “giant” AI experiments. 

Specifically, the letter, signed by well-known figures including Elon Musk, Apple co-founder Steve Wozniak, and numerous professors and researchers, asks AI labs to immediately pause for at least six months the training of AI systems more powerful than GPT-4, OpenAI's latest multimodal large language model.

The letter comes as AI is gaining hype for its potential across industries, including healthcare, and attracting millions in investments. Healthcare is one of the biggest areas of opportunity for AI, with more than 500 health AI algorithms cleared by the Food and Drug Administration (FDA) over the last several years. So far, AI in healthcare has been used mostly for radiology and imaging, as well as administrative tasks. However, there is potential for broader use, even as patients may be hesitant to let AI direct their healthcare decisions, according to one recent study.

According to the signatories, AI is developing too fast, before important questions about its impact on humanity have been answered. For example, the letter argues we must first answer these questions: “Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?”

Currently, the letter observes, the answers to those questions rest with the technology leaders conducting big AI experiments. Pausing current AI projects would give humanity time to answer them. The signatories asked AI labs to implement the pause voluntarily and, if they will not, for governments to step in and institute a moratorium.

“Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” the letter stated.

The pause would also give scientists time to create a shared set of safety protocols for advanced AI design and development that could be “rigorously” audited and checked by outside experts. The letter also called for regulatory authorities dedicated to AI, oversight and tracking of AI systems, liability in the case of AI harm, public funding for AI safety research, and resources to deal with the economic and political upheaval AI will cause.

See the letter here. 

 

Amy Baxter

Amy joined TriMed Media as a Senior Writer for HealthExec after covering home care for three years. When not writing about all things healthcare, she fulfills her lifelong dream of becoming a pirate by sailing in regattas and enjoying rum. Fun fact: she sailed 333 miles across Lake Michigan in the Chicago Yacht Club "Race to Mackinac."
