Too much too soon? Elon Musk, other tech leaders urge pause on AI development to review ethics
More than 1,100 industry leaders have banded together in an open letter urging a pause on “giant” AI experiments.
Specifically, the letter, signed by well-known figures across industries, including Elon Musk, Apple co-founder Steve Wozniak, and numerous professors and researchers, asks AI labs to immediately pause for at least six months the training of AI systems more powerful than GPT-4, OpenAI's fourth-generation multimodal large language model.
The letter comes as AI is gaining hype for its potential across industries and attracting millions in investments. Healthcare is one of the biggest areas of opportunity for the technology, with more than 500 health AI algorithms cleared by the Food and Drug Administration (FDA) over the last several years. So far, AI in healthcare has mostly been used for radiology and imaging, as well as administrative tasks. There is potential for broader use, though patients may be hesitant to let AI direct their healthcare decisions, according to one recent study.
According to the letter's signatories, AI is developing too fast, before important questions about its societal impact have been answered. For example, the letter argues that we must first answer these questions: “Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?”
Currently, the answers to those questions rest in the hands of the technology leaders conducting large AI experiments, the letter observed. Pausing current AI projects would give humanity time to answer them. The signatories asked AI labs to implement the pause voluntarily and, if they will not, for governments to step in and institute a moratorium.
“Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” the letter stated.
The pause would also enable scientists to create a shared set of safety protocols for advanced AI design and development that could be “rigorously” audited and checked by outside experts. The letter also called for regulatory authorities dedicated to AI, oversight and tracking of AI systems, liability for harm caused by AI, public funding for AI safety research, and resources to manage the economic and political upheaval AI may bring.