Too much too soon? Elon Musk, other tech leaders urge pause in AI development to review ethics

More than 1,100 industry leaders have banded together in an open letter urging a pause on “giant” AI experiments. 

Specifically, the letter, signed by well-known figures across industries, including Elon Musk, Apple co-founder Steve Wozniak, and numerous professors and researchers, asks AI labs to immediately pause for at least six months the training of AI systems more powerful than GPT-4, OpenAI's latest multimodal large language model.

The letter comes as AI is gaining hype for its potential across industries and attracting millions in investments. Healthcare is one of the biggest areas of opportunity for AI, with more than 500 health AI algorithms cleared by the Food and Drug Administration (FDA) over the last several years. So far, AI in healthcare has been used mostly for radiology and imaging, as well as administrative tasks. However, there is potential for further use, even as patients may be hesitant to let AI direct their healthcare decisions, according to one recent study.

According to the signatories, AI is developing too fast, before society has answered important ethical questions. For example, the letter argues we must first answer these questions: “Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?”

Currently, the answers to those questions rest with the technology leaders conducting large AI experiments, the letter observed. Pausing current AI projects would give humanity time to answer them. The signatories asked AI labs to implement the pause voluntarily and, if they will not, for governments to step in and institute a moratorium.

“Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” the letter stated.

The pause would also enable scientists to create a shared set of safety protocols for advanced AI design and development that could be “rigorously” audited and checked by outside experts. The letter also called for regulatory authorities dedicated to AI, oversight and tracking of AI systems, liability in the case of AI harm, public funding for AI safety research, and resources to deal with the upheaval of AI in the economy and political environment.

See the letter here. 


Amy Baxter

Amy joined TriMed Media as a Senior Writer for HealthExec after covering home care for three years. When not writing about all things healthcare, she fulfills her lifelong dream of becoming a pirate by sailing in regattas and enjoying rum. Fun fact: she sailed 333 miles across Lake Michigan in the Chicago Yacht Club "Race to Mackinac."
