4 recommendations to combat malicious use of AI

Artificial intelligence (AI) has raised plenty of eyebrows across healthcare, including in radiology, public health and digital health. But a new 100-page report from some of the leading organizations in AI development points to potential problems with the technology, including cyberattacks, drone misuse and large-scale surveillance.

The document, titled “The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation,” was assembled by 26 authors from 14 institutions, including the University of Cambridge, OpenAI and the Center for a New American Security.

“Less attention has historically been paid to the ways in which artificial intelligence can be used maliciously,” wrote first author Miles Brundage, an AI policy research fellow with the Future of Humanity Institute, and colleagues. “This report surveys the landscape of potential security threats from malicious uses of artificial intelligence technologies, and proposes ways to better forecast, prevent, and mitigate these threats. We analyze, but do not conclusively resolve, the question of what the long-term equilibrium between attackers and defenders will be. We focus instead on what sorts of attacks we are likely to see soon if adequate defenses are not developed.”

The team attempted to answer a straightforward, but hardly simple, question:

How can we forecast, prevent, and (when necessary) mitigate the harmful effects of malicious uses of AI?

The researchers outline a wide array of potential harms from the misuse of AI and machine learning, including data poisoning attacks, AI used to prioritize cyberattack targets, and swarming attacks launched from distributed networks.
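To make the first of those concrete: in a data poisoning attack, an adversary corrupts a model’s training data so the resulting model misbehaves on clean inputs. The sketch below is not drawn from the report; it is a minimal illustration assuming a label-flipping attacker, a stock scikit-learn dataset and classifier, and arbitrary flip fractions, showing how corrupting a slice of the training labels degrades accuracy on untouched test data.

```python
# Toy label-flipping data poisoning demo (illustrative assumptions throughout:
# dataset, model, and flip fractions are not from the report).
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

def accuracy_after_poisoning(flip_fraction: float) -> float:
    """Flip the labels of a random fraction of training rows, then evaluate
    the retrained model on the (unpoisoned) test set."""
    rng = np.random.default_rng(0)
    y_poisoned = y_train.copy()
    n_flip = int(flip_fraction * len(y_poisoned))
    idx = rng.choice(len(y_poisoned), size=n_flip, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]  # binary labels: 0 <-> 1
    model = LogisticRegression(max_iter=5000).fit(X_train, y_poisoned)
    return model.score(X_test, y_test)

for frac in (0.0, 0.2, 0.4):
    print(f"{frac:.0%} of training labels flipped -> "
          f"test accuracy {accuracy_after_poisoning(frac):.3f}")
```

The attacker never touches the model or the test data; corrupting the training pipeline alone is enough to drag performance down, which is what makes this class of attack hard to detect after the fact.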

The researchers offered four high-level recommendations to adapt to evolving threats related to AI:

1. Policymakers should collaborate with researchers to prevent and mitigate malicious uses of AI.

2. Researchers and engineers should consider potential misuse of their work and reach out to relevant parties when harmful applications are foreseeable.

3. Best practices should be developed for addressing dual-use concerns.

4. Developers should expand the range of stakeholders and experts in discussing these concerns.

“There remain many disagreements between the co-authors of this report, let alone amongst the various expert communities out in the world,” the authors wrote. “Many of these disagreements will not be resolved until we get more data as the various threats and responses unfold, but this uncertainty and expert disagreement should not paralyze us from taking precautionary action today. Our recommendations can and should be acted on today: analyzing and (where appropriate) experimenting with novel openness models, learning from the experience of other scientific disciplines, beginning multi-stakeholder dialogues on the risks in particular domains, and accelerating beneficial research on myriad promising defenses.”

The report is available for download here.

""
Nicholas Leider, Managing Editor

Nicholas joined TriMed in 2016 as the managing editor of the Chicago office. After receiving his master’s from Roosevelt University, he worked in various writing/editing roles for magazines ranging in topic from billiards to metallurgy. Currently on Chicago’s north side, Nicholas keeps busy by running, reading and talking to his two cats.
