How should AI be regulated?

As AI technologies continue to evolve around the world, it has become increasingly evident that proper regulation is necessary. The wrong kind of regulation, however, could hinder the advancement of AI.

Asheesh Mehra, co-founder and CEO of the AI and robotics company Antworks, wrote a commentary on this topic for The Next Web.

“The time to talk about AI regulation is now,” Mehra wrote. “However, talking about regulating AI as a technology would be detrimental to societal progression, and it would prove difficult for any government to stop its implementation. But regulations around its application could prove vital in the future.”

Mehra listed healthcare as one of the many “drastic improvements and advancements” AI will be responsible for in the years ahead, noting that the technology will spark new research and quicker clinical trials. He also added that “the application requirements for AI in healthcare” are far different from the requirements for other industries, a key reason why regulating the application of AI makes so much more sense than just regulating the actual technology.

Another key component of this policy debate, Mehra explained, is the risk that exists of the technology being used “with malicious intent.”

“Most cyber-experts predict that cyberattacks powered by AI will be one of the biggest challenges of the 2020s, which means that regulations and preventative measures should be implemented as with any other industry: designed specifically for the application,” he wrote.

The full commentary is available on The Next Web’s website.

Michael Walter, Managing Editor

Michael has more than 16 years of experience as a professional writer and editor. He has written at length about cardiology, radiology, artificial intelligence and other key healthcare topics.
