European parliamentarian: ‘Who is liable if an AI-based diagnosis is incorrect?’
Emerging technologies like AI and robotics have vast potential to improve healthcare. Few question this. What remains unclear is how meaningful the advances will be to healthcare providers and, more to the point, the patients they serve.
Given the uncertainty, policymakers must step up their efforts to steer technology developers in the right direction, beginning with measures to ensure data privacy and security.
Tiemo Wölken, a German politician who serves on the European parliament’s legal affairs committee, makes the case and outlines a way forward in an opinion piece published Sept. 13 in The Parliament.
Wölken asks: Who is liable if an AI-based diagnosis is incorrect? Who is held accountable if a robot-assisted surgery goes wrong? Is it the doctor, the manufacturer or the patient who consented to the treatment?
“As a lawyer, I believe it is important to point out that the legal liability for damage is a central issue in the health sector where the use of AI is concerned,” he writes. “Furthermore, AI-based systems need to be neutral and fair in order to ensure a non-biased outcome. … It is [policymakers’] responsibility to establish a framework that fosters trust, safeguards data privacy and sees to it that data ownership remains with the patient.”
Read the whole opinion piece in The Parliament.