UK government introduces AI ‘code of conduct’

The U.K. Department of Health and Social Care recently issued a new code of conduct for AI systems in an effort to ensure that only the best and safest AI-based systems are used by the National Health Service (NHS). The 10 principles that make up the code of conduct were established in late 2018 and drafted with input from industry thought leaders, academics and patient groups.

The code of conduct will serve as a guide for what the NHS expects from organizations that develop AI systems. It was developed so organizations can “meet a gold-standard set of principles to protect patient data to the highest standards.”

The U.K.’s commitment to AI-based technologies has led to the investment of more than £1.3 billion (about $1.6 billion) to support AI-based healthcare innovations. Additionally, the U.K. government recently announced it will open five technology centers dedicated to using AI to improve disease diagnosis.

“Parts of the NHS have already shown the potential impact AI could have in the future of the NHS in reading scans, for example, to enable clinicians to focus on the most difficult cases,” Simon Eccles, MD, chief clinical information officer for Health and Care at the NHS, said in a prepared statement. “This new code sets the bar companies will need to meet to bring their products into the NHS so we can ensure patients can benefit from not just the best new technology, but also the safest and most secure.”

At present, AI-based technologies are being used across the NHS in a variety of ways, including improving the early diagnosis of cardiovascular disease and lung cancer. AI is also being used to reduce the number of unnecessary operations patients undergo as a result of false positives.

The principles include:

  1. Understand users, their needs and the context
  2. Define the outcome and how the technology will contribute to it
  3. Use data that is in line with appropriate guidelines for the purpose for which it is being used
  4. Be fair, transparent and accountable about what data is being used
  5. Make use of open standards
  6. Be transparent about the limitations of the data used and algorithms deployed
  7. Show what type of algorithm is being developed or deployed, the ethical examination of how the data is used, how its performance will be validated and how it will be integrated into health and care provision
  8. Generate evidence of effectiveness for the intended use and value for money
  9. Make security integral to the design
  10. Define the commercial strategy

“Artificial intelligence has the potential to save lives, but also brings challenges that must be addressed,” Matt Hancock, U.K. secretary of state for Health and Social Care, said. “We need to create an ecosystem of innovation to allow this type of technology to flourish in the NHS and support our incredible workforce to save lives, by equipping clinicians with the tools to provide personalized treatments.”

""

