AI innovators lauded for sharing data, questioned for making it open-access

Testing a novel deep-learning algorithm for detecting tumors in data-rich 3D breast imaging, researchers at Duke University found their model delivered only so-so performance on the task at hand.

However, in developing and announcing the system, they have succeeded in demonstrating considerable “scientific spirit” by generously showing their work and making it freely available.

The commendation comes from Joann Elmore, MD, MPH, of UCLA and Christoph Lee, MD, MBA, of the University of Washington. Their remarks were posted Aug. 16 in JAMA Network Open as an invited commentary on a study published the same day.

In the study itself, Mateusz Buda, MSc, Maciej Mazurowski, PhD, and colleagues at Duke describe their work building a deep-learning system with more than 22,000 digital breast tomosynthesis (DBT) image volumes from more than 5,000 patients. The algorithm had a sensitivity of 65%.

Unexceptional though that was, the team achieved all its stated objectives:

  • Curate, annotate and make publicly available a large-scale dataset of digital breast tomosynthesis (DBT) images to facilitate the development and evaluation of artificial intelligence algorithms for breast cancer screening;
  • Develop a baseline deep learning model for breast cancer detection; and
  • Test the model using the dataset to serve as a baseline for future research.

Along with a curated and annotated image dataset, the share includes code, network architecture and trained model weights.

In their discussion, Buda et al. comment that the experimental AI resources they’ve made publicly available represent

“a challenging but realistic benchmark for the future development of methods for detecting masses and architectural distortions in DBT volumes. These factors, including different types of abnormal results, exclusions of different types of cases, and different evaluation metrics, make it difficult to compare our method with those previously presented in the literature. This further underlines the importance of the dataset shared in this study.”

The accompanying commentary by Elmore and Lee isn’t all cheers.

On the contrary, it builds on and underscores several limitations acknowledged by Buda and colleagues.

For one, they write, opening access to the experimental dataset “brings up the issue of patient privacy concerns and the ethics of sharing patients’ medical image data with those who stand to potentially benefit from future commercial development of algorithms using these images.”

More:

“Although the study by Buda et al. does not exceed the performance of already available AI algorithms for screening mammography, the positive outcome remains their attempt to openly share data. However, datasets made public must be of better quality and representative of a screening population to be truly useful. Future models will otherwise risk being trained and tested on the wrong ground truth.”

The study and the accompanying commentary are both available in JAMA Network Open.

Dave Pearson

Dave P. has worked in journalism, marketing and public relations for more than 30 years, frequently concentrating on hospitals, healthcare technology and Catholic communications. He has also specialized in fundraising communications, ghostwriting for CEOs of local, national and global charities, nonprofits and foundations.

