AI innovators lauded for sharing data, questioned for making it open-access

Testing a novel deep-learning algorithm for detecting tumors in data-rich 3D breast imaging, researchers at Duke University found their model delivered only so-so performance at the task at hand.

However, in developing and announcing the system, they demonstrated considerable “scientific spirit” by generously showing their work and making it freely available.

The commendation comes from Joann Elmore, MD, MPH, of UCLA and Christoph Lee, MD, MBA, of the University of Washington. Their remarks were posted Aug. 16 in JAMA Network Open as an invited commentary on a study published the same day.

In the study itself, Mateusz Buda, MSc, Maciej Mazurowski, PhD, and colleagues at Duke describe their work building a deep-learning system with more than 22,000 digital breast tomosynthesis (DBT) image volumes from more than 5,000 patients. The algorithm had a sensitivity of 65%.
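For context on that headline number: sensitivity (also called recall) is the share of true cancers a model flags, i.e., true positives divided by all actual positive cases. A minimal Python sketch of the arithmetic, using made-up counts rather than figures from the study:

```python
def sensitivity(true_positives: int, false_negatives: int) -> float:
    """Recall: the fraction of actual positive cases the model detects."""
    return true_positives / (true_positives + false_negatives)

# Illustrative counts only -- not data from the Duke study.
print(sensitivity(true_positives=65, false_negatives=35))  # 0.65, i.e., 65%
```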

Unexceptional though that was, the team achieved all its stated objectives:

  • Curate, annotate and make publicly available a large-scale dataset of DBT images to facilitate the development and evaluation of artificial intelligence algorithms for breast cancer screening;
  • Develop a baseline deep learning model for breast cancer detection; and
  • Test the model using the dataset to serve as a baseline for future research.

Along with the curated and annotated image dataset, the shared resources include the team’s code, network architecture and trained model weights.
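In practical terms, releasing architecture code together with trained weights lets other groups reload the authors’ model rather than retrain it from scratch. The PyTorch sketch below illustrates the general save-and-reload mechanics with a tiny stand-in network; it is not the Duke team’s actual architecture, and the filename is a placeholder:

```python
import torch
import torch.nn as nn

# Stand-in network: a tiny CNN used ONLY to illustrate the mechanics of
# sharing and reloading trained weights. It is not the Duke architecture.
model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 1),  # single "lesion present" score per image
)

# A group releasing a model would publish its weights like this ...
torch.save(model.state_dict(), "shared_weights.pt")

# ... and anyone with the matching architecture code can reload them.
model.load_state_dict(torch.load("shared_weights.pt", map_location="cpu"))
model.eval()

# Inference on one 2D slice, faked here as a random tensor
# shaped (batch, channels, height, width).
with torch.no_grad():
    score = model(torch.rand(1, 1, 256, 256))
print(score.shape)  # torch.Size([1, 1])
```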

In their discussion, Buda et al. comment that the experimental AI resources they’ve made publicly available represent

“a challenging but realistic benchmark for the future development of methods for detecting masses and architectural distortions in DBT volumes. These factors, including different types of abnormal results, exclusions of different types of cases, and different evaluation metrics, make it difficult to compare our method with those previously presented in the literature. This further underlines the importance of the dataset shared in this study.”

The accompanying commentary by Elmore and Lee isn’t all cheers.

On the contrary, it builds on and underscores several limitations acknowledged by Buda and colleagues.

For one, they write, opening access to the experimental dataset “brings up the issue of patient privacy concerns and the ethics of sharing patients’ medical image data with those who stand to potentially benefit from future commercial development of algorithms using these images.”

More:

“Although the study by Buda et al. does not exceed the performance of already available AI algorithms for screening mammography, the positive outcome remains their attempt to openly share data. However, datasets made public must be of better quality and representative of a screening population to be truly useful. Future models will otherwise risk being trained and tested on the wrong ground truth.”

Study here, commentary here.

Dave Pearson

Dave P. has worked in journalism, marketing and public relations for more than 30 years, frequently concentrating on hospitals, healthcare technology and Catholic communications. He has also specialized in fundraising communications, ghostwriting for CEOs of local, national and global charities, nonprofits and foundations.
