Notes from China: 3 of the most important—and scariest—things you’ll read about AI all month

In the late 1700s the English social theorist Jeremy Bentham sketched out a prison in which a single guard could control hundreds of inmates. The trick was to let the men know they could be seen 24/7 while the guard on duty was hidden from their view.   

In 2020 Bentham’s vision is, for all intents and purposes, reality—in China. There the lone “guard,” or at least the head one, is an autocrat whose powers of enforcement grow by the day.

And Xi Jinping doesn’t just have 1.4 billion circumspect souls within eyeshot. He also wants to sell his watching technologies and know-how to like-minded regimes around the world.

Ross Andersen, a deputy editor of The Atlantic, visited China last year and does a deep dive into its movable “panopticon”—the name given to Jeremy Bentham’s prison conception—in that publication’s September issue.

It’s a must-read for anyone following ethics concerns raised by emerging AI applications, whether in healthcare or any other sphere of human activity.

That’s because, not surprisingly, AI is the sleepless motor-control center of the Middle Kingdom’s massive surveillance system.

And the system’s aims seem to go well beyond rewarding good behavior patterns and punishing “bad” ones.

Among the most chilling observations Andersen turns in from his reporting are three that deserve an especially wide hearing:

1. Xi wants to use AI to build an all-seeing digital system of social control, patrolled by precog algorithms that identify potential dissenters in real time.

“Much of the footage collected by China’s cameras is parsed by algorithms for security threats of one kind or another,” Andersen writes. “In the near future, every person who enters a public space could be identified, instantly, by AI matching them to an ocean of personal data, including their every text communication, and their body’s one-of-a-kind protein-construction schema. In time, algorithms will be able to string together data points from a broad range of sources—travel records, friends and associates, reading habits, purchases—to predict political resistance before it happens.”

2. AI-powered sensors lurk everywhere, including in the purses and pants pockets of Uighurs. That’s the Muslim minority of whom at least 1 million are believed to be held in Xi’s “reeducation camps” simply for being Muslim.

“Uighurs can travel only a few blocks before encountering a checkpoint outfitted with one of [the] Xinjiang [region’s] hundreds of thousands of surveillance cameras,” Andersen reports. “Footage from the cameras is processed by algorithms that match faces with snapshots taken by police at ‘health checks.’ At these checks, police extract all the data they can from Uighurs’ bodies. They measure height and take a blood sample. They record voices and swab DNA. Some Uighurs have even been forced to participate in experiments that mine genetic data, to see how DNA produces distinctly Uighurlike chins and ears.”

3. The emergence of an AI-powered authoritarian bloc led by China could warp the geopolitics of this century. It could prevent billions of people, across large swaths of the globe, from ever securing any measure of political freedom.

“Much of the planet’s political trajectory may depend on just how dangerous China’s people imagine AI to be in the hands of centralized power,” Andersen points out. “Until they secure their personal liberty, at some unimaginable cost, free people everywhere will have to hope against hope that the world’s most intelligent machines are made elsewhere.”

To read the article in its entirety, click here.

Dave Pearson

Dave P. has worked in journalism, marketing and public relations for more than 30 years, frequently concentrating on hospitals, healthcare technology and Catholic communications. He has also specialized in fundraising communications, ghostwriting for CEOs of local, national and global charities, nonprofits and foundations.

