“They’ve probably saved millions of lives using these technologies,” he says, “and the result is that it sells [the necessity of] government surveillance to many Chinese.”
Does “good” surveillance technology exist?
Once someone (or some entity) starts using surveillance technology, the slope becomes extremely slippery: no matter how noble the motive for developing and deploying it, the technology can always be turned to more nefarious purposes. For Chin and Lin, China shows how “good” and “bad” uses of surveillance technologies are always intertwined.
They report extensively on how a surveillance system in Hangzhou, the city that is home to Alibaba, Hikvision, Dahua, and many other technology companies, is built on the well-intentioned premise of improving the city’s governance. Here, with a dense network of street cameras and a cloud-based “city brain” processing data and issuing orders, the “smart city” system is used to monitor disasters and enable rapid responses to emergencies. In one notable example, the authors spoke with a man who accompanied his mother to the hospital by ambulance in 2019 after she nearly drowned. The city was able to turn all the traffic lights along their route green, cutting the time it took to reach the hospital. It is hard to argue that this is anything but a good use of the technology.
But at the same time, “smart city” technologies have become almost indistinguishable from “safe city” technologies, which aim to enhance police forces and track down suspected criminals. Hikvision, the surveillance company that partially powers the life-saving system in Hangzhou, is the same one that facilitated the mass incarceration of Muslim minorities in Xinjiang.
China is far from the only country where police rely on a growing number of cameras. Chin and Lin highlight how the NYPD has used and abused cameras to build a facial recognition database and identify suspects, sometimes with legally dubious tactics. (MIT Technology Review also reported earlier this year on how police in Minnesota built a database to monitor protesters and journalists.)
Given this track record, Chin argues, technology itself can no longer be considered neutral. “Some technologies by their very nature lend themselves to harmful uses. Especially with AI applied to surveillance, they lend themselves to authoritarian outcomes,” he says. Like nuclear researchers, scientists and engineers in these fields need to be more mindful of their technology’s potential for harm.
It is still possible to disrupt the global surveillance technology supply chain
A sense of pessimism hangs over any discussion of how surveillance technology will advance in China, because its invasive deployment has become so widespread that it’s hard to imagine the country reversing course.
But that doesn’t mean people should give up. One key way to intervene, Chin and Lin argue, is to disrupt the global supply chain of surveillance technology (MIT Technology Review wrote about this just last month).