Live facial recognition technology used by the Metropolitan and South Wales police forces has been branded ‘unethical’ and possibly illegal in a new report. Researchers at the Minderoo Centre for Technology and Democracy at the University of Cambridge have called for an end to its use, saying “the risks far outweigh any small benefit that may be derived from its use”.
The technology captures footage from a live CCTV camera feed, searches for faces and compares characteristics to those on a pre-defined watchlist of “people of interest” in real time. When one is spotted, it generates an alert that officers can then investigate.
The problems with live police facial recognition
Researchers at the Minderoo Centre created “minimum ethical and legal standards” that should govern any use of facial recognition technology and tested UK police deployments against them, finding that none meets the minimum.
Professor Gina Neff, executive director of the centre, said her team had compiled a list of all ethical guidelines, legal frameworks and current legislation to create the measures used in the tests. These are not legal requirements, but rather what the researchers believe should be used as a benchmark to protect privacy, human rights, transparency and bias requirements, and to ensure accountability and provide oversight of the use and storage of personal information.
All current police uses of live facial recognition have failed the test, says Professor Neff. “These are complex technologies, they are difficult to use and difficult to regulate with the laws we have on the books,” she told Tech Monitor. “The level of accuracy achieved does not justify the level of invasiveness required for them to work.
“These implementations of facial recognition technologies have not been proven to be effective and have not been proven to be safe. We want documentation of how the technology is used, regulated and monitored, and call on the police to stop using live facial recognition technology.”
A spokesman for the Metropolitan Police said there was a sound legal basis for the way it used live facial recognition technology, explaining that there had been significant judicial review in both the Divisional Court and the Court of Appeal.
The Court of Appeal recognised the legal basis through common law police powers, finding that the existing legal framework for regulating LFR deployments contained safeguards and allowed scrutiny of each use, including whether there was a proper law enforcement purpose and whether the means used were strictly necessary.
The technology has enabled the Met to “locate dangerous individuals and those who pose a serious risk to our communities”, the spokesman explained. Officers are deploying it with a primary focus on the most serious crimes and finding people wanted for violent crimes or those with outstanding warrants who are proving hard to find, they said.
“Operational deployments of LFR technology support longer-term violence reduction initiatives and have led to a number of arrests for serious offences including conspiracy to supply class A drugs, assaulting emergency workers, possession with intent to supply class A and B drugs, grievous bodily harm and being unlawfully at large after escaping from prison,” the spokesman added.
The current use of facial recognition by the police does not meet standards
Professor Neff says the study measured police deployments against best practice, existing principles and current guidelines, and found them lacking. “According to the best principles, practices and laws that exist today, these implementations do not meet this standard,” she says.
“That’s why we say the technology is not fit for purpose. These deployments do not meet the standards for safe operation of large-scale data systems. For example, what safeguards were put in place during the procurement process? That may not be covered by the law applied in a court case, but it is something we have guidelines for in public agencies.”
Deputy Information Commissioner Stephen Bonner told Tech Monitor: “We continue to remind police forces of their responsibilities under data protection law when using live facial recognition technology in public places. This includes ensuring that deployment is strictly necessary and proportionate, and that the public is informed.”
Imogen Parker, associate director of policy at the Ada Lovelace Institute, which recently carried out a wide-ranging legal review of the management of biometric data in England and Wales, said this new research highlighted the ethical, legal and societal concerns surrounding biometric technology.
That review of biometrics regulation, led by barrister Matthew Ryder, found that existing legal protections are fragmented, vague and “not fit for purpose”.
“The fact that all three audited police deployments failed to meet minimum ethical and legal standards continues to demonstrate how governance gaps lead to harmful and legally questionable deployments in practice,” Parker said in an emailed statement.
“The challenges presented by biometric technologies are not limited to facial recognition. Attempts to use biometric data to infer emotions, characteristics or abilities – often without a scientific basis – raise serious questions about responsible and legal use, something the ICO has recently highlighted.”
New legislation is needed to govern facial recognition
Parker argued there was an urgent need for new, comprehensive legislation governing all biometric technology, not just facial recognition and police use. “This needs to be monitored and enforced by a new regulatory function that requires minimum standards of accuracy, reliability and validity, as well as an assessment of proportionality, before these technologies are deployed in a real-world context,” she said.
Without it, the legal basis for live facial recognition will remain unclear, she says, adding that “the risk of harm continues. There should be a moratorium on all uses of one-to-many facial identification in public places until such legislation is passed.”
A Home Office spokesman told Tech Monitor: “Facial recognition plays a crucial role in helping the police tackle serious crime, including knife crime, rape, child sexual exploitation and terrorism. It is right to support and empower the police to use it, but we are clear that it must be done in a fair and proportionate way to maintain public trust.”
Professor Neff says the message is simple: “Don’t implement this technology, as the risks are far greater than any small benefit that might be gained from using it. Don’t.”