Do you think the police should be allowed to use facial recognition technology? It’s a question I am asked almost every day as the UK’s Biometrics and Surveillance Camera Commissioner.
Take the case of Frank R. James. In April, James set off smoke bombs in a crowded subway train in Brooklyn, New York, before shooting ten commuters and disappearing in the ensuing chaos. Police quickly identified him after finding the key to his rental van near the crime scene, along with a 9mm semi-automatic handgun, fireworks, unexploded smoke grenades and a hatchet. The ensuing manhunt for James involved hundreds of officers from multiple federal and state law enforcement agencies. It lasted 29 hours.
Freeze the picture one hour after the incident and, in many ways, you have a textbook justification for the use of live facial recognition in law enforcement: a terrorist attack in a densely populated area, taking place on a transport system equipped with extensive surveillance cameras, an identified suspect and available footage of what he looks like. He’s armed, he has fired 33 rounds into a crowded carriage and detonated several devices – and he has disappeared.
If, at that moment, the police had the technical ability to feed James’ image into the combined area surveillance systems and “order” the cameras to search for their suspect, on what basis could they refuse to do so – and how could they do it responsibly? It’s that “if” we need to confront now, as the capabilities of live facial recognition continue to mature. Yet the parameters of where, when, how and by whom the technology can be used in less extreme cases remain undefined.
In James’ case, it does not appear that such a level of surveillance capability was available. Instead, law enforcement named the suspect and released his photo, urging the public to keep sending in footage from the crime scene and elsewhere as it pieced together his movements. This reliance on citizens’ technology – and their willingness to share what it captures – is also a key feature of the evolution of the policing of public space. Here’s why.
The Surveillance Relationship
Surveillance of public space by the police in England and Wales is largely governed by the Surveillance Camera Code of Practice. But practice has moved on from the world originally envisaged by the Code’s framers – a world in which the police took images of the citizen – towards one in which the police also need images from the citizen. Following any incident, many police forces now routinely make public requests for images that may have been captured on personal devices.
Not all of these interactions between the public and the police are benign or predictable. Often, the citizen also captures footage of the police themselves: the faces of many law enforcement officers who attended the Brooklyn crime scene, for example, were broadcast around the world on news channels. Now that people have access to surveillance tools that only a decade ago were reserved for state agencies, the risk that facial recognition technology will be used to thwart vital aspects of our criminal justice system – such as witness protection, victim relocation and covert operations – is evident. It is an aspect that has received relatively little attention in the many debates on the subject.
Some might say that if a city were to synthesise its overall surveillance capability across its transport network, street cameras, traffic and dashboard cameras, body-worn devices and employee smartphones, it would simply be doing what it already asks citizens to do when it appeals for their images – albeit in a much more efficient, effective and less randomly intrusive way. Arguably so, but to get to the freeze-frame moment above, a city would first need to develop a fully integrated public space surveillance system equipped with facial recognition technology, sound and voice analytics, vehicle licence plate readers and a host of other features invisible to the naked eye.
Once installed, the capabilities of such an integrated surveillance system would extend far beyond detecting suspected terrorists as they flee the scene of an attack. It would, for example, be spectacularly effective at ticket barriers, only letting through passengers known to have purchased a ticket or travel card. It would be unrivalled in its ability to find people wanted for other crimes – from sex offenders and speeding drivers to illegal immigrants or former prisoners who have breached their licence conditions. But would the use of intrusive surveillance be justified to suppress all these types of criminal activity? If not, which ones – and who would decide?
Possible, Permissible, Acceptable
Even though such an integrated surveillance system has yet to be built, similar questions must be asked about the use of facial recognition algorithms by the police and other public institutions. Not only the technological capabilities of these systems need to be examined, but also the legal rationale and societal expectations behind their use.
Valid questions persist, for example, about the accuracy of facial recognition algorithms in identifying faces in a crowd, especially those belonging to non-white people. One must also consider the scale on which such a system is supposed to operate: how many millions of faces, for example, is it proportionate to scan in order to find a person who failed to appear in court on a charge of being drunk and disorderly?
Then, of course, there is the issue of education. A basic public understanding of the mechanics behind the technology is essential if law enforcement is to gain societal support for its large-scale deployment. Yet it’s still unclear how many people actually understand that facial recognition algorithms require “training” on thousands, if not millions, of sample images. Citizens will also want to know what the threshold for use of facial recognition technology really is: whether it is reserved for tracking down those suspected of serious crimes, or extends to spotting individuals not complying with Covid-19 regulations.
We also need to be able to trust our technology partners. After all, most of the UK’s biometric surveillance capability is provided by private companies, primarily Hikvision and Dahua. The fact that these two companies have recently been accused of contributing to the Chinese government’s persecution of Uyghur Muslims in Xinjiang (allegations both companies deny) underscores the need to build ethical standards into public procurement – not just of facial recognition technology, but of CCTV cameras in general. Police and government agencies are unlikely to gain the public’s trust to use such systems when the make and model of the cameras in our schools, hospitals and public places are the same as those ringing the fences of concentration camps.
Mechanisms to hold government and law enforcement agencies accountable for the use of this technology are also essential. This includes establishing clear guidelines on minimum standards for the use of facial recognition algorithms in schools, police stations and other public buildings, the level of accreditation – if any – required of companies for its use in commercial spaces, and how members of the public can complain if they believe their image has been improperly collected.
In a technology-driven world where decisions are increasingly likely to be automated, the need for clear oversight and accountability is evident. Otherwise, to paraphrase Hannah Arendt, we will have a tyranny of surveillance without a tyrant.
The Surveillance Question of Our Time
Last month, I had the pleasure of being invited to speak at the launch of the Ada Lovelace Institute’s three-year review of the challenges of biometric technology. The Ryder Review examines the legal and societal landscape in which future policy discussions on the use of biometrics will take place, and the extent to which the current distinctions between established, regulated biometrics (fingerprints and DNA) and others, including facial recognition, adequately reflect both the risks and the opportunities.
The event noted that while it has been more than a decade since the government abandoned the concept of mandatory ID cards, we are nonetheless seeing a shift from the standard model of policing – humans searching for other humans – to an automated, industrialised process, a moment that has been compared by some to the shift from angling to deep-sea trawling. Any of us could be stopped at transport hubs, outdoor arenas or school grounds on the basis of an AI-generated selection and required to prove our identity to the satisfaction of the reviewing officer or the algorithm itself. The ramifications of AI-based facial recognition in policing and law enforcement are both profound enough to be taken seriously and close enough to require our immediate attention.
After the arrest of the alleged Brooklyn assailant, New York City Police Commissioner Keechant Sewell said: “We were able to shrink his world quickly, so he had nowhere to go.”
Facial recognition technology will dramatically increase the speed at which police can shrink the world of a fugitive terrorist suspect in the future. To what extent it should be allowed to shrink everyone’s world in the process is the surveillance question of our time – a question which, so far, remains unanswered.