Facial Recognition Technologies in Policing: A Balancing Act?

Authors: Priya Vedavalli and Tvesha Sippy 

 

This piece was written for the Data Governance Network Blog.

 

The European Commission, the executive arm of the European Union, has proposed legislation restricting the police's real-time use of facial recognition in public spaces, along with other high-risk applications of Artificial Intelligence (AI). There is a sound logic to this move: globally, evidence of the technical accuracy of facial recognition technologies (FRTs) remains thin, and their benefits must be weighed against the harms and potential infringement of individual rights. These are issues any jurisdiction deploying FRTs must think through. Indian policing, by contrast, has seen a steady increase in the adoption of such technologies.

For instance, Hyderabad is among the top 20 cities globally with the highest number of CCTV cameras (18 of the 20 are in China), and the National Crime Records Bureau (NCRB) has put out a request for proposals for an Automated Facial Recognition System (AFRS). This is happening against the backdrop of not just a lack of evidence globally, but also insufficient evidence of FRTs' effectiveness in the Indian policing context.

Technology is often adopted prematurely, without sufficient governance capability, oversight, or evidence. States sometimes do this in imitation of developed-country contexts, something that Pritchett, Woolcock and Andrews (2010) describe as 'isomorphic mimicry'. An example is the push for advanced predictive policing when the crime data it depends on is of poor quality. The mismatch between expectations and the actual capacity to implement such reforms places inevitable stress on the system, causing it to ultimately fail in its intended objective.

First, the accuracy of these technologies is suspect. An independent review of the London Metropolitan Police's use of FRT by Pete Fussey, a surveillance expert at the University of Essex, found it to be accurate in just 19% of cases. Further, FRTs have been shown to be biased along racial and gender dimensions. MIT Media Lab's Gender Shades project tested FRT products from Amazon, IBM, and Microsoft and found that they worked better on lighter skin tones than on darker ones. They also misclassified female faces as male; IBM's system, for example, erred on this front 34% of the time.
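It is worth being precise about what a figure like 19% measures. In live deployments, the operative metric is the precision of alerts: of all the matches the system flags, what share turn out to be genuine? A minimal illustrative calculation follows; the counts below are hypothetical, chosen only to reproduce a 19% figure, and do not come from the Fussey review.

    # Illustrative only: precision of FRT alerts, i.e. the share of alerts
    # later verified as genuine matches. The counts here are hypothetical.
    true_matches = 8     # alerts verified as correct identifications
    false_matches = 34   # alerts judged to be misidentifications

    precision = true_matches / (true_matches + false_matches)
    print(f"Precision of alerts: {precision:.1%}")  # 19.0% with these counts

Note that a system can report high accuracy on benchmark datasets and still generate mostly false alerts in the field, because precision also depends on how rarely watchlisted faces actually appear in the crowds being scanned.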

Second, these technologies have not been independently evaluated for their effectiveness in the Indian policing context. In a country as ethnically and phenotypically diverse as India, their accuracy needs to be assessed before deployment. At present, there is insufficient evidence even on how 'hard' technologies (material-based, as opposed to information-based 'soft' technologies), such as the CCTV cameras that drive FRTs, impact crime. Studies in the US and the UK indicate that CCTV is more effective at reducing property crime than violent crime when tested in private spaces like parking lots, while its effectiveness in public spaces remains undetermined (Byrne and Marx, 2011). CCTV was also found to be more effective in the UK than in the US, for reasons ranging from differences in follow-up periods to variation in the technology used. The lack of comprehensive evidence on the factors that make these technologies effective at reducing crime is even more alarming in an uncharted context such as India.

Consider, too, the discretionary power that police officers enjoy when constructing the training dataset, or 'watchlist', for an FRT; this shapes how suspicion is generated. Although envisioned as automated, operationalising FRTs is not divorced from human intervention: "its use constitutes a socio-technical assemblage that both shapes police practices yet is also profoundly shaped by forms of police suspicion and discretion" (Fussey et al., 2020). Media reports indicated that the Delhi Police used FRT to screen and filter law-and-order "miscreants" at a political rally in December 2019, matching faces against a dataset of images collected at earlier protest sites. Such instances raise concerns about discriminatory or presumptive policing.

As Neyroud and Disley (2008) argue, one cannot separate the discussion of these technologies' effectiveness from their impact on civil liberties. They recommend four factors to consider before adopting any new technology, applying both to the technology itself and to the agency using it. One, integrity in information sharing, particularly with external agencies. Two, clear identification and demonstration of the value added to policing, through rigorous, independent evaluations. Three, transparency around the rules governing the technology, so that they are accessible to the public. And lastly, public confidence, which is perhaps the most crucial element in ensuring the legitimacy of the police in the eyes of the public.

Moreover, independent oversight mechanisms in India, such as the Police Complaints Authorities (PCAs), have failed to foster such confidence. Adequate training on the technical aspects of FRTs, including how to capture high-quality images and when a probe image should be deleted, is vital (Garvie et al., 2016). Currently, there is a lacuna in training on technology and investigation. In a survey conducted by IDFC Institute, for instance, close to 70% of newly recruited deputy superintendents of police in Madhya Pradesh reported that they did not know, or only partly knew, the relevant technology even after receiving field training. The same survey revealed that police officers would like their indoor training to focus more on practical aspects such as investigation.

Despite these shortcomings, the potential benefits that the police and society can reap from FRT adoption must also be recognised. From better evidence gathering and a reduced scope for 'third-degree' methods to a lower cost of crime and augmented police capacity, there are many arguments in favour of such technologies. The conversation around efficiency, however, needs to be balanced with safeguards and evidence.