NYPD's Patternizr crime analysis tool raises AI bias concerns

The NYPD has rolled out Patternizr, a machine learning tool that helps analysts identify crime patterns, but the system also highlights how difficult it is to eliminate AI bias in real-world deployments.

The New York Police Department has touted its homegrown crime analysis tool as a success in identifying potential criminals. But while the pattern recognition tool demonstrates the broad potential of advanced analytics in policing, it also raises questions about AI bias.

Patternizr -- a set of machine learning models developed in-house at the NYPD -- is the first crime analysis tool of its kind to be used in law enforcement. It searches through hundreds of thousands of crimes across all 77 precincts in the department's database to find patterns. A "pattern" is considered a series of crimes likely committed by the same criminal or criminals, based on a set of identifying characteristics.
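The NYPD has not published Patternizr's feature set or model internals, but the basic workflow it describes -- comparing a new "seed" complaint against historical complaints and ranking the closest matches for an analyst to review -- can be sketched in a few lines of Python. The attributes, decay windows and weights below are hypothetical stand-ins for illustration, not the department's actual models.

```python
from dataclasses import dataclass
from datetime import datetime
from math import hypot

@dataclass
class Complaint:
    # Hypothetical, simplified complaint record; the NYPD's real schema is not public.
    complaint_id: str
    crime_category: str          # e.g., "burglary", "robbery", "grand larceny"
    occurred_at: datetime
    x: float                     # planar coordinates of the incident, in km
    y: float
    method_keywords: set[str]    # free-text modus operandi reduced to keywords

def similarity(seed: Complaint, other: Complaint) -> float:
    """Toy pairwise similarity between a seed complaint and a candidate.

    Combines a few attribute comparisons with hand-picked weights. Patternizr's
    real features and learned weights are not public; this only illustrates the
    general 'compare a seed against candidates and rank them' workflow.
    """
    same_category = 1.0 if seed.crime_category == other.crime_category else 0.0

    days_apart = abs((seed.occurred_at - other.occurred_at).days)
    time_score = max(0.0, 1.0 - days_apart / 90.0)          # decays over ~3 months

    distance_km = hypot(seed.x - other.x, seed.y - other.y)
    space_score = max(0.0, 1.0 - distance_km / 10.0)         # decays over ~10 km

    overlap = len(seed.method_keywords & other.method_keywords)
    union = len(seed.method_keywords | other.method_keywords) or 1
    mo_score = overlap / union                                # Jaccard similarity of M.O. keywords

    return 0.35 * same_category + 0.2 * time_score + 0.2 * space_score + 0.25 * mo_score

def rank_candidates(seed: Complaint, candidates: list[Complaint], top_n: int = 10) -> list[tuple[Complaint, float]]:
    """Return the candidates most similar to the seed, for an analyst to review."""
    scored = [(c, similarity(seed, c)) for c in candidates if c.complaint_id != seed.complaint_id]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)[:top_n]
```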

The NYPD deployed Patternizr in December 2016 but only recently revealed its existence in an issue of INFORMS Journal on Applied Analytics. Previously, analysts had to manually sift through reports to identify patterns -- a time-consuming, often single-precinct process that relied on analysts' memory and was prone to inefficiency, according to an article published in the journal.

The NYPD hired approximately 100 new analysts to perform crime analysis, training them to use Patternizr as part of their daily routine. Devora Kaye, an NYPD spokesperson, said the organization embraces advanced analytics -- and the analysts who enable it.

"Analytics will continue to play an increasingly important role in law enforcement to help ensure public safety," Kaye said in an email. "However, it's only one important component of good policymaking. Any effective and fair law enforcement policy should be transparent to the public, incorporate feedback from impacted stakeholders and be frequently evaluated for impact."

Screenshot of the NYPD's Patternizr crime analysis tool
Patternizr can take a 'seed complaint' and identify similar complaints based on feature comparison. Grey boxes obscure sensitive and personally identifiable information.

A question of fairness and bias

Applying advanced analytics and machine learning algorithms to a general population carries high stakes, however, and Gartner analyst Darin Stewart said he's concerned about Patternizr inadvertently perpetuating bias.

The NYPD used 10 years of manually identified pattern data to train the pattern recognition tool, removing sensitive attributes like gender, race and specific location from the training data and pattern identification process in the hopes of avoiding bias. Unfortunately, Stewart said that's likely not enough to prevent bias in machine learning-based crime analysis.
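In code terms, that step amounts to little more than dropping columns before training -- which is part of Stewart's point. The sketch below shows roughly what it might look like with pandas; the column names are hypothetical, since the NYPD's actual schema is not public.

```python
import pandas as pd

# Hypothetical column names for illustration; the NYPD's real schema is not public.
SENSITIVE_COLUMNS = ["suspect_race", "suspect_gender", "exact_latitude", "exact_longitude"]

def strip_sensitive_attributes(complaints: pd.DataFrame) -> pd.DataFrame:
    """Drop explicitly protected attributes before model training.

    This is the 'table stakes' step discussed below: features that correlate
    with the dropped columns (coarse location, time of day, and so on) can still
    act as proxies and reintroduce bias.
    """
    present = [col for col in SENSITIVE_COLUMNS if col in complaints.columns]
    return complaints.drop(columns=present)
```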

"Removing race and gender as explicit factors in the training data are basically table stakes -- the necessary bare minimum," said Stewart, who authored a Gartner report on how to control AI bias last year. "It will not eliminate -- and potentially won't even reduce -- racial and gender bias in the model because it is still trained on historical outcomes."


If racial or gender bias played any role in past police actions or pattern identifications, whether or not that's explicitly captured in the data, the models' predictions will still be impacted by race and gender, Stewart explained. He acknowledged that Patternizr may improve public safety for the NYPD and the community it serves, but said there will inevitably be collateral damage if crime analysts increasingly rely on the pattern recognition tool.
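Stewart's point -- that a model trained on historical outcomes can remain biased even when protected attributes are withheld -- is easy to demonstrate on synthetic data. In the hedged sketch below, the model never sees the protected attribute, only a correlated proxy feature, yet its risk scores still differ by group. Every number in the data-generating process is invented purely for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

# Synthetic population: a protected attribute and a non-sensitive proxy feature
# correlated with it (think of a neighborhood indicator). All numbers are invented.
protected = rng.integers(0, 2, size=n)                 # never shown to the model
proxy = protected + rng.normal(0.0, 0.7, size=n)       # correlated, "allowed" feature

# Historical labels produced by a biased process: at the same underlying behavior,
# group 1 was flagged more often than group 0.
behavior = rng.normal(0.0, 1.0, size=n)
label = (behavior + 0.8 * protected + rng.normal(0.0, 0.5, size=n) > 1.0).astype(int)

# Train only on the proxy -- the protected attribute itself is excluded from the features.
model = LogisticRegression().fit(proxy.reshape(-1, 1), label)
scores = model.predict_proba(proxy.reshape(-1, 1))[:, 1]

for group in (0, 1):
    print(f"mean predicted risk for group {group}: {scores[protected == group].mean():.2f}")
# The scores differ by group even though the model never saw the protected
# attribute, because the proxy feature carries that information.
```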

"As Patternizr casts its net, individuals who fit a profile inferred by the system will be swept up," Stewart said. "At best, this will be an insult and an inconvenience. At worst, innocent people will be incarcerated. The community needs to decide if the benefit of a safer community overall is worth making that same community less safe for some of its members who have done nothing wrong."

The NYPD responded to Stewart's concerns in a statement: "We shared the concern of the analyst. That's why, in addition to hiding race, gender and specific location from the algorithm, we ran our fairness test, which measured whether we saw any sign of bias in model predictions. Our tests found that Patternizr did not recommend crimes with suspects of any race at a rate higher than random."
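The NYPD has not published the mechanics of that fairness test, but the description -- checking whether suspects of any race appear among recommendations at a rate higher than random -- suggests a comparison of observed shares against random baselines. The function below is one hedged sketch of how such a check could be structured, not the department's actual test.

```python
import numpy as np

def recommendation_shares_vs_random(recommended_groups: np.ndarray,
                                    all_groups: np.ndarray,
                                    n_random_draws: int = 1_000,
                                    seed: int = 0) -> dict:
    """Compare each group's share of recommended complaints to random baselines.

    recommended_groups: group labels for complaints the model recommended.
    all_groups: group labels for the full pool the recommendations were drawn from.
    For each group, returns the observed share alongside the share seen in
    equally sized random samples; an observed share that almost no random sample
    reaches would flag that group as over-recommended. Illustration only.
    """
    rng = np.random.default_rng(seed)
    k = len(recommended_groups)
    results = {}
    for group in np.unique(all_groups):
        observed = float(np.mean(recommended_groups == group))
        random_shares = np.array([
            np.mean(rng.choice(all_groups, size=k, replace=False) == group)
            for _ in range(n_random_draws)
        ])
        results[group] = {
            "observed_share": observed,
            "mean_random_share": float(random_shares.mean()),
            # share of random draws at least as extreme as the observed share
            "p_value": float(np.mean(random_shares >= observed)),
        }
    return results
```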

The New York Civil Liberties Union told the Associated Press that it had not been allowed to review Patternizr before it was rolled out and said an independent audit of the system is important in ensuring fairness and transparency.

The importance of human oversight

According to Kaye, the NYPD is working to expand Patternizr beyond burglaries, robberies and grand larcenies to petty larcenies and potentially other areas of crime analysis, raising the stakes for keeping bias out of the pattern recognition tool. Several levels of expert human review are still required on top of Patternizr to establish a pattern, which is intended to minimize the chance that a plausible-looking but incorrect recommendation leads to law enforcement action.

"For government applications -- particularly those with high costs to a false positive -- it is important to have a human in the loop," Kaye said. "Preserving the final decision with an individual not only helps pattern recognition technology be more accurate, but it also clarifies accountability when mistakes do occur."

Debra Piehl, the NYPD's senior crime analyst, echoed this sentiment, saying the pattern recognition tool works well when combined with oversight by crime analysts. "It still allows the analysts that work for me to apply their own thinking and analysis," she said. "The science doesn't overwhelm the art."
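The review gate Kaye and Piehl describe -- software suggests, an analyst decides -- can be expressed as a simple barrier between model output and any operational action. The sketch below is illustrative only; the statuses and function names are hypothetical.

```python
from dataclasses import dataclass
from enum import Enum

class ReviewStatus(Enum):
    PENDING = "pending"
    CONFIRMED = "confirmed"
    REJECTED = "rejected"

@dataclass
class SuggestedPattern:
    # Hypothetical structure: a model-suggested grouping of complaints awaiting review.
    complaint_ids: list[str]
    model_score: float
    status: ReviewStatus = ReviewStatus.PENDING
    reviewed_by: str = ""

def record_analyst_decision(pattern: SuggestedPattern, analyst: str, confirmed: bool) -> SuggestedPattern:
    """Record the human decision; accountability stays with a named reviewer."""
    pattern.status = ReviewStatus.CONFIRMED if confirmed else ReviewStatus.REJECTED
    pattern.reviewed_by = analyst
    return pattern

def actionable_patterns(patterns: list[SuggestedPattern]) -> list[SuggestedPattern]:
    """Only analyst-confirmed patterns are ever passed on for further action."""
    return [p for p in patterns if p.status is ReviewStatus.CONFIRMED]
```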
