  • John Ng

Thoughts on "Crime Distortion within the NYPD" by Thomas & Wolff

Updated: Feb 3, 2021

Article Summary: Thomas & Wolff (2020) present a possible method to quantify how much crime was distorted within the NYPD between 2000 and 2013. They focus specifically on burglary statistics because of their availability. They propose a ratio that compares possible crime misclassifications (downgraded versions of burglary, e.g., criminal mischief) against actual reports of burglary. The higher the ratio, the authors argue, the greater the level of crime distortion or downgrading (for example, higher levels of petit larceny with a corresponding reduction in burglary reports). Their first analysis examined group-based trajectories, and they observed three distinct groups: precincts with the lowest crime-severity ratio, those with a slight increase in the ratio, and a third group that showed the largest increase. They then looked at precinct-level characteristics (e.g., measures of affluence, the commanding officer's rank, the rate of officers per 1,000 precinct residents) to see how those factors contributed to these distinct trajectories. Significant characteristics included precinct affluence, the number of residents sentenced to prison, the percentage of the precinct that was foreign born, age cohorts, and population density.


Reflections: Although crime distortion is possible, it's important to consider that several factors can affect how crime is reported (see Porter et al., 2020), including, as the authors note, police discretion. Officers interpret and gather evidence of a crime, and based on that evidence they determine how the crime is recorded. More serious crimes could potentially be downgraded because of an intentional lack of investigative integrity. Regardless of whether an agency adheres to CompStat principles, there should be checks and balances to ensure that a criminal incident is thoroughly investigated, documented, and reviewed by supervisors. There may not always be a nefarious reason for officers to code incidents in certain ways: officers sometimes encounter an uncooperative complainant or witness, or there may not be sufficient evidence to suggest a certain offence occurred. Officers should be coding crimes based on the evidence that exists and on local law. Similarly, how an incident is coded could change based on further investigation - crime statistics are dynamic.


In addition, as the authors admit, their study was based on open-source data, and they did not have access to the specific details of NYPD data or files. Unless the data are audited to determine whether the evidence collected in each police report warrants a given code, any claim of intentional or unintentional data manipulation remains speculative. Officers may in fact be coding based on the available evidence.


Intentional data manipulation is nonetheless possible because of the cultural and external pressures that surround CompStat. Some critics suggest that CompStat is too focused on crime statistics as the bottom line of policing (Moore, 2002). The more policing treats crime statistics (and the need to reduce those numbers) as its bottom line, the greater the incentive to manipulate the data. As Sparrow (2015) pointed out, policing has multiple missions and should be measured on several metrics, not purely on crime statistics (see also Silverman, 2006). For example, the public expects their police agency to control crime, but not at the expense of excessive spending or deteriorating public confidence.


Although geographical accountability is a key aspect of CompStat, the spirit of CompStat was really about problem solving (Weisburd et al., 2003; Silverman, 2006; Weisburd et al., 2006): identifying problems, analyzing them, and then deploying appropriate resources against them. In some sense, it was intended as a step towards institutionalizing problem-oriented policing (Weisburd et al., 2006), not about “chasing the numbers”. The notable pressure experienced in some CompStat meetings suggests that commanders not only need to know what's going on in their jurisdiction but must have an operational solution right away, even if it's based on superficial analysis (Ratcliffe, 2017). Such pressure can create an atmosphere where crime statistics are manipulated to reflect positively on the commander.


One of the concerns the authors point out comes from Eterno and Silverman (2006, 2010, 2012): data from sources other than the NYPD were at odds with the NYPD's own data. This isn't necessarily surprising. Under-reporting is common in police data, which is why public safety officials should not rely solely on crime/police data but also on other sources, including victimization surveys. Extending on this, public safety is a community endeavour - one that requires a collective effort. It also means that police need to work with external partners to check whether the trends they're observing match those of their partners (and, if not, to determine why), thereby creating opportunities to share information and build a comprehensive understanding of what's going on in communities.


CompStat should be about problem solving, using multiple measures of performance, conducting rigorous evaluations to determine what works (a component of evidence-based policing, EBP), and, importantly, collective institutional learning (instead of working in silos). As the authors mention, intentional manipulation can undermine data-driven approaches to policing, including intelligence-led policing and EBP. Lastly, agencies that claim to value data-driven approaches need to prioritize data quality through checks and balances, along with regular audits of their data.


The issue of misclassification speaks not just to the quality of the data but to how important data integrity is to the agency - how the data are captured, maintained, reviewed, and reflected in publicly available statistics. It reflects the professionalism of the agency and, more broadly, the culture and leadership of the organization. The importance of data integrity cannot be overstated, particularly, as the authors note, during a time of "increased accountability, professionalism and transparency" for policing.


John Ng, Hons B.Sc., M.S.


Certified Law Enforcement Analyst, International Association of Crime Analysts

Director of Operations, Canadian Society of Evidence Based Policing

Publications Committee Chair, International Association of Crime Analysts


References:


  • Eterno, J. A., & Silverman, E. B. (2006). The New York City Police Department’s Compstat: Dream or nightmare? International Journal of Police Science & Management, 8(3), 218–231.

  • Eterno, J. A., & Silverman, E. B. (2010). The NYPD’s Compstat: Compare statistics or compose statistics. International Journal of Police Science & Management, 12(3), 426–449.

  • Eterno, J. A., & Silverman, E. B. (2012). The crime numbers game: Management by manipulation. CRC Press.

  • Eterno, J. A., Verma, A., & Silverman, E. B. (2016). Police manipulation of crime reporting: Insiders’ revelations. Justice Quarterly, 33(5), 811–835.

  • Moore, M. H. (2002). Recognizing Value in Policing: The Challenge of Measuring Police Performance. Washington, D.C.: Police Executive Research Forum.

  • Porter, L. C., Curtis, A., Jefferis, E., & Mitchell, S. (2020). Where's the crime? Exploring divergences between call data and perceptions of local crime. British Journal of Criminology, 60, 444–467.

  • Ratcliffe, J. (2017, September 25). It's time for CompStat to Change. International Association of Directors of Law Enforcement Standards and Training. https://www.iadlest.org/Portals/0/Files/Documents/DDACTS/Docs/Evidence/Jerry%20Ratcliffe%20Compstat%20Blog.pdf?ver=2020-01-01-105605-447


  • Silverman, E.B. (2006). CompStat's Innovation. In D. Weisburd & A.A. Braga (Eds). Police Innovation: Contrasting Perspectives (pp. 267-283). Cambridge University Press.

  • Sparrow, M. K. (2015). Measuring performance in a modern police organization. Psychosociological Issues in Human Resource Management, 3(2), 17–52.

  • Weisburd, D., Mastrofski, S., McNally, A. M., Greenspan, R., & Willis, J. (2003). Reforming to preserve: Compstat and strategic problem solving in American policing. Criminology and Public Policy, 2, 421–456.

  • Weisburd, D., Mastrofski, S.D., Willis, J.J., & Greenspan, R. (2006). In D. Weisburd & A.A. Braga (Eds). Police Innovation: Contrasting Perspectives (pp. 284-301). Cambridge University Press.
