The term "predictive policing" refers to computer systems that use data to forecast where crime will happen or who will be involved. Some tools produce maps of anticipated crime "hot spots," while others score and flag people deemed most likely to be involved in crime or violence.
Though these systems are rolling out in police departments nationwide, our research found pervasive, fundamental gaps in what's publicly known about them.
How these tools work and make predictions, how they define and measure their performance, and how police departments actually use them day to day are all unclear. Further, vendors routinely claim that the inner workings of their technology are proprietary, keeping their methods a closely held trade secret, even from the departments themselves. And early research findings suggest that these systems may not actually make people safer, and that they may lead to even more aggressive enforcement in communities that are already heavily policed.
Our study finds a number of key risks in predictive policing, and a trend of rapid, poorly informed adoption in which those risks are often not considered. We believe that conscientious application of data has the potential to improve police practices in the future. But we found little evidence that today's systems live up to their claims, and significant reason to fear that they may reinforce disproportionate and discriminatory policing practices.