Data Science for Law Enforcement
By Kat Campise, Data Scientist, Ph.D.
While data science is generally associated with building machine learning models, making precise predictions, or helping to design new digital (or non-digital) products for the likes of Google, Facebook, Apple, and Amazon, data scientists can also make a positive impact on social good.
Crime is a ubiquitous problem that affects every society, regardless of country or culture. Whether an individual or group is focused on stealing tangible property, stealing an identity, or physically harming others, we live in an era where just about everything we do is recorded and stored on servers throughout the world. As such, a treasure trove of data is available to law enforcement agencies — that is, if the social media giants and other data-collecting organizations (including individuals who may record a crime on their phones) agree to share it.
In the U.S., the incidence of violent and property crime has steadily decreased since 1990. Per the U.N.’s 2017 report on world crime, similar decreases have been evident on a global level over the last 17 years. Given that criminals use the same (or similar) tools as non-criminals, e.g., smartphones, social media, etc., and machine learning/AI algorithms are becoming “smarter” in their nuanced detection of likely fraudulent or criminal activity, the downshift in crime statistics shouldn’t be all too surprising.
But, there is definitely more that can be done by using data science tools and processes for identifying an actual threat before the criminal (or criminals) can carry out their plan.
Imperfection of Data Science as Digital Detective Work
Walking the fine line between dismissing an empty threat on Twitter and determining that an outrageously inflammatory post is the real deal isn’t an easy task. This is compounded by the fact that machine learning algorithms (and potentially, AI algorithms) are susceptible to the biases of those who create them. Granted, these biases may not be intentional. But anyone entering the field of data science needs to be continually aware that, at the end of the day, we are still operating from a filtered perceptual view of the world.
Machine learning and AI operate based on the quality and amount of data fed into them as well as the underlying assumptions of the scientist (or programmer) who is constructing the algorithms. Thus, at every step of the data science process, you must perpetually self-analyze and test your own internal assumptions about what you’re doing in addition to meeting the statistical assumptions for the particular statistical model you’re about to build and deploy.
A recent example of unintended bias is Microsoft’s 2016 experiment, in which a chatbot plugged into Twitter “famously created a racist machine,” switching from tweeting that “humans are super cool” to praising Hitler and spewing out misogynistic remarks.
Yet another example of predictive algorithm imperfection is the 2014 case where a risk assessment score identified an African American female as having an increased likelihood of being a repeat offender; however, a Caucasian man who actually had been convicted of prior felonies (unlike the aforementioned female) was rated as a lower risk for repeat offenses. There was another twist here: the Caucasian man continued to commit crimes whereas the African American female did not.
The message here is clear: algorithms are a tool for decision making; they are not pure mathematical models that can be left unattended after deployment, nor should they be wholly relied upon. Notably, as AI becomes the go-to method for an increasing number of systems, another concern surrounds the expanding complexity of machine learning and AI algorithms.
Ultimately, AI will develop its own heuristics — shortcuts for decision making — and it’s likely this will create a black-box scenario whereby “it will become increasingly difficult for even the engineers who created an AI system to explain the choices it made.” As such, it’s imperative that data scientists maintain an awareness of this likelihood.
We do not yet know precisely what outcomes AI will produce, and the data we have about individuals is heavily context-dependent; it’s also not perfectly clean, nor is it by any means complete. Thus, data scientists in law enforcement should, ideally, have knowledge and experience in the world of law enforcement techniques and training; this is in addition to the in-depth math, statistics, and programming skills required for accurate prediction modeling.
Data Science Crime Detection Tools
In the past, law enforcement agencies operated within a fragmented data-sharing landscape. The situation has improved through the installation of national databases, such as the Federal Bureau of Investigation’s National Crime Information Center, crime mapping software, internal organizational management tools, e.g., CompStat, and OneDOJ — to name just a few.
Law enforcement data scientists might also be tasked with culling information from social media channels and merging that information with data from internal databases; this may or may not require data-scraping experience, depending on whether someone else is already dedicated to gathering this data.
The caution here is to ensure that you’re building an accurate crime detection profile and discerning between data that represents circumstantial evidence (people run Google searches or post odd things to social media that may or may not express criminal intent) and data that points toward a genuine resolve to commit a crime.
Also, as frequently stated, not all data points may be available, e.g., text analytics of interview transcripts containing victim or witness statements.
Natural Language Processing and Law Enforcement
This brings us to another area of expertise that law enforcement data scientists should have: natural language processing.
While our handy machines (computers) are wondrous at massive computational tasks, human language, in all of its contextualized diversity and perceptual translation, isn’t readily decomposed into the binary realm of ones and zeros. Human beings are still far more masterful in this area. For example, a deep learning algorithm was programmed to write Burger King advertisements. The algorithm in question arrived at a veritable word salad: “Gender reveal bad. Tender reveal young. It is a boy bird with crispy chicken tenders from Burger Thing.” So, AI is still not ready to replace human writers (or readers), as it doesn’t currently have the capacity to understand the meaning of what it’s producing.
Taken a step further, words (sentences and paragraphs) also have meaning to the reader, which has a high likelihood of being perceived differently than the writer intended.
In sum, human language has a compendium of moving parts that statistical algorithms need to take into account — no easy feat, considering math is its own language. Law enforcement isn’t merely a static collection of data points. Accurate assessment of crime requires interpreting eyewitness accounts, 911 dispatch recordings, and police body cam footage (more specifically, trying to understand conversations that can’t be clearly heard while reviewing the footage). While these can be recorded, transcribed by AI, and even analyzed for emotional distress and inflection points (an area of continued research), the human element remains essential for accurately interpreting what a report is likely to conclude.
Areas of study within natural language processing include computational linguistics and computational psychology.
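To make the context problem concrete, here is a deliberately naive keyword flagger in plain Python — a sketch, not any agency’s actual method. Both the phrases and the keyword list are invented for illustration; the point is how badly a context-free, bag-of-words approach can misread intent.

```python
# Invented keyword list for illustration only.
THREAT_KEYWORDS = {"kill", "bomb", "attack"}

def naive_flag(text):
    """Flag text if any token matches a threat keyword (no context at all)."""
    tokens = {t.strip(".,!?").lower() for t in text.split()}
    return bool(tokens & THREAT_KEYWORDS)

# A harmless idiom gets flagged (false positive)...
print(naive_flag("I'm going to kill it at the interview tomorrow!"))  # True
# ...while an actual veiled threat slips through (false negative).
print(naive_flag("You will regret crossing me."))  # False
```

This is exactly the gap that computational linguistics aims to close: the signal lives in context and intent, not in individual words.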
Digital Evidence Management: Body Cams
Video evidence isn’t always easy to analyze.
Everyone (or almost everyone) has a phone-based camera, and body cams are increasingly required for police officers throughout the U.S. Hours upon hours of footage are often reviewed during an investigation, and the video quality may not be ideal due to inclement weather or body cam malfunctions.
Data scientists can assist in shortening video review time by utilizing AI to designate “a zone in the video frame, where any movement causes an alert to generate.” Once the alert is established, the frame can then be tagged for further analysis.
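The zone-alert idea described above can be sketched with simple frame differencing — a minimal NumPy illustration, not a production video-analytics pipeline. The zone coordinates, threshold, and synthetic frames are all assumptions for the example.

```python
import numpy as np

def motion_alert(prev_frame, curr_frame, zone, threshold=25.0):
    """Return True if mean pixel change inside `zone` exceeds `threshold`.

    zone is (row_start, row_stop, col_start, col_stop) in the frame.
    """
    r0, r1, c0, c1 = zone
    diff = np.abs(curr_frame[r0:r1, c0:c1].astype(float)
                  - prev_frame[r0:r1, c0:c1].astype(float))
    return bool(diff.mean() > threshold)

# Two synthetic 100x100 grayscale frames: a bright blob appears inside the zone.
prev = np.zeros((100, 100))
curr = np.zeros((100, 100))
curr[40:60, 40:60] = 255  # movement inside the watched zone
print(motion_alert(prev, curr, zone=(30, 70, 30, 70)))  # True
```

In practice, a frame that trips the alert would then be tagged for human review, which is where the time savings come from.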
Facial recognition is another area where data scientists can help law enforcement to identify criminals who have not yet been detained on outstanding warrants.
Although body cam AI is still in the research and refinement stage, facial recognition algorithms can be used to immediately notify police officers if they are in the presence of a perpetrator who has a current warrant out for their arrest.
Data Science Algorithms for Crime Detection
It should be clear by now that algorithmic accuracy is largely dependent on the skill of the statistician or data scientist and the quality of the data.
There is a plethora of statistical models available for use within law enforcement. The list below is not exhaustive; it’s a starting point rather than a comprehensive detailing of all possible models relevant to crime detection. Furthermore, only general descriptions are included, as an extensive analysis of when, how, and why to use each statistical tool is beyond the scope of this article.
- Logistic regression: This is one of the most utilized techniques in machine learning (and data science). It’s a relatively straightforward binary classification tool; however, it can expand in complexity. Logistic regression may be used as the model for repeat offender risk scores.
- Clustering algorithms: In terms of crime, cluster analyses can also be used in risk scoring as well as predicting which neighborhoods will have a higher likelihood of increases or decreases in crime rates. There are different types of clustering techniques including hierarchical, centroid-based, distribution-based, and density-based. One or more of the clustering tools can be used contingent on the type of data and the goal of the analysis.
- Convolutional Neural Networks: Many machine learning and AI models have been patterned after biological frameworks, and CNN falls into this classification. Basically, the construct of this model simulates how the visual cortex operates via neuronal-type nodes which constitute layers between the input and output values. The CNN technique is frequently used for image and video analysis.
- Convolutional Deep Belief Networks: Although CDBNs are also used in video analysis, they can be useful for analyzing audio as well. This is particularly the case when high dimensional data is the input value (or values) for analysis. CDBNs comprise multiple layers, one visible and the rest hidden. Classifying the probability of who is speaking and what is being said on audio recordings is one (but not the only) way that CDBNs can be useful in data science for law enforcement.
- Recurrent Neural Networks: RNNs have been used for speech and text (or handwriting) recognition in the past. Within the RNN classification, long short-term memory (LSTM) networks have also been used for speech recognition and speech-to-text tasks. Emergency call analysis and reconstructing other types of audio transcripts, such as witness and victim interviews, are among the possible uses of RNNs in law enforcement.
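To ground the first item in the list above, here is a minimal logistic-regression sketch in plain NumPy, trained by batch gradient descent. The two features (prior offenses, months since release) and all the data are synthetic and invented purely for illustration — real risk-scoring systems use far richer inputs and far more scrutiny.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logistic(X, y, lr=0.1, epochs=2000):
    """Fit weights (including an intercept) by batch gradient descent."""
    Xb = np.hstack([np.ones((X.shape[0], 1)), X])  # prepend intercept column
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        grad = Xb.T @ (sigmoid(Xb @ w) - y) / len(y)
        w -= lr * grad
    return w

def risk_score(w, x):
    """Predicted probability of the positive class for one observation."""
    return sigmoid(w[0] + w[1:] @ x)

# Synthetic rows: [prior_offenses, months_since_release]; label 1 = re-offended.
X = np.array([[0, 36], [1, 30], [4, 6], [5, 3], [2, 24], [6, 2]], dtype=float)
y = np.array([0, 0, 1, 1, 0, 1], dtype=float)
w = fit_logistic(X, y)
print(f"Score for 5 priors, 4 months out: {risk_score(w, np.array([5.0, 4.0])):.2f}")
```

The output is a probability rather than a verdict — which is precisely why, as argued earlier, such scores should inform human judgment rather than replace it.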
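Similarly, the centroid-based clustering mentioned above can be sketched as a bare-bones k-means in NumPy, grouping synthetic 2D incident coordinates into hotspots. The coordinates are fabricated for the example; real crime mapping would use geographic distance and many more features.

```python
import numpy as np

def kmeans(points, k, iters=50):
    # Deterministic greedy farthest-point initialization (a k-means++-style seed).
    centroids = [points[0]]
    for _ in range(k - 1):
        d = np.min([np.linalg.norm(points - c, axis=1) for c in centroids], axis=0)
        centroids.append(points[d.argmax()])
    centroids = np.array(centroids, dtype=float)
    for _ in range(iters):
        # Assign each point to its nearest centroid.
        dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each centroid to the mean of its assigned points.
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = points[labels == j].mean(axis=0)
    return labels, centroids

# Two synthetic incident "hotspots" centered near (0, 0) and (5, 5).
rng = np.random.default_rng(1)
pts = np.vstack([rng.normal(0, 0.5, (20, 2)), rng.normal(5, 0.5, (20, 2))])
labels, centroids = kmeans(pts, k=2)
print(centroids)  # one centroid per hotspot
```

Density-based alternatives (e.g., DBSCAN-style methods) are often preferred for geographic data, since hotspots are rarely spherical; the choice depends on the data and the goal of the analysis, as noted above.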
Research into increasing predictive accuracy is ongoing. Academics, industry-specific machine learning engineers, data scientists, and AI enthusiasts continue on their quest to advance the machine learning, deep learning, and AI tools for a variety of objectives.
Beyond creating a better “virtual assistant” lies an area of application where algorithms and informed human decision-making can, and should, operate hand in hand.
We shouldn’t over-rely on either component. Contrary to the purist belief that algorithms are devoid of the biases inherent in human perception, we’ve witnessed that this is, in fact, untrue. The tools we use for prediction are susceptible to confirmation bias.
It can’t be overstated that, ultimately, algorithmic constructs are created by humans for humans — whether to improve a consumer shopping experience or to quickly identify and prevent criminal activity. Needless to say, humans are prone to error.