
by Peter Moskos

February 12, 2017

"A Bird's Eye View of Civilians Killed by Police in 2015"

More on the article in Criminology & Public Policy by Nix, Campbell, Byers, and Alpert. My previous post pointed out that if you use 2016 data rather than 2015 data, their conclusions would totally change.

[Update: also see Nick Selby's take on this. And David Klinger's]

How do we get data on police-involved shootings?

Trick question. We don't! A few departments, like the NYPD, issue great annual reports on shots fired by police. But other than that, we don't know. We don't know how many people cops shoot. So at best we're left with those shot and killed by police. And that's probably less than half of those shot by police.

When academics call for "more study," it's usually a cliché. But the need here is real. We don't know how many Americans get shot by police each year. Are you effing kidding me?!

Given that, there's nothing wrong with using the best data you have. And I'm partial to using the Washington Post data myself. But that doesn't mean the data are good. (By good I mean valid, in that they show what they claim to show.) (I also have a bias problem with their "ticking" counter, like last year's shootings numbers are still going up. No, dude, every time I click on 2016 data, it's going to go to 963. You're not actually compiling the data on the spot.)

1st question: Is the basic number of people shot and killed by police correct?

Answer: Probably. It's an unknown unknown, but we have a lot of reasons to think most killings are here.

2nd question: Is their coding correct?

Answer: Depends on what you want. For race, probably. For threat, probably not. The data might be "reliable" (you might get the same code if you did it again). But what does "threat" labeled "other" mean? And how is that different from "undetermined"?

Others have pointed out to me that reporters don't have the expertise to judge what experienced police officers are trained to see. There's a great deal of truth to that. But more importantly, is the Post categorization valid? We don't know.

Say somebody gets killed on the street. How does that data get to us?

Well, in the traditional manner -- going from the street to the Uniform Crime Report (UCR) -- usually somebody calls 911 because a crime happened. Some young officer shows up and takes a report. This is a local form, for a local department, not at all coded to the standards requested by the UCR (Hispanic data is a key issue here). The cop writes a report that is collected by their sergeant toward the end of the shift. If it's well enough written, it goes up to some supervisor, then to some police data consolidator, and then, once a year, to the FBI.

At each stage it might get "cleaned up" a bit, as needed. And then, 9 to 21 months after the incident occurred, it gets published in the UCR as an index (Part I) or Part II crime. I've actually been able to check individual incidents I handled, later, in the UCR data. It checked out. All the facts were basically correct.

But you only know what the UCR tells you, and it isn't much. Nevertheless the UCR is considered the "gold standard" of crime data. But it sure ain't perfect. And it's particularly bad when it comes to police-involved shootings. Mostly because most departments simply do not report data on those killed by police.

Because of that, after Michael Brown in Ferguson, the Washington Post (and to a worse extent the British Guardian) said "we're going to start counting." Good on them, because nobody else was. [As was pointed out to me, and I should have mentioned, killedbypolice was doing it first.] They use whatever they can, which means google searches of news accounts, basically.

So a cop (or criminal) shoots somebody. Some local reporter (most likely) with a police scanner goes to the scene and files a report. People don't get killed by police that often. It bleeds, so it leads.

That reporter either does or does not do a good job. They gather some of the information that seems relevant. But since they weren't there, they don't really know what happened. It's called an investigation. Who do you believe? The cops say the guy was armed; his family says he wasn't. Reporters file a story, and then the Washington Post has to decide if the guy was armed. Usually (for good reason) they go with the cops' version. But what if the cop is lying? Isn't that the crux of the matter? Even if it doesn't happen much, how would we know? Of course, high-profile cases get more investigation.

Which system is better? Neither. Both. It depends. But no existing data-gathering system is universal, mandatory, or really gets to the context of the incident.

But then even more, there's the subjective recording of data.

Miscoding threat level

The Washington Post labels a threat as either "attack," "other," or "undetermined." That's an odd trichotomy. Police care if a shooting is "justified" or not (aka "good" or "bad"). Courts care if it was criminal or not. The public may care if it was "necessary" or not. These are all different standards. But how can you tell, thirdhand, if a shooting was "good"?

The article's authors equate "other" with non-attack. This is wrong.

Take Paul Alfred Eugene Johnson, who robbed a bank with replica guns.
He forced the bank employees into the vault at gunpoint, told them he would kill them if they called police, and stole cash, police said shortly after the robbery.
Surveillance images from both robberies show someone dressed in similar-looking white hooded sweatshirts and carrying guns in their left hands.
There was a crazy chase. Johnson got out of his car and officers opened fire. I wasn't there, but I'm willing to call this a justifiable shooting. The threat level in the data is coded as "other," but in the journal article this gets recoded to "non-attack"? Come on, now.

Kevin Allen charged at officers with a knife. Kaleb Alexander had a gun he wouldn't drop. Troy Francis chased his wife and roommate with a knife, and then charged at responding officers. Hashim Abdul-Rasheed, previously not guilty by reason of insanity in an attempted murder case, tried to stab a Columbus, Ohio, police officer and was then shot and killed. Markell Atikins was wanted for the death of a 1-year-old, and then threatened officers with a knife. Tyrone Holman threatened to kill officers with a rifle and a grenade. Joseph Tassinari told an officer he was armed (he was) and then reached for his waistband. Harrison Lambert threatened his father with a knife before officers responded.

What do all these cases have in common (along with mental illness in most of them)? They're all categorized as "other" in the threat department. I don't fault the Washington Post for how they categorize. They may not have proof of attack beyond an officer's (self-justifying) account. I wish they did better, but they do what they need to do. (And nobody is doing better.) I do fault others who then group all these "others" into "non-attack" (n = 212), implying the cops did wrong.

I'm more curious about the threat label the Post calls "undetermined" (n = 44). Many of the potentially worst shootings are in this category. And yet: "Cases involving an undetermined threat level were excluded from multivariate regression models." I'm not certain why. Couldn't you go one by one and look at them? Isn't that what researchers do? I looked at a few.

The Post says Robert Leon:
exchanged gunfire with police, stole another car at gunpoint and fled.
Leon was first accused of shooting at cops and then shooting himself.
This account seems simply to be not true. Further investigation may have revealed that Leon didn't have a gun and died from police bullets. I wasn't there. I don't know. But it sure seems like an odd one to me.

The "unarmed" issue

If you're looking for bad shootings, "unarmed" sure seems like a good place to start. But it's not enough. "Unarmed" is a flag, but it is no guarantee that a suspect isn't a lethal threat. Officers have been and will be attacked and killed by "unarmed" suspects.

Some of these cases, like white officer Stephen Rankin killing unarmed black William Chapman, resulted in the officer's criminal conviction. The Washington Post codes Chapman as attacking the officer. The jury may not have thought so.

The problem here, one the researchers seem to have run into, is that if you look at "unarmed" suspects and those categorized as "non-attack" (the ones people are most concerned about), you don't have a large enough n (number of cases) to do statistical analysis.

In 2015, you'd be down to a grand total of 50 people shot and killed by cops. It's enough for an outrage of the week, but you can't do much data analysis with 50 cases. And if you were to use "undetermined" rather than "other" as meaning "non-attack" (I think a better but still horribly flawed categorization), you'd be down to a total of 9 cases.
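For readers who want to see how quickly the n shrinks, here is a minimal sketch of that subsetting in Python. The records below are invented for illustration; the field names `armed` and `threat_level` follow the Post's published dataset, but verify against the actual CSV before relying on them.

```python
# Illustrative records shaped like the Washington Post dataset.
# The people here are made up; only the field names mirror the real data.
records = [
    {"name": "A", "armed": "gun", "threat_level": "attack"},
    {"name": "B", "armed": "unarmed", "threat_level": "other"},
    {"name": "C", "armed": "unarmed", "threat_level": "undetermined"},
    {"name": "D", "armed": "knife", "threat_level": "other"},
    {"name": "E", "armed": "unarmed", "threat_level": "attack"},
]

def subset(records, armed, threat):
    """Return records matching an 'armed' status and a threat label."""
    return [r for r in records
            if r["armed"] == armed and r["threat_level"] == threat]

# The article's grouping: unarmed plus "other," treated as non-attack.
unarmed_other = subset(records, "unarmed", "other")
# The (arguably better, still flawed) alternative: unarmed plus "undetermined."
unarmed_undetermined = subset(records, "unarmed", "undetermined")

print(len(unarmed_other), len(unarmed_undetermined))  # 1 1
```

Run against the real 2015 data, these same two filters should produce counts on the order of the 50 and 9 cases cited above, assuming the same coding decisions.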


leintel said...

Obviously, great post, etc., etc. One nit to pick: You say, " after Michael Brown in Ferguson, the Washington Post (and to a worse extent the British Guardian) said "we're going to start counting." Good on them, because nobody else was."

Well, as much as Messrs Clark and Swain at the Post and Guardian, respectively, would love that to be the case, it's wrong. There was someone else already counting - in fact they both based their initial counts and work on his: D. Brian Burghart had been running Fatal Encounters for years when the Post and Guardian started their counts.

Avi said...

Everyone always looks at the absolute number of (unjustified) police killings. But isn't it relevant to look at that number in the larger picture?

According to the FBI statistics I found, there were 11,205,833 arrests in 2014 (https://ucr.fbi.gov/crime-in-the-u.s/2014/crime-in-the-u.s.-2014) and 10,797,088 arrests in 2015 (https://ucr.fbi.gov/crime-in-the-u.s/2015/crime-in-the-u.s.-2015). Around fifty (unjustified, or 950 if looking at all) deaths out of 10 million potentially violent interactions is so small - just .0005% - that I suspect in any other study it wouldn't even register as a blip. Even bringing that number up to 1,000 deaths is just .01%.

Don't you think this is an important factor to consider?

Avi said...

To clarify further what I meant in the previous comment:

Obviously, any number of unjustified killings is a problem which needs to be addressed. But a problem with a frequency of .0005% is hardly one to raise a hue and cry over, as we've seen happen with police shootings. In fact, the outcry isn't even over the total number of deaths - it's over the racial disparity within that number, and when one looks at that number in relation to the total number of arrests for each race, the difference is so tiny that it's not even worth mentioning. Here's how the numbers break down for 2015:

According to the Washington Post data for 2015 there were 43 unarmed (or weapon unknown) killings of white suspects. There were 45 unarmed (or weapon unknown) killings of black suspects.

According to the FBI statistics for 2015 there were 5,753,212 white arrests and 2,197,140 black arrests.

43/5,753,212 = .000747%
45/2,197,140 = .00204%
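As a quick sanity check of the arithmetic above (the counts are exactly the ones quoted in this comment, not independently verified):

```python
# Re-checking the rates quoted above. Counts are as cited in the comment:
# WaPo 2015 unarmed/weapon-unknown killings; FBI 2015 arrests by race.
white_killings, white_arrests = 43, 5_753_212
black_killings, black_arrests = 45, 2_197_140

white_rate = white_killings / white_arrests * 100  # as a percentage
black_rate = black_killings / black_arrests * 100

print(f"{white_rate:.6f}% vs {black_rate:.5f}%")  # → 0.000747% vs 0.00205%
```

(The second rate rounds to .00205% rather than the .00204% above; the comment's figure looks like a truncation rather than a rounding.)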

I just don't see how such a minuscule difference in racial groups can be considered relevant enough to say that it indicates anything of any significance.

Peter Moskos said...

Leintel: You are right. And I should have known better. I actually *did* know, because I've used their data... as early as April 2015 (see: http://www.copinthehood.com/2015/04/killed-by-police-1.html).

My problem with that data (which does not mean it didn't exist) is that it was very messy. Hard to use. It's poorly laid out, with different variables combined into one column. And I think the Washington Post improved on accuracy. But indeed, it started at killedbypolice.net.

I'll correct the reference in the post.

Peter Moskos said...

Avi, I agree. The small number *is* important. I think people's focus on racial disparity in shootings is misplaced in terms of priorities. We have bigger fish to fry. This is a big country. We're talking a few dozen bad shootings a year. Maybe a few hundred that are legally and even morally justified... but didn't have to happen. And a racial disparity that might be measured in single digits, if at all. We could spend a lot of time and effort to reduce those numbers. Or we could focus on problems that affect and could save thousands of lives. I'm just afraid we can't do both.

Avi said...

> I think people's focus on racial disparity in shootings is misplaced in terms of priorities.

Yes, I agree. But I'm arguing that it's misplaced, not just in terms of priorities, but in terms of actual statistical relevance. A difference of 0.001293% in unjustified killings between racial groups seems to be so irrelevant it isn't even worth mentioning.

PS - I've been a follower of your blog for around a year now. Thank you very much for all the good work and solid reporting on the topic that you provide.

Peter Moskos said...

I think the bigger problem is that some people only care about racial disparities and not the greater issue of lives. What if the choice were between A) eliminating racial bias in police-involved shootings or B) reducing all police-involved shootings by 1/3, but maintaining some racial bias? Reasonable people could differ, of course, but I'd go with B. Especially since we don't see any evidence supporting implicit bias in the 2016 data (at least with the methods used by the paper supporting the concept with 2015 data).

But along with the above perhaps being a false choice, it's almost irrelevant to consider. Careers have been built on the concept and importance of implicit bias. Programs have been funded. And I'm not certain it does any good. That said, what's the counter position? Nobody I know is *for* implicit bias. It's just such a slippery concept, especially since we don't have good ways of even measuring its existence, much less any proven effective solutions. I'd be more for taking an end-run around implicit bias to make the world a better place regardless of people's subconscious (and even conscious) biases.

Avi said...

Well said.

Peter Moskos said...


You know, I've always wondered who was behind killedbypolice.net. I guess I didn't look too hard (though I don't think D. Brian was public with his identity at first).

But what I found funny is that just by looking at the data, I knew he/they/whoever was behind it, was from out west! It's funny what data -- the dry data -- can tell you, if you're listening. There were some coding clues, like issues with NYC data regarding counties, if I remember correctly.

Nick Leffel said...

Do you believe many of these cases of police shooting unarmed people are justified? Or not? I am a criminal justice major, and most of the cases you see on television where a white cop kills an unarmed black man look justifiable to me. Many times the suspect does not follow orders, and the officer has probable cause to think he is dangerous.

Liberaltarian . . . said...

Everyone always looks at the absolute number of (felonious) police being killed. But isn't it relevant to look at that number in the larger picture?

According to the FBI statistics I found, there were 11,205,833 arrests in 2014 (https://ucr.fbi.gov/crime-in-the-u.s/2014/crime-in-the-u.s.-2014) and 10,797,088 arrests in 2015 (https://ucr.fbi.gov/crime-in-the-u.s/2015/crime-in-the-u.s.-2015). Around fifty (felonious) deaths out of 10 million potentially violent interactions is so small - just .0005% - that I suspect in any other study it wouldn't even register as a blip. Even bringing that number up to 1,000 deaths is just .01%.

Don't you think this is an important factor to consider?

Peter Moskos said...

I don't know if arrests are the right denominator. I would use number of cops on the street. But either way, it's still barely a blip. Yes, it's important for cops not to be too paranoid. That said, concern over cops getting killed hasn't become a major social movement (yet), and one with potentially serious and negative consequences for public safety.

John Bradford said...

Hi Peter, this is a very interesting analysis. I would just like to make you and your readers aware of a data visualization tool I am developing of police use of force resulting in civilian deaths. You can easily switch between WaPo, Guardian, Killed by Police, PKIC, USPSD and the recent data provided by Lott & Moody (2016). You can filter by year, state, or any value of any variable provided by the specific dataset. You can benchmark by population, including cross-tabulations of race, age, and gender. Finally, the app enables you to benchmark by arrests - from the UCR, for several categories of arrest. You might find it useful. Any comments and suggestions are appreciated! (I accidentally deleted this post earlier)

Views from Ashton said...

We have to hold the media accountable. They play on people's hate, fear, and prejudices. CNN, Fox, etc. are trying to pit the population against each other. The only way we will ever overcome is if we actually have a sit down and talk about the things on our mind face to face instead of through statistics and facebook.

leintel said...

Dr. Bradford,
Very nice work on your visualization tool. Thank you!

leintel said...

Dr. Bradford,
I must point out that all you are grabbing is demographics from these databases - what about drugs? I see you have "threat level," but what about (from PKIC) whether the decedent had committed assault prior to the arrival of the police? Or whether the call was self-dispatched or requested by the community? I feel, as a researcher who has spent the last two years working on this problem, that researchers are so blinkered by race that they are not looking at other aspects, such as WHO CALLED THE POLICE. The obsession with race and "implicit bias" to the exclusion of everything else has just got to give way to an exploration of non-bias causes - yes, I understand I am speaking as a heretic. But isn't it possible that in these deeply dynamic, inter-human relationships, in which human beings challenge each other with deadly force, and confront one another's fight-or-flight and survival instincts, SOMETHING other than race comes up?

John Bradford said...

Leintel, thank you for your comments. I hope to not distract too much from the interesting discussion here with a discussion of my data visualization app, but I don't disagree at all with what you've written. I've only scratched the surface with the variables available in both the PKIC and Lott & Moody - which also provide demographic information of the police. My inclusion of population and arrest benchmarks for sex and age categories makes it possible to see other large demographic gaps, such as the rate at which men are killed compared to women, for example, which intuitively problematizes the 'gaps must mean bias' assumption that pervades media.

The goals I had in mind for the interactive data visualization tool are modest. My initial aim and unique contribution was simply to make available population *and arrest* benchmarks, which I hadn't seen produced elsewhere. My implicit audience has primarily been educators since professional researchers no doubt already have the available data. Most of my time has been spent figuring out the code, specifically fast ways to organize and manipulate population and arrest data and the aesthetics of the bar plots - not to settle substantive questions raised by these data. I do intend to include the full range of variables available in the PKIC and Lott & Moody data in the near future. I'd appreciate any further comments or suggestions you may have.

I am deeply skeptical of most of the kinds of inferences researchers have drawn from this kind of data - consisting of small sample sizes involving complex human interactions that are described and categorized in the most superficial manner imaginable - for all of the reasons mentioned on this blog. Published papers are often deceptively conclusive. Even working with the superficial data as is, I've run literally hundreds of different regressions with different model specifications, including and excluding different variables, which of course yield different results. Unfortunately, academics can't publish papers saying the data are mostly noise - at least, it's inordinately more difficult to do so - and hypotheses like implicit bias, while problematic as actual explanations of human action or outcomes of interactions, are easy ways to frame demographic discrepancies that fit into the prevailing zeitgeist. It's also where the money is.

Peter Moskos said...

I'd like to be the first to coin the phrase: "Big Implicit Bias." Not to be confused with "a big implicit bias."