
by Peter Moskos

January 13, 2015

From the [not so] sharp minds at ProPublica

I've written before about their foolish and inaccurate claim that the black-to-white racial disparity among those shot by police is 21 to 1. I said, given the group they look at, the number is 9 to 1. But without any sleight of hand or misleading highlighting of statistical outliers, the actual black-to-white racial disparity, the take-home stat, is 4 to 1.

More than two months passed. The inaccurate 21-to-1 figure was bandied about by NPR, the New York Times, and The Economist.

Then, on a quiet Christmas Eve, ProPublica's Ryan Gabrielson and Ryann Grochowski Jones posted an article to address criticism (mainly brought by me and David Klinger) of their initial study.

I don't want to waste much more time on this; I've wasted too much already (see 1, 2, 3, and 4). But I do find it funny that in their piece, after many paragraphs focusing on the red-herring non-issue of Hispanic undercount, there it is -- buried in the 11th paragraph -- they kind of admit I'm right: the ratio might be 9 to 1!

Maybe I should just stop there and say, "you're welcome."


But, but, I can't! Because then there it is -- a revisionist gem -- they say the actual number doesn't really matter: "And whether 9 times as great, 17 times or 21 times, the racial disparity remains vast, and demands deeper investigation."

What the fuck?!

The 21-times ratio is the only real point of your original article (which is still up and unapologetic)! And the only real point of my bitching was that 21-times is wrong. Now even 4:1 or 9:1 may be too large. And it does demand deeper investigation. So why not investigate deeper (or at least crib from those who have)?

According to ProPublica: "the data is far too limited to point to a cause for the disparity." Actually, no. The disparity can be explained pretty well, without too much "deep" digging. What I'm about to tell you isn't the "deepest" investigation, mind you, but it's a start. And it's on me, guys. Gratis.

The black-to-white racial disparity (all ages) of those killed by cops since 2000 (and reported to the UCR, which is a big caveat) is 4 to 1. The racial disparity among those who kill cops is 5 to 1 (the rate is per capita, mind you, not the absolute number). I'd bet $20 it holds for teens, too.
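To spell out the arithmetic: a per-capita disparity is the ratio of two rates, not of two raw counts. A minimal sketch with made-up illustrative numbers (these are not the actual UCR figures) shows how a 4-to-1 ratio falls out:

```python
# Hypothetical counts and populations, for illustration only --
# NOT the actual UCR figures.
black_killed, black_pop = 100, 40_000_000
white_killed, white_pop = 125, 200_000_000

rate_black = black_killed / black_pop   # deaths per capita
rate_white = white_killed / white_pop

disparity = rate_black / rate_white
print(disparity)  # 4.0 -- a "4 to 1" per-capita disparity
```

Note that the raw counts alone (100 vs. 125) would tell you almost nothing; it's the denominators that make it a disparity.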

Now one could say, as does Prof. Klinger, that the data on police-involved homicides are simply too limited to make any point at all. But if one is willing to play with bad data (and I'm game, if they're the best we got), then you can't say your conclusion is fine but... other conclusions? ...well, "the data is far too limited."

Finally -- and it goes back to my point about outliers and cherry-picked bullshit data -- ProPublica has the chutzpah to say they can't go back further in time -- thus including more data, increasing statistical validity, and decreasing the magnitude of their conclusion -- because, get this: they can't get accurate population numbers.

So let me get this right: they're fine using fucked-up UCR data on justified police-involved homicide, they're fine cherry-picking an outlier three-year sample with an "n" (total cases) of 62, but they wouldn't dare look at more years because we can't estimate the US population between 2001 and 2007? Are they on crack? Are they stupid? Or are they simply blinded by ideological bias? I honestly do not know. But it's a nonsensical line of statistical integrity for them to draw.

Here it is in their words:
Using Census 2000 and Census 2010 data for baselines assumes that the ratio of populations remain static, and that a snapshot of population rates for a subset of time can be assumed to be accurate for an entire period. We know that's not true.... To test the critics' argument, we calculated risk ratios for as far back as the American Community Survey data goes (2008) [ed note: the ACS actually goes back to 2005, but whatever]. From 2006 to 2008, the risk ratio was 9.1 to 1 (with a 95 percent confidence interval 6.19, 13.39).
First of all, stop the fancy talk about "risk ratio" and "confidence interval." You either don't know what you're talking about or you're knowingly trying to mislead.

Speaking above your reader's head is a dirty rhetorical trick to hoodwink gentle readers into trusting your statistical acumen (which is pretty crappy). As my grand pappy used to say, "Ain't no need to use a 25-cent word when a 5-cent one will do." (See, now I'm usin' the reverse rhetorical trick by affectin' an aww-shucks-I'm-just-a-common-guy style of speech here.) For what it's worth, my papou was an immigrant who spoke with a Greek accent.

"Risk ratio" here means nothing more than "more likely." "Confidence interval," well, if you're going to use it, explain it. Better yet, explain it accurately* or at least point out that it supports 9:1 more than 21:1.

More to the point, it's pointless to discuss statistical nuances of irrelevancy! Of all the problems in your analysis, you're going to draw the line at estimating population in Census off-years? Really?! It's like we're sitting in your rusted jalopy and you tell me you can't drive me home because the windshield wipers aren't working. But you fail to mention that the engine is broken!

Of course we can estimate population figures, you fools! The US population grew 9.7% between 2000 and 2010. Talk about easy math! Go on, be bold, you dirty devil: assume linear population growth for all categories. Divide 10% by 10. It comes out to 1% a year. I know it's not perfect, but it'll be close enough; trust me. (Actually, population growth of 9.7% over 10 years comes out to about 0.926% a year compounded continuously.)
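The back-of-the-envelope version, in code (the 9.7% decade growth and the 2000 baseline are the published census figures; everything else is the interpolation described above):

```python
import math

growth = 1.097  # US population grew 9.7% from Census 2000 to Census 2010

# The "divide 10% by 10" version: about 1% a year, linearly
linear_annual = (growth - 1) / 10              # 0.0097

# Compounded annually: (1.097)^(1/10) - 1, about 0.93% a year
compound_annual = growth ** (1 / 10) - 1

# Compounded continuously: ln(1.097) / 10, about 0.926% a year
continuous_rate = math.log(growth) / 10

# Estimating an off-census year, e.g. 2004, from the 2000 baseline
pop_2000 = 281_421_906                         # Census 2000 total
pop_2004_est = pop_2000 * growth ** (4 / 10)
```

All three rates land within a few hundredths of a percent of each other, which is the whole point: the interpolation error is trivial next to the error already baked into the UCR counts.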

Will this population estimate be perfect? No. Is it good enough? Yes. Will it tell you far more about what you claim to show? Of course. Is that why you won't do it? Probably. Would this population estimate be the single most accurate number in your entire analysis? Abso-fucking-lutely.

14 comments:

Adam said...

Any chance you'll publish something about this, so a wider audience can hear the truth about these statistics? The "cops are gunning for blacks" narrative just grows and grows. I saw this whopper in the New York Times today, from an interview with Professor Judith Butler:

"[T]he point is not just that black lives can be disposed of so easily: they are targeted and hunted by a police force that is becoming increasingly emboldened to wage its race war by every grand jury decision that ratifies the point of view of state violence."

Peter Moskos said...

"Comparative literature and critical theory at the University of California, Berkeley"? Talk about a police expert.

It's like all those white poetry grad students who attacked the cops on the Brooklyn Bridge. I think they're best ignored (until they attack cops, after which they are hunted down).

As to a larger audience, I can't really compete with ProPublica, but it's out there for when people care, hopefully to come up on Google.

bacchys said...

While I think there are issues with police uses of force that need to be addressed, these efforts to highlight a racial problem based on (incomplete) absolute numbers of those shot by police are wrong on many levels.

First, the data is far from being enough to draw any conclusions.

Second, it ignores that in at least some of these shootings the officer's use of force was justified. It's the height of arrogance to insist the cop who justifiably used deadly force (say, because the perp fired at him) did so only because of the suspect's race. But there's no effort in these claims to separate the wheat from the chaff.

Peter Moskos said...

None at all.

And it's especially scandalous since we know the vast majority of police-involved shootings are justified.

Adam said...

The more thoughtful police critics will not harp on the number of blacks killed by police every year. Rather, they'll argue that when it comes to killings of unarmed, innocent people, blacks are more likely to be victims than whites. For starters, I think that may be true, and yet it may not reflect any racial bias on the part of police officers. Because there's a lot more crime in black neighborhoods, there are a lot more cops, and therefore blacks, in general, are more likely than whites to have encounters with police (good encounters and disastrous ones). Second, I've never heard anything other than anecdotal evidence to support their argument. "Look at all these high-profile cases of innocent, unarmed black people being killed by cops! What more do you need!?" Well, some statistics would be nice. Has anyone tried to study that narrower issue? I wonder if, even accounting for the higher presence of police in black neighborhoods, cops really are more likely to make tragic miscalculations with black subjects than with whites. It's possible, though experimental evidence suggests otherwise. (I often point to this study. Also see these New York Times articles from 2007 and 2014.)

Jay Livingston said...

Much of the JJPSP article is about whether cops' responses are different from those of non-cops. But the important question is whether cops respond differently to Blacks than to Whites. Reading quickly through it:

"Again, consistent with previous findings, participants shot armed targets more quickly when they were Black, rather than White. . . and they indicated don’t shoot in response to unarmed targets more quickly when they were White, rather than Black. . . .. These simple effects did not depend on sample [i.e., cops were no different from non-cops], and both of the simple target race effects within object type were significant for each of the three samples."

Peter Moskos said...

Jay (or anybody), can you interpret those statistics for me? How much more likely or more quickly were police shooting blacks? I'm trying to parse the substantively significant data from the statistically significant findings.

A lot of the study seems to show that the bias present in the community is much less present in police. "But this bias [shooting blacks quicker, I think] was weaker, or even nonexistent, for the officers." (p 1015, discussion). And in the conclusion: "the officers’ ultimate decisions about whether or not to shoot are less susceptible to racial bias than are the decisions of community members. ... officers’ training and/or expertise may improve their overall performance ... and decrease racial bias in decision outcomes."

But what does that mean? What's the bottom line? I hate to end something with, "more research is needed."

Peter Moskos said...

One of the Times pieces (2007) says racial bias amounts to "10 to 20 milliseconds longer to make a decision." It also says "But when it mattered — pull the trigger or not — the police officers tuned out race."

And that Michael Wines piece is good. He's a good reporter.

Jay Livingston said...

Peter, I don't have time to read the whole article closely. I'll just repeat what I said before: with respect to the recent shootings, it's not very relevant that police responses might be less affected by bias than are the responses of non-police. The question is whether police responses are affected by bias, and if so, how much.

Peter Moskos said...

Agreed. I just don't see any real evidence that shows such a bias.

Adam said...

Jay, thanks for the clarification about that study. I don't have time to read it closely either, but I wonder if the findings might be more relevant than you're suggesting. Cops may be quicker to shoot *armed* black subjects than armed white subjects, but isn't the most important question whether they're quicker to shoot *unarmed* black subjects than unarmed white subjects? The 2007 NYT article, summarizing the report, says "[police] shot at about 13 percent of the unarmed black men and roughly the same number of the unarmed white men. By contrast, the civilians shot at about 35 percent of the unarmed black men and 29 percent of unarmed white men."

Also, the 2014 NYT article describes a recent Washington State University study which had some surprising results:

"Whether officers, veterans or civilians, the subjects consistently hesitated longer before firing at black suspects and were much more likely to mistakenly shoot an unarmed white suspect, the researchers found. And when they failed to fire at an armed suspect — a potentially fatal mistake — the suspect was about five times more likely to be black than white. The study’s 36 police officers were the lone exception in failing to fire: The suspect’s race wasn’t a factor in their decision not to shoot."

Anthony said...

I want to love the WSU study. I want to buy it dinner, walk through the city with it, and kiss it on the mouth at the end of the night.

The problem is, that study was completed with a FATS machine. And as useful as the FATS machine is in training, behavioral study is a whole other animal.

Rob S. said...

Hi Peter,

I saw you on the Glenn Show. It's good to find allies amid all the hyperbole on this issue. Your blog is a great resource, and I look forward to reading your books.

The ProPublica study has been pissing me off for a long time. It is so careless and misleading. While there are many problems with it, I want to focus on one. Some people, researchers included, seem hesitant to assume that the proportion of deadly force used against a particular group should be nearly equal to the proportion of violent crime committed by that group. Another way of saying it is that for any particular comparison of groups we'd expect the risk ratio of being killed by police to be equal to the risk ratio of committing murder. A group's homicide rate seems like a good proxy for other violent, aggressive, and risky behavior that is likely to get one killed by the police. I don't know why we'd assume otherwise.

Do readers agree with my assumption?

One more thing. I'm no statistician, but I do think it is okay, maybe even desirable, to calculate confidence intervals even when you have all the population data. Google superpopulation. The basic idea as I understand it is that you imagine the actual population to be theoretically infinite, a superpopulation. So even if you have all the data from the actual population, you understand it to be a sample from the infinite superpopulation. When thinking about it like this, it makes sense to calculate confidence intervals. So for example, say our population of interest is 8th graders in Ohio in 2014. The superpopulation would be all possible 8th graders in Ohio in 2014, and the real population is thought of as a sample from the imagined superpopulation. It's far out and mind-blowing, I know. This all said, I have no idea if the researchers were thinking of it this way and calculated their intervals correctly.

Concerned citizen said...



The 21-1 stat seems to be widely accepted. It was cited today in a NYTimes op-ed piece, "A Better Standard for the Use of Deadly Force."

Uggh.