Some CES Respondents Have Illogical Opinions on Police Funding. Blame Binary Questions and Boredom

by Scott Blatte (Class of ‘23)

Discussion of this year’s elections has focused on eye-opening results for the governor’s office in both New Jersey and Virginia, and rightfully so. But there were other elections happening across the country, including referendums in Minneapolis and Austin on reforming their police departments. The Minneapolis initiative would have disbanded the force in favor of a reformed “public safety” department, while the referendum in Austin proposed hiring hundreds of new police officers to combat rising crime. The result? Both failed, indicative of a broader national trend: calls for reform have proliferated while actual reforms have stalled. In this blog post, I explore what the 2020 Cooperative Election Study (CES) can and can’t tell us as to why that might be, specifically looking at support for police funding.

The CES pre-election wave asked a handful of questions on policing, including two about the funding of police. The first asked whether a respondent agreed or disagreed with the statement, “Increase the number of police on the street by 10 percent, even if it means fewer funds for other public services.” The second question was asked immediately below the first, and likewise asked whether the respondent agreed or disagreed with the statement, “Decrease the number of police on the street by 10 percent, and increase funding for other public services.” There have been myriad survey questions asking about funding and police reform more broadly with mixed results. So, what makes the CES data so unique? The fact that it effectively asked the same two questions back-to-back. Wording bias, structure, and the specific proposals all essentially stayed the same, with the only change coming in the question premise: the first proposed increasing funding while the second proposed decreasing funding. Essentially, the CES asked two equal but opposite questions. Thus, one would expect to see roughly equal but opposite results. Do we?

Before I answer, I first want to touch on the previous iterations of these questions; specifically, the only one, a June 2020 Harvard poll. The results are somewhat consistent with what we might expect: most respondents supported increasing police funding while a minority supported decreasing it. Ok, reasonable enough. But when we dive into the magnitudes, something strange pops up: only 51% supported increasing police funding while 62% opposed decreasing it. While the topline results point in equal but opposite directions, the magnitudes differ by 11 percentage points despite these questions being asked back-to-back.

This is exactly what happened with the CES. Both questions had a minority of respondents say they support the proposed change to funding. In fact, as the two graphs below show, the percentages between the two polls are actually very consistent: the question asking about increased police funding saw only a 2-percentage-point difference from June to October, and the question about decreased police funding was virtually identical. But, as the dashed line highlights, that 2-point swing was enough to bring the percentage in support under 50%, thereby flipping the topline conclusion from a majority in support to a majority in opposition.

There are a few important points to focus on. First, the consistency between these two results is a welcome sign and a much-needed win for the polls; they have had a tough go of things after their bad miss in New Jersey this month. Second, despite 4+ months passing between the two surveys, public opinion barely budged. Given how salient policing was as an issue during that time frame, that is a fascinating result, and one that requires further study. More important than either of those observations, though, is that the topline data are puzzling: despite the two questions asking exactly opposite things, both yielded the same result, a majority opposed. So, to make sense of it, I dived into the crosstabs. Specifically, I looked at these two questions together, categorizing people based on their responses to both. A visualization of those results is attached below.

Rather than try to dissect the questions individually, I think this cross-tabulation is a more effective way to understand what the CES is telling us. The first two columns are respondents who gave a common-sense answer: they consistently supported more police funds and opposed less, or vice-versa. Already, we get a more intuitive result: a plurality, not a majority, support increasing police funding, by a margin of about 11 percentage points (43% for an increase, 32% for a decrease).

The third bar is still common-sense, but a little more nuanced: these respondents opposed both proposals. Importantly, both questions are binaries: you can either agree or disagree. The response options across the two questions are thus not collectively exhaustive. In other words, if you like the status quo, how do you indicate your support? Realistically, you don’t; you can only indicate your opposition to change. This provides a strong theoretical basis for classifying those who oppose both as people preferring the status quo. But a second cross-tabulation against a post-election policing question, one in which respondents were essentially asked to express their support for increasing the state legislature’s funding of police using a Likert scale, yields some interesting data: about half of this group again preferred the status quo, while the other half switched their answer. That data can once again be seen in the graph below.

So, is the “stay the same” category perfectly accurate? No. Roughly half of respondents offered an alternative answer when given a second opportunity. But, at the very least, it provides a rough approximation of what many of these voters might have been thinking and is still an improvement over the surface-level analysis. It also highlights another area that warrants exploration in future work.

With that, we now have a more logical — albeit still imprecise — picture of support for increasing police funding. About 43% are supportive of more police funding, 32% are in favor of less, and 19% aren’t fans of either.

Last but not least is the “illogical” or Contradict group, coming in at roughly 6%. How do you have the good fortune of being illogical? Simply agree with both statements. Yes, 6% of the sample want to increase funding for police by 10% and… simultaneously decrease funding for police by 10%. Now, there’s ample evidence that Americans may express contradictory opinions when influenced by a host of factors, including question bias, partisan cues, and selective responding. But that evidence does not explain the phenomenon at play here: these were two questions with near-perfect parallel structure, asked on the same survey page and with wording that should preclude contradictory answers. And yet, 6% of the sample still gave illogical answers.
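To make the four-way grouping concrete, here is a minimal sketch of how respondents could be bucketed in pandas. The DataFrame and column names (`increase_police`, `decrease_police`, coded `True` for “agree”) are hypothetical stand-ins, not the actual CES variable names.

```python
import pandas as pd

# Hypothetical toy data: True = "agree", False = "disagree"
ces = pd.DataFrame({
    "increase_police": [True, False, False, True],
    "decrease_police": [False, True, False, True],
})

def classify(row):
    """Bucket a respondent by their answers to the two funding questions."""
    if row["increase_police"] and not row["decrease_police"]:
        return "More funding"
    if row["decrease_police"] and not row["increase_police"]:
        return "Less funding"
    if not row["increase_police"] and not row["decrease_police"]:
        return "Stay the same"
    return "Contradict"  # agreed with both: the illogical group

ces["group"] = ces.apply(classify, axis=1)
shares = ces["group"].value_counts(normalize=True)
```

Running `value_counts` on the resulting label (with survey weights applied in the real data) is what produces the 43/32/19/6 breakdown described above.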

My preliminary analysis identifies a likely culprit: inattentiveness. This is a fairly intuitive possibility: people who are not paying attention will pick answers without considering their meaning, thus opening the door to contradictions. Comparing response times can reveal whether that intuition is empirically supported. In a normal survey, showing any conclusive evidence on this point is next to impossible: a typical sample size of 1,000 simply does not allow for conclusions about 6% of the sample. The beauty of the CES, however, is that 6% of the (weighted) sample is still 3,000 people, which is sufficiently large for statistical analysis. Moreover, the CES automatically records the time taken on each page of the survey, thus allowing for analysis of specific questions.

Given the wealth of data, I sought to test whether respondents were paying attention by using response time as a proxy for inattentiveness. The results are striking: those with contradictory answers to the policing questions spent, on average, 20 fewer seconds on the page containing those questions.
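A minimal sketch of that comparison, assuming a hypothetical `group` label derived from the two funding questions and a hypothetical `page_seconds` column holding each respondent’s recorded time on the policing page (neither is an actual CES variable name):

```python
import pandas as pd

# Hypothetical toy data: `group` is the respondent's answer pattern on the
# two funding questions; `page_seconds` is time spent on the policing page.
df = pd.DataFrame({
    "group": ["Contradict", "Contradict", "More funding", "Less funding"],
    "page_seconds": [12.0, 15.0, 38.0, 34.0],
})

contradict = df.loc[df["group"] == "Contradict", "page_seconds"]
others = df.loc[df["group"] != "Contradict", "page_seconds"]

# A positive gap means contradictory respondents answered faster on average
gap = others.mean() - contradict.mean()
```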

Graphically, the difference is obvious. Statistically, the results are confirmed: the test statistic for a two-sample difference of trimmed means was over 26. Of course, this was not a controlled experiment, nor did I implement any controls. Nevertheless, the significant difference hints at what I expect is roughly 6% of respondents day-dreaming their way through the survey. And if 6% of a survey’s respondents are giving illogical answers and are not removed, you’ve got a big problem.
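That kind of trimmed-means comparison can be run with SciPy: `scipy.stats.ttest_ind` with the `trim` argument performs Yuen’s trimmed-mean t-test (available in SciPy 1.7+). The arrays below are simulated stand-ins for the real response times, not the CES data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Simulated response times in seconds (hypothetical, not the CES data)
contradict = rng.normal(loc=25, scale=10, size=3000)   # the ~6% group
others = rng.normal(loc=45, scale=15, size=47000)      # everyone else

# trim=0.2 discards the fastest and slowest 20% of each sample before
# comparing means, which blunts the influence of extreme outliers
t_stat, p_value = stats.ttest_ind(others, contradict, trim=0.2)
```

Trimming matters here because page-timing data is heavily right-skewed (a few respondents leave a page open for minutes), so an untrimmed mean comparison would be dominated by those outliers.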

I plan to continue to look at this phenomenon in future work, both to isolate it more concretely by analyzing these respondents’ answers to other components of the survey and to try to identify who these people are (and how we, for lack of a better word and because I loved Loki, can prune them). But this preliminary analysis shows that even on a high-quality panel like YouGov’s, inattentiveness can be a problem and may have a non-negligible effect on overall results.



The Tufts Public Opinion Lab (TPOL) is dedicated to studying contemporary controversies in American public opinion using quantitative data analysis.
