Charles Stewart is the Kenan Sahin Distinguished Professor of Political Science at the Massachusetts Institute of Technology. He has collaborated with Pew since 2010 on the Elections Performance Index.
The index was updated on April 8, 2014, with data from the 2012 election. Stewart, a leading analyst of the performance of election systems, also advised the Presidential Commission on Election Administration. He is the co-editor, with Barry Burden, of The Measure of American Elections, a collection of political science papers that informed the Elections Performance Index, to be published by Cambridge University Press in late 2014.
Q: Why did you choose to work with Pew on developing the Elections Performance Index, and who is the audience for this research?
A: I had worked with Pew for several years and knew that the people and the organization were committed to evidence-based reform in the election arena. I had written about the lack of strong evidence on whether the changes passed on the heels of the 2000 presidential election had actually made a difference, but implementing ideas inspired by Heather Gerken’s 2009 Democracy Index seemed like a big project, so working together made sense for both of us.
The index has several audiences, from election officials to policymakers to the attentive public. Election officials can get a sense of where they stand compared with the past and with neighboring states and, we hope, use that information to advance better laws and resources. For other officials, it provides an independent view of how election administration issues are playing out in their states. And for the public, it serves as a report card showing where performance excels and where it needs improvement.
Q: Measuring elections performance is challenging. How did you work with Pew and the advisory group to conceptualize this project? How did data and data availability play into it?
A: It helped that we had examples to draw from. The Democracy Index was useful because it framed performance in terms of convenience (the ease of registering and voting) and accuracy (how well votes are counted). We ended up measuring election performance along two dimensions, convenience and integrity, and, within each, examining the three major components of the election process: registering, voting, and counting.
In addition to thinking about the idea theoretically, we thought about it practically, particularly in terms of the data we would use. There was no single right answer, other than that the data needed to be of the highest possible quality. We decided to focus on areas in which data were available for all states, or virtually all states. That ruled out many potential indicators, because states either did not measure some things or measured them differently.
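To make that framework concrete, here is a minimal Python sketch of how indicators might be classified along the two dimensions and three components and then screened for data availability. It is purely illustrative: the class, the example indicator, and the 48-state threshold are assumptions for this sketch, not part of Pew’s actual methodology.

```python
# Illustrative sketch only; the names and the 48-state threshold are
# invented here, not drawn from Pew's actual methodology.
from dataclasses import dataclass

DIMENSIONS = {"convenience", "integrity"}            # the two dimensions
COMPONENTS = {"registering", "voting", "counting"}   # the three components

@dataclass
class CandidateIndicator:
    name: str
    dimension: str         # one of DIMENSIONS
    component: str         # one of COMPONENTS
    states_reporting: int  # how many of the 50 states plus D.C. report data

def usable(indicator: CandidateIndicator, min_states: int = 48) -> bool:
    """Keep an indicator only if it fits the framework and data exist
    for all, or virtually all, states (the threshold is an assumption)."""
    return (indicator.dimension in DIMENSIONS
            and indicator.component in COMPONENTS
            and indicator.states_reporting >= min_states)

# A made-up indicator reported by all 51 jurisdictions:
print(usable(CandidateIndicator(
    "registration_rate", "convenience", "registering", 51)))  # True
```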
Q: What did the advisory committee add to the index?
A: I think one thing the advisory committee did was provide credibility for the index within each community it represents. It includes some of the nation’s most distinguished election researchers and administrators, who will vouch for the index within their professional circles.
The advisory committee also provided a reality check on the data. When we began the project, there was concern about the quality of some of it, especially the data from the Election Administration and Voting Survey. Committee members who had worked with those data gave us confidence that we could move ahead more quickly than we had initially planned.
Q: The advisory group considered many more indicators than the 17 ultimately included; how did you work with the group to decide which ones to use? How do you expect this to change as the project continues?
A: We had a very disciplined way of looking at the indicators. Naturally, at the beginning everyone was enthusiastic, throwing out ideas for things that might appear in an index. That large group of what we called “candidate indicators” probably numbered almost 40.
The next step was to ask a series of questions about each candidate indicator: Does it meet the social science principles of validity and reliability? Is it likely to continue to be measured in the future? Do all states measure it the same way? Can it be improved through administrative or legislative action, or are its values driven by factors outside policymakers’ control? We called that last one the “luck vs. skill” question. Whenever possible, we favored skill over luck.
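As a rough illustration, that screening can be thought of as a checklist an indicator must pass in full. The sketch below expresses it in Python; the field names and example indicators are invented for this purpose and are not drawn from the actual candidate list.

```python
# Hypothetical screening checklist mirroring the questions above;
# an indicator advances only if it passes every test.
from dataclasses import dataclass

@dataclass
class Screen:
    valid_and_reliable: bool   # meets validity and reliability standards
    likely_to_persist: bool    # will continue to be measured in the future
    measured_uniformly: bool   # all states measure it the same way
    reflects_skill: bool       # movable by policy ("skill"), not "luck"

def passes(s: Screen) -> bool:
    return all((s.valid_and_reliable, s.likely_to_persist,
                s.measured_uniformly, s.reflects_skill))

# Invented examples: polling-place wait times respond to administration
# ("skill"); election-day weather does not ("luck").
candidates = {
    "polling_place_wait_time": Screen(True, True, True, True),
    "election_day_weather": Screen(True, True, True, False),
}
print([name for name, s in candidates.items() if passes(s)])
# ['polling_place_wait_time']
```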
The committee has also helped screen new ideas for inclusion in future editions. The Pew team and I have begun work to build a set of strong indicators about integrity, and the committee is already providing the same sort of rigorous advice they gave us on the original concepts.
Q: What outcomes did you want from the Elections Performance Index when you started? How has the project met your expectations?
A: One outcome I had hoped for was that we would begin to build a larger community of academics and election officials dedicated to working together to understand and improve elections based on evidence and scientific approaches. I think that’s happened. I now know more state and local election officials who are interested in applying rigorous methods to their work, and I am invited to talk with them about it more often.
I also see a growing group of academics who view the elections field as one in which they can perform a public service by applying their data-analysis skills to improving elections. Assuming both communities continue to grow, I feel comfortable tracing that growth to the index project.
However, there are strong forces in the field of election administration that resist a fact-based approach. That said, when we gathered the data for the second version of the index, several states told us that they were taking it very seriously and were eager to see their ratings improve. The index is on people’s minds now. That’s a good sign.
Lastly, a promising sign is the evidence-based approach of the Presidential Commission on Election Administration. The commissioners understood that to navigate the partisanship in this area they needed to ground their findings in the best research and to avoid making statements when there wasn’t a scientific consensus.
Additionally, the report cites several people who were on the advisory committee—often because they were invited to testify before the commission. The report has been very well received, and though I don’t think the index can claim credit for that, I do think it’s fair to say that without the index project, the commission would have had considerably less evidence on which to rest its work.