Shaky political “science” misses mark on ranked choice voting
What does political science research say about ranked choice voting? A number of misleading studies fall well short of real science
By Steven Hill, DemocracySOS, December 9, 2025
Ranked choice voting (RCV) is an increasingly common election method in the United States. Since 2004, RCV has been used in over a thousand elections by 14 million voters in over 50 cities, counties and states, at local, state and federal levels. Researchers have increasingly published academic papers assessing impacts of RCV on representation, turnout, voter error, participation of various demographic groups, and more. This article summarizes a research paper that we published recently on the Social Science Research Network (SSRN) open-access online platform, assessing the quality and credibility of 41 different studies on RCV (a link to the SSRN paper is here).
We noted a disturbing pattern. Many of the studies that found flaws with RCV used online surveys and mathematical models instead of data from the many real-world RCV elections, while the studies that found benefits with RCV used actual election results. The anti-RCV studies tended to design their surveys and models sloppily, in ways that don’t reflect what happens in real-world elections. Many also lacked comparative context, failing to assess how alternative electoral systems (e.g., plurality, two-round runoffs, Condorcet, approval, or fusion voting) would perform on similar metrics. Usually the critique advanced against RCV applied even more strongly to these other electoral systems, but that fact went unmentioned and unanalyzed.
For example, a number of studies used online mock elections with almost no explanation of how RCV works, then asked users if they were confused. In real RCV elections, there are voter education campaigns, actual candidates campaigning, media reportage and other features of an actual election that were entirely absent from these studies. The results from such flawed designs often contradicted the results from studies based on real-world election data.
Other studies made sweeping claims based on a small sample size and limited evidence, in some cases only a single election. The researchers often cited other research that supports their conclusions, but ignored contradictory research. Citations of the flawed research served the purpose of legitimizing their own flawed research – an unvirtuous circle. Most of these studies did not pass through a rigorous peer-review process.
So the state of much of the political and social science research on RCV is shaky, at best. A number of political scientists have engaged in dubious research practices, including cherry-picking of data and research, misrepresenting what certain studies concluded, citing other researchers’ deeply flawed studies, and failing to recognize the obvious defects in particular studies. One wonders if these political and social scientists, often in a rush to finish their own academic papers, actually read the very research papers they are citing in their own work. In this way, bad research has been perpetuating bad research.
Nevertheless, these studies have become influential because they are uncritically and repeatedly cited by other researchers, media outlets and anti-RCV activists across the US, including MAGA Republicans and political scientists backing alternative electoral systems.
Examples of shaky political science research
Below are a few examples of the 41 studies that we analyzed in our report. These examples are only summaries of our analysis; see the full report for more detailed assessments. In future DemocracySOS articles, we will focus on specific individual studies with more granular details. We examined studies from a number of well-known political scientists and academics, including Nolan McCarty, Lawrence Jacobs, Peter Buisseret, Edward Foley, Benjamin Reilly, Caroline Tolbert, Joseph Coll, Martha Kropf, David Lublin, Todd Donovan, Lee Drutman, David Kimball, Joseph Anthony and more, as well as from universities and policy institutes such as Harvard, Princeton, University of Pennsylvania, Yale, Columbia, FairVote, New America, RepresentWomen and others.
1. “Beyond the Spoiler Effect: Can Ranked Choice Voting Solve the Problem of Political Polarization?” by Nathan Atkinson, Edward B. Foley and Scott Ganz, 2024. SSRN. This study is analyzed in our paper starting on page 3 at this link. This non-peer-reviewed research paper by three university academics used mathematical modeling methods to assess the impact of ranked choice voting on political polarization. The researchers’ methodology did not use any real-world election data from the nearly one thousand actual RCV elections, including from Alaska, one of the focuses of their study, which has had dozens of RCV elections. Instead, they used a respondent survey and simulated elections to generate hypothetical votes and candidates, which they then plugged into their theoretical model.
Specifically, they simulated Alaska elections by assigning survey respondents a partisanship score from -0.5 to +0.5, with negative being left-leaning and positive being right-leaning. They then randomly selected four candidates along this partisanship spectrum and used computer simulations to “tally” elections based on the assumption that each voter would rank the candidates according to how close each candidate’s partisanship score was to the voter’s own score. Based on these far-fetched simulations, the researchers concluded that RCV causes an increase in polarization.
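To make the described setup concrete, here is a minimal sketch of this style of simulation. The voter count, random seed, and uniform ideology draws are illustrative assumptions of ours, not the authors’ actual model:

```python
import random

def simulate_irv_election(n_voters=1000, n_candidates=4, seed=0):
    """Sketch of a spatial-model simulation: voters and candidates get
    partisanship scores in [-0.5, 0.5], and each voter ranks candidates
    by ideological proximity. Returns the instant-runoff winner."""
    rng = random.Random(seed)
    voters = [rng.uniform(-0.5, 0.5) for _ in range(n_voters)]
    candidates = [rng.uniform(-0.5, 0.5) for _ in range(n_candidates)]

    # Each ballot ranks all candidates, closest ideology first.
    ballots = [sorted(range(n_candidates), key=lambda c: abs(candidates[c] - v))
               for v in voters]

    # Instant-runoff tally: repeatedly eliminate the candidate with the
    # fewest first-choice votes among those still in contention.
    remaining = set(range(n_candidates))
    while len(remaining) > 1:
        counts = {c: 0 for c in remaining}
        for b in ballots:
            for c in b:
                if c in remaining:
                    counts[c] += 1
                    break
        remaining.discard(min(counts, key=counts.get))
    return remaining.pop()
```

The sketch shows why such models are only as good as their assumptions: every “voter” here ranks purely on one-dimensional ideological distance, with no campaigns, incumbency, name recognition or any other real-world factor.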
It’s worth noting that the authors of this study prefer a rarely used electoral system called Condorcet Voting, which uses a different method for counting ranked ballots and has never been used in public elections despite first being proposed by the Marquis de Condorcet in the 18th century. But the authors revealed how divorced from reality their computer simulations were when they reported that their model shows “IRV selects the Condorcet winner in only 60% of elections” (Note: IRV, or ‘instant runoff voting,’ is another name for ranked choice voting). In actual elections, IRV has consistently selected the Condorcet winner 99% of the time, as documented by other studies included in our report. Other researchers, notably Australian political scientist Benjamin Reilly, have found a reduction in polarization with RCV in a number of actual election settings, including in Alaska. Our paper analyzes nine other studies about RCV and polarization, most of which conclude that RCV has a moderating influence, since the winner must be at least a second or other choice of a majority of voters.
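For reference, checking whether a set of ranked ballots has a Condorcet winner (a candidate who beats every rival head-to-head) is straightforward. This is a generic sketch of ours, assuming complete ballots with candidates represented as integers:

```python
def condorcet_winner(ballots, n_candidates):
    """Return the candidate who beats every other candidate in pairwise
    head-to-head comparisons, or None if no such candidate exists.
    Each ballot is a list ranking all candidates, best first."""
    def beats(a, b):
        # a beats b if more ballots rank a above b than b above a
        a_wins = sum(1 for r in ballots if r.index(a) < r.index(b))
        return a_wins > len(ballots) - a_wins

    for c in range(n_candidates):
        if all(beats(c, rival) for rival in range(n_candidates) if rival != c):
            return c
    return None  # a pairwise cycle: no Condorcet winner
```

On the nine-ballot example `[[0,1,2]]*4 + [[1,0,2]]*3 + [[2,1,0]]*2`, candidate 1 beats both rivals head-to-head (5–4 and 7–2) and is also the instant-runoff winner after candidate 2 is eliminated, illustrating how the two counting methods usually agree in practice.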
Yet Atkinson, Foley and Ganz don’t consider any of these studies or analyze why they reached a different conclusion. Despite having 152 footnotes in their paper with dozens of listed sources, they ignored the papers whose conclusions contradict their own. Their report smacks of cheerleading for Condorcet Voting, with information cherry-picked to attack RCV. What this study mainly revealed is the unreliability of mathematical models that are not based on realistic assumptions and sound methodology.
Poor understanding of RCV = faulty research assumptions
While many of the flawed studies used online surveys and mathematical models instead of data from real-world RCV elections, even some of the studies based on actual election results made puzzling assumptions and extrapolations that seemed to indicate the researchers did not really understand how RCV works in the real world, or why voters make some of their choices. Here are two examples of such studies.
2. “Minority Electorates and Ranked Choice Voting,” by Nolan McCarty, 2024. Center for Election Confidence. This widely cited, non-peer-reviewed study analyzed the use of RCV in New York City’s Democratic primary elections in 2021 for mayor and city council, and Alaska’s Top Four primary and general elections in 2022 for the US House of Representatives. McCarty’s study is a good example of how a combination of poor methodology, poor understanding of RCV (and elections in general), and lack of context can combine to produce sloppy work that is poor political “science.” Yet this study has become influential due to being uncritically and repeatedly cited by media outlets, other academics and anti-RCV activists.
McCarty’s study concludes that “electorates with heavy concentrations of ethnic and racial minorities have substantially higher rates of ballot exhaustion,” and that this hurts minority voters (a ballot is “exhausted” when every candidate the voter ranked has been eliminated from contention, so the ballot cannot count in later rounds of the tally). But McCarty’s study did not actually find what it claimed. It reported some races in which racial minority voters were more likely to have exhausted RCV ballots and other races where that was not true. To account for this discrepancy, the researcher invented two new metrics (called “adjusted exhausted ballots” and “potential exhausted ballots”) to discount the races where it was not true. That kind of cherry-picking undermines the study’s overall credibility. Moreover, McCarty’s conclusion ignored the fact that most races are decided without using all of a voter’s rankings. For example, if a voter ranked the winning candidate as her first choice, her lower rankings will never be used as part of the “instant runoff” tally process. Thus, differences in ballot exhaustion rates are rarely a crucial factor in election outcomes.
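As a concrete illustration of the term, here is a minimal sketch of an instant-runoff tally that counts exhausted ballots. The ballots and candidate names are hypothetical examples of ours, not McCarty’s actual computation, and ties are broken arbitrarily:

```python
def irv_exhaustion(ballots):
    """Instant-runoff tally on possibly truncated ballots (lists of
    candidate names, ranked best first). Returns the winner and the
    number of 'exhausted' ballots: ballots whose ranked candidates were
    all eliminated before the final round."""
    remaining = {c for b in ballots for c in b}

    # Eliminate last-place candidates until two finalists remain.
    while len(remaining) > 2:
        counts = {c: 0 for c in remaining}
        for b in ballots:
            for c in b:
                if c in remaining:
                    counts[c] += 1
                    break
        remaining.discard(min(counts, key=counts.get))

    # Final round: count each ballot for its highest-ranked finalist;
    # ballots ranking neither finalist are exhausted.
    final = {c: 0 for c in remaining}
    exhausted = 0
    for b in ballots:
        for c in b:
            if c in remaining:
                final[c] += 1
                break
        else:
            exhausted += 1
    return max(final, key=final.get), exhausted
```

For example, with four ballots ranking `A` then `B`, three ranking only `B`, and two ranking only `C`, candidate `C` is eliminated first, `A` defeats `B` 4–3, and the two `C`-only ballots are exhausted in the final round.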
McCarty also did not acknowledge other studies demonstrating that RCV actually makes more ballots count for minority voters, especially when compared to one-choice plurality elections, in which picking a losing candidate wastes your vote, or to traditional two-round runoffs, in which a voter who fails to return for the second round does not contribute to the final decision. RCV allowed more voters’ ballots to count in New York City because RCV replaced extremely low-turnout “delayed runoff” elections with an “instant runoff,” which resulted in NYC’s highest voter turnout in 30 years. Any number of exhausted ballots in New York City’s “instant runoff” was dwarfed by the number of voters who had their voices heard with RCV, unlike in a plurality election, where picking a losing candidate means your vote is wasted.
Also, like the Atkinson, Foley and Ganz study, McCarty failed to consider other studies of New York’s or Alaska’s RCV elections that came to the opposite conclusion (these are included in our report). He also blatantly cherry-picked from another study we analyzed (Kimball and Anthony, “Voter Participation with Ranked Choice Voting in the United States”), ignoring a key detail that contradicted his own conclusion.
McCarty’s miscues are particularly puzzling because he did not even attempt to integrate the actual election results into his research design: both jurisdictions saw historic increases in their election of racially diverse officeholders in their first RCV elections, with a majority of elected NYC council members being women of color and two-thirds women — one of the most racially and gender diverse city councils in the country. And Alaska elected its first Alaska Native woman to its single US House seat. If RCV and its exhausted ballots were so disadvantageous for racial and ethnic minority voters and candidates, as McCarty claimed, how did those constituencies benefit from such stunningly successful election results?
It is worth noting that McCarty’s research was funded and promoted by a conservative anti-RCV group, the Center for Election Confidence. McCarty has targeted RCV for years, including acting as a paid expert witness for a failed federal case brought in 2020 against RCV in Maine, where the plaintiffs sought a federal ban on RCV based on McCarty’s previous report alleging that RCV resulted in voter disenfranchisement. Fellow Princeton professor Sam Wang thoroughly rebutted McCarty’s report, calling it “erroneous,” suffering “from several weaknesses” and containing “claims and conclusions likely to mislead or be misconstrued.” Federal judge Lance Walker, in his opinion finding against the plaintiffs, was scathing in his criticism of McCarty’s report. Yet McCarty’s latest study continues to be widely cited by the media, used by RCV opponents to spread misinformation and cited by other researchers.
McCarty’s deeply flawed study badly misrepresented what actually happened in those New York City and Alaska elections using RCV. The McCarty study is extensively analyzed in our paper “Shaky Political Science,” starting on page 17 at this link.
3. “Writing the Rules to Rank the Candidates: Examining the Impact of Instant-Runoff Voting on Racial Group Turnout in San Francisco Mayoral Elections,” by Jason McDaniel, 2016. Journal of Urban Affairs. This study about San Francisco’s use of RCV in mayoral elections was poorly designed and researched, and failed to account for key real-world election dynamics and crucial local context, despite the author being a professor at San Francisco State University. Nevertheless, it has been cited by at least 35 other academic studies, including most of those critiqued in our paper. Despite being a textbook example of poor political “science,” its shelf life has been extended endlessly by other credulous political scientists who cite it uncritically.
McDaniel’s study purported to show that RCV causes reduced voter turnout, more voter errors, and a negative impact on “marginal populations,” i.e. racial minorities, mainly as the result of the complexity of RCV asking voters to rank candidates and having to be familiar with multiple candidates and related dynamics. He examined five San Francisco mayoral elections from 1995 to 2011, comparing the earlier non-RCV races (in 1995, 1999 and 2003) with the later two RCV races (2007 and 2011). No other elections were studied, even though San Francisco elects 17 other offices with RCV, including the 11 seats on the Board of Supervisors and six other citywide offices. For unexplained reasons, this study ignored at least 34 other RCV races between 2004 and 2011, many of which were very competitive.
So the only two RCV elections that McDaniel used as his entire data set were the first RCV mayoral election in 2007, in which the winner, popular incumbent Gavin Newsom, won 74% of first choices, and the second in 2011, won by a landslide margin (60–40%) by a recently appointed yet popular incumbent. Inexplicably, McDaniel’s study did not include data from even a single competitive election, for mayor or any other office, as the basis for determining the impact of RCV on voter turnout.
Also inexplicably, McDaniel made no attempt to assess the impact of the real-world conditions that drive voter turnout, including competitive vs. noncompetitive races, races with popular incumbents vs. open seats, odd-year vs. even-year elections, campaign funding and other crucial factors. Based on little meaningful data, McDaniel concluded that RCV lowered turnout in those two “instant runoff” races when compared to the previous three mayoral elections using two-round “delayed” runoffs.
He also found that voter turnout in the 2011 RCV race declined particularly among Black voters, yet he failed to mention that the 2011 mayoral election did not have any Black candidates (while two of the three non-RCV mayoral elections featured the well-known Black mayor Willie Brown). Other inconsistencies and contradictions plague this study, which are included in the more lengthy analysis in our study.
It wasn’t until June 2018, after McDaniel’s study, that San Francisco conducted its first genuinely open and competitive mayoral election using RCV. That contest, which was decided by a margin of one point and was the most competitive mayoral election in several decades, had the second highest turnout in San Francisco history for a local non-November election, and elected its first African American woman mayor, results that thoroughly contradicted the conclusions of McDaniel’s earlier study.
Despite its numerous flaws, McDaniel’s study, like the McCarty study, continues to be cited by other political scientists and a number of media outlets, who are evidently unaware of the many defects that render these papers meaningless.
Active promotion of flawed anti-RCV studies
A number of the flawed anti-RCV studies analyzed in our report were web-hosted and funded by the Electoral Reform Research Group, which is an ad hoc committee headed by New America’s Political Reform Program and joined by American Enterprise Institute, Unite America Institute and other organizations. This research consortium has been led by Lee Drutman, a political scientist and senior fellow with New America’s Political Reform Program. Drutman, once a prominent supporter of RCV, since 2021 has become a consistent and vocal RCV critic as he has switched his reform preference to a method known as fusion voting and to non-RCV forms of proportional representation.
Drutman has frequently used his public platform to promote the discredited research papers featured in the New America series as well as other dubious studies analyzed in our paper. For example, Drutman published a tweet to his 22,000 Twitter/X followers announcing the publication of the first study mentioned above, “Beyond the Spoiler Effect: Can Ranked Choice Voting Solve the Problem of Political Polarization?” by Atkinson, Foley and Ganz. Drutman tweeted, “Sophisticated modeling analysis shows that RCV elections are likely to make extremism worse. The case for RCV as a force for moderation in our highly polarized politics continues to collapse as scholarship grows.” But as our critique of that paper shows, there was nothing sophisticated in the convoluted modeling for their non-peer reviewed study, nor in the conclusions that were riddled with contradictions and clashed greatly with data from actual real-world elections. Their study was too removed from reality to have any value at all.
While Drutman has frequently cited this and other flawed studies as part of his RCV opposition and pro-fusion activism, he has not cited any of the large number of studies and research that have found many positives about RCV, including nearly two dozen such studies analyzed in our report. Instead, Drutman only promotes the shaky, flawed anti-RCV research.
Real political science, motivated by a search for facts that reflect some semblance of the real world, requires a balanced citation of pro and con research. Yet political and social scientists like Drutman, McCarty, McDaniel and others have violated this basic scientific principle.
A breakdown in political “science”
In a real sense, this paper is shining a spotlight on a substantial flaw in the political science research process. Surprisingly, outside of the peer-review process – which most of these anti-RCV papers were never submitted to – there is limited quality control in political and social science circles over what gets published or even which papers are cited in new academic papers. Various websites that publish huge volumes of academic papers, such as Social Science Research Network (SSRN), ArXiv, SocArXiv, OpenReview.net and PeerJ, do not peer review submissions. Instead, they function as repositories where authors can self-publish their own research papers without undergoing a rigorous review process.
While having a venue in which researchers can post early drafts of their research and solicit feedback has value, unfortunately these easy-publishing, blog-like venues have resulted in the spread of questionable studies and scientific results in a number of academic disciplines. Given the lax standards, these publishing platforms sometimes inadvertently contribute to a dilution of academic rigor and credible research. This becomes especially problematic when, for example, SSRN’s publication process is misunderstood by the general public as the equivalent of peer-reviewed publication (we know whereof we speak, since our own paper was “accepted” for publication on SSRN with no review process whatsoever).
Toward better political science scholarship and research
While many of the studies and citations of RCV reviewed in our paper are severely flawed, many other political scientists are generating credible research on RCV using sound methodologies based on real-world election results. We cited a number of them in our report. Indeed, a long list of noteworthy political scientists, scholars and academic experts have expressed their support for ranked choice voting, including Nobel Prize and Johan Skytte Prize in Political Science winners such as Robert Keohane, Arend Lijphart, Amartya Sen, Francis Fukuyama, Jane Mansbridge, Robert Putnam, Rein Taagepera, Robert Shiller, Danielle Allen, Larry Diamond, Richard Pildes, William Galston, Lawrence Lessig, G. Bingham Powell and many others. In addition, a number of reputable academic-led commissions have endorsed ranked choice voting, including the Commission on the Practice of Democratic Citizenship, which recommended RCV for federal elections.
Certainly there is no such thing as a perfect electoral system; all of them have pros and cons, ranked choice voting included. Like many political phenomena, the cause and effect behind voter, candidate or party behavior, especially when it comes to electoral systems, can be hard to tease out. Our hope is that this report’s critique of existing RCV research will lead to much improved scholarship, better media reportage and a greater understanding of ranked choice voting and its impacts.
Please check out the rest of our report at this link.