Abstract
Understanding when and why people accept electoral outcomes is vital to the study of political psychology, mass politics, and democratic governance. Despite its clear theoretical and practical importance, however, most existing research on electoral legitimacy has been cross-sectional or has examined only a narrow range of theoretically relevant factors. In this data brief, we present data from two independent panel studies. Sample 1 completed two waves, before and after the 2020 U.S. Presidential Election (Wave 1: N = 1,079; Wave 2: N = 903). Sample 2 approximated national representativeness and completed four waves, before and after the 2020 U.S. Presidential Election and before and after the 2022 U.S. Congressional Midterm Elections (W1: N = 1,127; W2: N = 769; W3: N = 506; W4: N = 453). Both surveys include a wide range of measures theoretically relevant to perceived electoral illegitimacy and to political attitudes and behavior. These data enable the scientific study of critical questions surrounding perceived electoral legitimacy in American politics.
Background & Summary
Elections are the cornerstone of democratic governance, conferring legitimacy on authorities and fostering trust in political institutions, leaders, and systems. The perceived fairness and transparency of electoral processes are vital to ensuring public acceptance of outcomes and promoting social and political stability1,2. When elections are seen as illegitimate, however, democratic norms and systems may face significant challenges, including increased partisan polarization and conflict, ideological extremism and intolerance, public unrest, and even political violence. Understanding when and why people accept electoral outcomes, and the consequences of that acceptance (or its absence) for civic engagement, social protest, democratic norms, electoral outcomes, and more, is vital to the study of political psychology, mass politics, and democratic governance. In this data brief, we present an original dataset that includes a wide variety of outcome and explanatory measures, uniquely enabling researchers and practitioners to investigate perceived electoral (il)legitimacy in the 2020 U.S. Presidential Election and the 2022 U.S. Midterm Elections. The dataset builds on findings from the literature on election legitimacy while also including many novel political and psychological constructs. We anticipate that these data will complement existing work by researchers and practitioners.
Recent polls indicate that a majority of Republicans believe the results of the 2020 election were illegitimate3. Why did some Americans reject the outcome of this election, and what consequences might that rejection have for political attitudes and behavior? The 2020 U.S. Presidential Election was uniquely contentious and complicated, occurring amid a pandemic, widespread social and political upheaval, pervasive misinformation, and exaggerated allegations of voter fraud4,5,6,7. Conspiracy theories about voter fraud and foreign interference, for example, may have undermined confidence in the process; that these false claims were often amplified by political leaders raises vital questions about the antecedents and consequences of perceived illegitimacy in this electoral context.
Despite its clear theoretical and practical importance, however, much of the existing research on electoral legitimacy has been cross-sectional and focused on a limited set of predictors and outcomes. One particular advantage of panel data is the ability to study pre- and post-election outcomes, such as the effect of being a “winner” or a “loser” in the election on attitudes toward vote count accuracy and trust in government8,9. While important advances in the literature on election (il)legitimacy have used panel data from representative samples to measure pre- and post-election attitudes8,9,10, researchers are largely limited to the questions included in these large representative studies. These limitations in existing panel datasets mean that many important theoretical relationships remain understudied. For example, prior studies have highlighted the influence of partisan identity, conspiratorial thinking, and group prejudice on perceptions of electoral fairness and legitimacy11,12,13,14,15,16. The current dataset allows for rigorous replication of existing insights and novel investigation of the interplay of these factors with a broad range of theoretically important individual differences, psychological constructs, and political beliefs, values, and behaviors over time.
Critically, few existing studies of electoral legitimacy have employed longitudinal designs to examine how perceptions evolve before and after election results are announced, and what consequences these perceptions have for subsequent elections (i.e., the 2022 U.S. Midterm Elections). The current data brief helps address these limitations by making available data from two panel surveys, one with a two-wave design and another with a four-wave design. Many of our measures and constructs are repeated across waves, allowing researchers to investigate stability and change in political beliefs and behaviors as a function of individual differences, psychological constructs, and the outcomes of the 2020 and 2022 elections. Because the same units are observed over time, panel data support statistical techniques such as difference-in-differences and lagged-variable models, which can help establish temporal ordering and mitigate endogeneity concerns. These techniques allow researchers to control for unobserved heterogeneity through fixed or random effects, thereby reducing omitted variable bias. However, because causal claims still require strong assumptions or natural experiments to rule out confounding, we caution against overinterpreting causal relationships in these data.
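For illustration, the following is a minimal sketch of one such lagged-variable specification in R. The file and variable names are hypothetical placeholders (following the “_w1”/“_w2” suffix convention described under Data cleaning); the actual names appear in the Codebook.

```r
dat <- read.csv("forthright_cleaned.csv")  # hypothetical file name

# Post-election perceived fairness regressed on its pre-election lag plus
# pre-election covariates; rows with missing values are dropped by default.
m_lagged <- lm(fairness_w2 ~ fairness_w1 + partisan_strength_w1 +
                 conspiracy_pred_w1, data = dat)
summary(m_lagged)
```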
Thus far, researchers using these data have examined the role of need for chaos, Big Five personality, and conspiratorial thinking in pandemic psychology17,18,19; how collective narcissism predicted perceived illegitimacy of the 2020 election20; the independent influence of political and cognitive sophistication on political belief21; and how support for democratic norms shapes affect toward Black or Blue Lives Matter22. Nonetheless, many other potential relationships and research questions remain to be explored with these data.
In summary, these two independent panel surveys provide a valuable resource for examining the factors shaping perceptions of electoral legitimacy and their implications for democratic societies. In the context of growing polarization, misinformation, and conflict, these data offer an opportunity to better understand the dynamics of electoral legitimacy and its role in shaping political behaviors and democratic societies.
Methods
We collected two samples of voting-age Americans: (1) a sample from Amazon Mechanical Turk (MTurk) and (2) a sample from Forthright. Both surveys were administered online through Qualtrics, and participants were recruited using online methods only. For each sample, we collected data at multiple time points before and after the 2020 and 2022 national U.S. elections (two waves for the MTurk sample and four waves for the Forthright sample). IRB approval was obtained from Stony Brook University (Approval Number: IRB2020-00608) and Lehigh University (Approval Number: 1957520-2) prior to data collection. We obtained informed consent from all participants, who specifically consented to their answers being shared.
Regarding the first sample, although MTurk is a convenience sample and is not representative of any population, it is more diverse than student samples23,24 and remains suitable for studying causal relationships25. Participants on MTurk were recruited using CloudResearch Turk Prime participant targeting; they saw an advertisement for the study and could then choose to enroll. We ran two waves on MTurk. The first wave was conducted between late October and early November 2020, prior to the presidential election (N = 1090). The second wave was conducted after the election, from mid-November until the end of the month (N = 960).
As for the second sample, we used Forthright, an online research panel affiliated with Bovitz, Inc., a market research firm that specializes in building and maintaining participant panels matched to national-level demographics. While not a nationally representative sample, statistical tests show that Forthright holds up well against probability-based samples in approximating national representativeness26. Participants were recruited through the survey firm and randomly selected to participate. We ran four waves. The first wave was fielded between late October and early November 2020, prior to the presidential election (N = 1127), and the second wave ran in mid-November, after the 2020 election (N = 769). The third wave was collected before the 2022 midterm elections (N = 506), and the fourth wave ran after that election until the end of November (N = 453). We include a timeline of when we ran our studies in Fig. 1.
Fig. 1: Timeline of our studies. Wave 1 of both Sample 1 (MTurk) and Sample 2 (Forthright) was fielded prior to the 2020 presidential election, and Wave 2 of both samples after it. Wave 3 of Sample 2 was fielded before the 2022 midterm election, and Wave 4 after it.
Participants
Only English speakers living in the United States and over 18 years of age were recruited to participate in our surveys. After recruitment, participants received a link to complete the survey via Qualtrics. Participants completed the approximately 15-minute survey and received compensation. In the MTurk sample, they earned $2.00 for the first wave and $1.00 for completing the second wave. Participants in the Forthright sample received $5.17 for their initial participation and then $3.14, $5.17, and $3.14 for each subsequent wave, respectively.
Tables 1 and 2 summarize key demographic information across the waves of our two samples: Table 1 reports statistics on the age, gender, race and ethnicity, educational attainment, income, and ideology of respondents in the Forthright sample, and Table 2 reports the same descriptive statistics for respondents in the MTurk sample.
Materials
Our surveys used multiple validated scales and unique measures to assess relevant political and psychological constructs. We used the following scales in Sample 1 (MTurk): system identity threat27; support for legal, illegal, or violent activism22,28; religiosity29; conspiratorial predispositions4; democratic norms12,22,30; Covid-19 and election beliefs31; beliefs in voter fraud and election conspiracies15; fraud and interference beliefs4; perceived electoral fairness31; partisan strength and identity32; learned helplessness22,33; right-wing authoritarianism34; ambivalent sexism35; egalitarianism36,37; immigrant resentment15; bias awareness38; racial resentment39; conformity to masculine norms40; moral foundations41; need for chaos17,28; need for structure42; big five inventory19,43; anti-intellectualism18,44; authoritarianism45; cognitive reflection test46; national identity47; patriotism15; collective narcissism20,48; and intellectual humility49. We also developed and administered novel questions about political interest and engagement, political knowledge, media use, Covid-19 conspiracies18,19,21, ideology, voting intentions and preferences, trust22, vaccine uptake, behaviors related to Covid-1919, outcome fairness and acceptance, satisfaction22, internal and external efficacy22, perceived vulnerability to disease, illusion of explanatory depth, white identity, and loser status. We also included feeling thermometers for Liberals, Conservatives, Republicans, Democrats, Donald Trump, Joe Biden, Mitch McConnell, Nancy Pelosi, QAnon, Antifa, Black Lives Matter, Blue Lives Matter, Black Americans, White Americans, Latino Americans, Asian Americans, Muslims, Christians, Jews, and Atheists.
In the Forthright sample (Sample 2), we included the following measures: system identity threat27; support for political violence50; religiosity29; conspiratorial predispositions4; democratic norms12,22,30; Covid-19 and election beliefs31; beliefs in voter fraud and election conspiracies15; fraud and interference beliefs4; perceived electoral fairness31; partisan strength and identity32; learned helplessness22,33; right-wing authoritarianism34; egalitarianism36,37; immigrant resentment15; bias awareness38; racial resentment39; moral foundations41; big five inventory19,43; anti-intellectualism18,44; authoritarianism45; cognitive reflection test46; national identity47; patriotism15; collective narcissism20,48; support for legal, illegal, or violent activism22,28; and intellectual humility49. We also developed and administered questions about political interest and engagement, political knowledge, media use, Covid-19 conspiracies18,19,21, ideology, voting intentions and preferences, trust22, vaccine uptake, behaviors related to Covid-1919, outcome fairness and acceptance, satisfaction22, internal and external efficacy22, perceived vulnerability to disease, illusion of explanatory depth, white identity, loser status, attitudes toward January 6th, issue positions, and attitudes toward misinformation. We also included feeling thermometers for Liberals, Conservatives, Republicans, Democrats, Donald Trump, Joe Biden, Mitch McConnell, Nancy Pelosi, QAnon, Antifa, Black Lives Matter, Blue Lives Matter, Black Americans, White Americans, Latino Americans, Asian Americans, Muslims, Christians, Jews, and Atheists.
Finally, a wide range of demographic information was collected across both samples, including age, gender, race and ethnicity, income, and educational attainment. We also collected zip codes as geographic indicators in both samples, and for Sample 2 (Forthright) we additionally collected regional indicators (e.g., rural, urban). Full question wording (including response scales and coding) can be found in the Codebook via OSF.
Question blocks
Our goal was to include many types of scales and questions (as listed above) across different waves. For Sample 1, most questions were asked in both waves. Some questions were included only in the first wave, and all participants responded to them (Covid-19 conspiracies, political interest and engagement, political knowledge, media use, and religiosity). We wanted to include many additional scales and questions but were limited by our sample size and survey length. To increase the number of questions captured, we split each sample into two blocks: subjects were randomly assigned to one of the two blocks, and each block contained a different set of questions. Note that we did not have any experimental hypotheses; we were simply maximizing the number of responses we received on various types of questions. In Block 1 of Sample 1, participants responded to the following questions: right-wing authoritarianism, egalitarianism, perceived vulnerability to disease, immigrant resentment, bias awareness, racial resentment, conformity to masculine norms, and ambivalent sexism. In Block 2, participants received a different set of questions: big five inventory, moral foundations, anti-intellectualism, illusion of explanatory depth, authoritarianism, cognitive reflection test, national identity, patriotism, white identity, collective narcissism, intellectual humility, need for chaos, and need for structure. Both randomized blocks were run only in the first wave of the survey. We used the same randomized block method for Sample 2. Here, Block 1 contained questions on right-wing authoritarianism, egalitarianism, perceived vulnerability to disease, immigrant resentment, bias awareness, and racial resentment. Block 2 contained questions on the big five, moral foundations, anti-intellectualism, illusion of explanatory depth, authoritarianism, cognitive reflection test, national identity, patriotism, white identity, and collective narcissism. Waves 3 and 4 of Sample 2 included new questions, but these were asked of the entire sample rather than of randomized blocks.
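Because block-specific measures are structurally missing for respondents assigned to the other block, analysts may wish to flag block membership before computing Wave 1 statistics. The sketch below infers membership from the missingness of block-specific scales; the variable names are hypothetical, and users should consult the Codebook for any dedicated block indicator before relying on this approach.

```r
library(dplyr)

dat <- read.csv("forthright_cleaned.csv")          # hypothetical file name

# Infer Wave 1 block membership from structural missingness of
# block-specific scales (hypothetical variable names).
dat <- dat %>%
  mutate(block_w1 = case_when(
    !is.na(racial_resentment_w1)     ~ "Block 1",  # asked only in Block 1
    !is.na(collective_narcissism_w1) ~ "Block 2",  # asked only in Block 2
    TRUE ~ NA_character_
  ))
table(dat$block_w1, useNA = "ifany")
```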
Data cleaning
Once we received our data via Qualtrics, we anonymized the datasets by removing identifying information such as IP address and location. Each wave of each sample arrived in a separate datafile. Each variable was renamed to reflect its content, and scale items were labeled so that readers can differentiate between the first, second, third, etc. items. Most variables were recoded to range from 0 to 1; coding information for each variable is included in the Codebook. Missing responses were recoded to “NA”. Once each datafile was cleaned, we merged all waves of each sample into one large dataset per sample. Variables are suffixed with “_w1” if they were asked in Wave 1, “_w2” if in Wave 2, and so forth. When a scale comprised multiple items, we created the scale in each wave in which it appeared. For reference, the Codebook also reports the Cronbach’s alpha, mean, and standard deviation of each scale. Within samples, we also ran t-tests to check for differences in our scales across waves, reporting the t-statistic, degrees of freedom, and p-value in the Codebook.
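To make these conventions concrete, the following is a minimal sketch of the recoding and merging steps, using hypothetical file names, raw column names, and a hypothetical participant identifier; the authors’ actual code is in the MTurk Cleaning and Forthright Cleaning scripts on OSF.

```r
library(dplyr)

# Clean a hypothetical Wave 1 item following the conventions above.
w1 <- read.csv("forthright_w1_raw.csv") %>%        # hypothetical file name
  rename(trust_gov_w1 = Q23) %>%                   # rename to reflect content
  mutate(trust_gov_w1 = na_if(trust_gov_w1, -99),  # missing code -> NA
         trust_gov_w1 = (trust_gov_w1 - 1) / 4)    # 1-5 response -> 0-1 range

w2 <- read.csv("forthright_w2_raw.csv")            # cleaned analogously

# Merge waves on a shared participant identifier (hypothetical column name).
merged <- left_join(w1, w2, by = "participant_id")
```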
Data Records
The dataset is available at the Open Science Framework (OSF)51; this section is the primary source of information on the availability and content of the data described.
All materials corresponding to this project can be found on the Open Science Framework (OSF) at https://osf.io/r9w85/. The folder titled Raw Data contains all six data files associated with this project (two MTurk waves and four Forthright waves). All raw data files have been anonymized and are saved as CSV files.
We used RStudio to clean the datasets and run statistical analyses, using the following R packages: tidyverse52, car53, dplyr54, and psych55. The R Scripts folder contains the project’s scripts. First, a general cleaning script for each sample (MTurk Cleaning and Forthright Cleaning) renames variables in an intuitive way, drops missing observations, recodes variables, and creates scales; this code also merges the waves into a combined CSV file for each sample. The next script (Scale Testing) tests the scales created in each sample, computing Cronbach’s alpha, the mean, and the standard deviation for each scale, and runs t-tests to check for differences across waves. The Descriptive Information script computes descriptive statistics on the demographic variables captured (this code was used to generate Tables 1 and 2). Finally, the Attrition Analysis script measures attrition across waves using chi-squared and t-tests.
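As a sketch of the Scale Testing workflow, the snippet below computes Cronbach’s alpha, the mean, and the standard deviation for a hypothetical three-item Wave 1 scale using the psych package; the item names are placeholders.

```r
library(psych)

dat <- read.csv("forthright_cleaned.csv")      # hypothetical file name

# Reliability, mean, and SD for a hypothetical three-item Wave 1 scale.
items_w1 <- dat[, c("sit_1_w1", "sit_2_w1", "sit_3_w1")]
psych::alpha(items_w1)                         # Cronbach's alpha

scale_w1 <- rowMeans(items_w1, na.rm = TRUE)   # composite scale score
mean(scale_w1, na.rm = TRUE)
sd(scale_w1, na.rm = TRUE)
```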
The Cleaned Data folder contains the cleaned data from both samples. The final folder, titled Codebook and Other Files, contains the Codebook, which includes question wording as well as the results of the statistical tests for the scales we generated, and the Attrition Analysis, which reports differences in demographic information across the waves of both samples.
All files in this packet have been cleaned and prepared for immediate data analysis. Data files are saved as CSV files so researchers can use a wide variety of programs for analysis, including Excel, RStudio, and Stata. Because the cleaning and statistical testing files were generated in RStudio, we recommend RStudio for data analysis. We also recommend consulting the Codebook for question wording and for more detailed information on scale reliability and other statistical tests.
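For example, a user working in R can load a cleaned file and locate all Wave 1 variables by their suffix (the file name here is a hypothetical placeholder):

```r
# Read a cleaned CSV and list all Wave 1 variables by their "_w1" suffix.
dat <- read.csv("mturk_cleaned.csv")   # hypothetical file name
names(dat)[grepl("_w1$", names(dat))]
```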
For the sake of subject privacy, we have removed key identifying information from the datasets.
Technical Validation
To validate the data we collected, we ran several tests on both samples. First, we checked the reliability of all scales generated in each wave; we report Cronbach’s alpha for each scale in the Codebook and provide code for this testing in our materials. We also ran t-tests comparing our scales across waves. In Sample 1, we compared scales in Wave 1 to Wave 2. For Sample 2, we ran t-tests between every pair of waves in which a scale was measured. For example, System Identity Threat was captured in all four waves, so we tested for differences from Wave 1 to Wave 2, Wave 1 to Wave 3, Wave 1 to Wave 4, Wave 2 to Wave 3, Wave 2 to Wave 4, and Wave 3 to Wave 4. These results are reported in the Codebook.
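A minimal sketch of one such comparison, assuming hypothetical scale names: t.test() defaults to a Welch two-sample test and reports the t-statistic, degrees of freedom, and p-value.

```r
dat <- read.csv("forthright_cleaned.csv")   # hypothetical file name

# Compare a (hypothetical) scale across Waves 1 and 2.
t.test(dat$sit_scale_w1, dat$sit_scale_w2)
```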
Because we use multiple waves to create panel data, we also assessed attrition across our samples. In Sample 1, 12% of the Wave 1 sample dropped out by Wave 2. We experienced greater attrition across the waves of Sample 2: 29% dropped out from Wave 1 to Wave 2, 34% from Wave 2 to Wave 3, and 10% from Wave 3 to Wave 4. From Wave 1 to Wave 4, Sample 2 decreased by 58%. We ran t-tests and chi-squared tests to examine differences in demographic characteristics across waves; these results are reported in the Attrition Analysis document.
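A sketch of one such attrition check, with hypothetical variable names: flag Wave 1 respondents who returned at Wave 2, then compare a categorical demographic with a chi-squared test and a continuous one with a t-test.

```r
dat <- read.csv("forthright_cleaned.csv")         # hypothetical file name

# Flag Wave 1 respondents who returned at Wave 2 (crude completion proxy).
dat$returned_w2 <- !is.na(dat$sit_scale_w2)

chisq.test(table(dat$gender_w1, dat$returned_w2)) # categorical demographic
t.test(age_w1 ~ returned_w2, data = dat)          # continuous demographic
```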
Code availability
All code is freely available to researchers on OSF (https://osf.io/r9w85/). The repository includes the raw data, R scripts for cleaning and statistical testing, and the cleaned data. We also include a codebook for user guidance.
References
Tyler, T. R. Psychological Perspectives on Legitimacy and Legitimation. Annual Review of Psychology 57, 375–400 (2006).
Anderson, C. J., Blais, A., Bowler, S., Donovan, T. & Listhaug, O. Losers’ Consent: Elections and Democratic Legitimacy. (Oxford University Press, Oxford, New York, 2005).
Montanaro, D. Most Americans trust elections are fair, but sharp divides exist, a new poll finds. NPR (2021).
Edelson, J., Alduncin, A., Krewson, C., Sieja, J. A. & Uscinski, J. E. The Effect of Conspiratorial Thinking and Motivated Reasoning on Belief in Election Fraud. Political Research Quarterly 70, 933–946 (2017).
Gilberstadt, H. Large majority of Americans expect that foreign governments will try to influence the 2020 election. Pew Research Center https://www.pewresearch.org/short-reads/2020/02/12/large-majority-of-americans-expect-that-foreign-governments-will-try-to-influence-the-2020-election/ (2020).
Gómez, V. & Jones, B. As COVID-19 cases increase, most Americans support ‘no excuse’ absentee voting. Pew Research Center https://www.pewresearch.org/short-reads/2020/07/20/as-covid-19-cases-increase-most-americans-support-no-excuse-absentee-voting/ (2020).
House, B. & Dennis, S. T. Trump says undocumented immigrants cost him popular vote. Bloomberg (2017).
Sances, M. W. & Stewart, C. Partisanship and confidence in the vote count: Evidence from U.S. national elections since 2000. Electoral Studies 40, 176–188 (2015).
Anderson, C. J. & LoTempio, A. J. Winning, Losing and Political Trust in America. British Journal of Political Science 32, 335–351 (2002).
Anderson, C. J., Blais, A., Bowler, S., Donovan, T. & Listhaug, O. Losers’ Consent: Elections and Democratic Legitimacy. https://doi.org/10.1093/0199276382.001.0001 (Oxford University Press, 2005).
Appleby, J. & Federico, C. M. The racialization of electoral fairness in the 2008 and 2012 United States presidential elections. Group Processes & Intergroup Relations 21, 979–996 (2018).
Bartels, L. M. Ethnic antagonism erodes Republicans’ commitment to democracy. Proceedings of the National Academy of Sciences 117, 22752–22759 (2020).
Buyuker, B. & Filindra, A. Democracy and “the Other”: Anti-Immigrant Attitudes and White Support for Anti-Democratic Norms. SSRN Scholarly Paper at https://doi.org/10.2139/ssrn.3585387 (2020).
Wilson, D. C. & King-Meadows, T. Perceived electoral malfeasance and resentment over the election of Barack Obama. Electoral Studies 44, 35–45 (2016).
Udani, A. & Kimball, D. C. Immigrant Resentment and Voter Fraud Beliefs in the U.S. Electorate. American Politics Research 46, 402–433 (2018).
Wolak, J. How campaigns promote the legitimacy of elections. Electoral Studies 34, 205–215 (2014).
Alam, R. & Vitriol, J. Reveling in Mayhem: The Need for Chaos in Pandemic Psychology. Journal of Social Issues (in press).
Farhart, C. E., Douglas-Durham, E., Lunz Trujillo, K. & Vitriol, J. A. Chapter Seven - Vax attacks: How conspiracy theory belief undermines vaccine support. in The Politicization of Science (eds. Bolsen, T. & Palm, R.) vol. 188 135–169 (Progress in Molecular Biology and Translational Science, 2022).
Panish, A. R., Ludeke, S. G. & Vitriol, J. A. Big five personality and COVID-19 beliefs, behaviors, and vaccine intentions: The mediating role of political ideology. Social and Personality Psychology Compass 17, e12885 (2023).
Federico, C. M., Farhart, C., Vitriol, J. & Golec de Zavala, A. Collective Narcissism and Perceptions of the (Il)legitimacy of the 2020 US Election. The Forum 20, 37–62 (2022).
Vitriol, J. A., Sandor, J., Vidigal, R. & Farhart, C. On the Independent Roles of Cognitive & Political Sophistication: Variation Across Attitudinal Objects. Applied Cognitive Psychology 37, 319–331 (2023).
Vitriol, J. A., Sandor, J. & Farhart, C. E. Black and Blue: how democratic attitudes shape affect toward Blue or Black Lives Matter. Front. Soc. Psychol. 2 (2024).
Berinsky, A. J. Rumors and Health Care Reform: Experiments in Political Misinformation. British Journal of Political Science 47, 241–262 (2017).
Krupnikov, Y., Nam, H. H. & Style, H. Convenience Samples in Political Science Experiments. in Advances in Experimental Political Science 165–183. https://doi.org/10.1017/9781108777919.012 (Cambridge University Press, 2021).
Druckman, J. N. & Kam, C. D. Students as Experimental Participants: A Defense of the ‘Narrow Data Base’. in Cambridge Handbook of Experimental Political Science (eds. Druckman, J. N., Green, D. P., Kuklinski, J. H. & Lupia, A.) (Cambridge University Press, 2011).
Forthright Access. https://www.forthrightaccess.com/api.
Federico, C. M., Williams, A. L. & Vitriol, J. A. The role of system identity threat in conspiracy theory endorsement. European Journal of Social Psychology 48, 927–938 (2018).
Petersen, M. B., Osmundsen, M. & Arceneaux, K. The “Need for Chaos” and Motivations to Share Hostile Political Rumors. American Political Science Review 117, 1486–1505 (2023).
Klofstad, C. A., Uscinski, J. E., Connolly, J. M. & West, J. P. What drives people to believe in Zika conspiracy theories? Palgrave Commun 5, 1–8 (2019).
LAPOP. AmericasBarometer. (2018).
American National Election Studies. ANES 2020 Time Series Study Full Release. (2021).
Huddy, L., Mason, L. & Aarøe, L. Expressive Partisanship: Campaign Involvement, Political Emotion, and Partisan Identity. American Political Science Review 109, 1–17 (2015).
Quinless, F. W. & Nelson, M. M. Development of a measure of learned helplessness. Nursing Research 37, 11–15 (1988).
Duckitt, J., Bizumic, B., Krauss, S. W. & Heled, E. A Tripartite Approach to Right-Wing Authoritarianism: The Authoritarianism-Conservatism-Traditionalism Model. Political Psychology 31, 685–715 (2010).
Glick, P. & Fiske, S. T. The Ambivalent Sexism Inventory: Differentiating hostile and benevolent sexism. Journal of Personality and Social Psychology 70, 491–512 (1996).
Feldman, S. Structure and Consistency in Public Opinion: the Role of Core Beliefs and Values. American Journal of Political Science 32, 416–440 (1988).
Feldman, S. & Steenbergen, M. R. The Humanitarian Foundation of Public Support for Social Welfare. American Journal of Political Science 45, 658 (2001).
Perry, S. P., Murphy, M. C. & Dovidio, J. F. Modern prejudice: Subtle, but unconscious? The role of Bias Awareness in Whites’ perceptions of personal and others’ biases. Journal of Experimental Social Psychology 61, 64–78 (2015).
Kinder, D. R. & Sanders, L. M. Divided by Color: Racial Politics and Democratic Ideals. (University of Chicago Press, 1996).
Parent, M. & Moradi, B. An Abbreviated Tool for Assessing Conformity to Masculine Norms: Psychometric Properties of the Conformity to Masculine Norms Inventory-46. Psychology of Men & Masculinity 12, 339–353 (2011).
Graham, J. et al. Moral Foundations Questionnaire. https://doi.org/10.1037/t05651-000 (2011).
Thompson, M. M., Naccarato, M. E. & Parker, K. E. Personal Need for Structure Scale. https://doi.org/10.1037/t00912-000 (1989).
John, O. P., Donahue, E. M. & Kentle, R. L. Big Five Inventory. https://doi.org/10.1037/t07550-000 (1991).
Oliver, J. E. & Rahn, W. M. Rise of the Trumpenvolk: Populism in the 2016 Election. The ANNALS of the American Academy of Political and Social Science 667, 189–206 (2016).
Feldman, S. & Stenner, K. Perceived Threat and Authoritarianism. Political Psychology 18, 741–770 (1997).
Tappin, B. M., Pennycook, G. & Rand, D. G. Bayesian or biased? Analytic thinking and political belief updating. Cognition 204, 104375 (2020).
Huddy, L. & Ponte, A. D. National Identity, Pride, and Chauvinism—their Origins and Consequences for Globalization Attitudes. in Liberal Nationalism and Its Critics: Normative and Empirical Questions (eds. Gustavsson, G. & Miller, D.) https://doi.org/10.1093/oso/9780198842545.003.0003 (Oxford University Press, 2019).
Golec de Zavala, A. & Federico, C. M. Collective narcissism and the growth of conspiracy thinking over the course of the 2016 United States presidential election: A longitudinal analysis. European Journal of Social Psychology 48, 1011–1018 (2018).
Porter, T. & Schumann, K. Intellectual humility and openness to the opposing view. Self and Identity 17, 139–162 (2018).
Diamond, L., Drutman, L., Lindberg, T., Kalmoe, N. P. & Mason, L. Americans Increasingly Believe Violence is Justified if the Other Side Wins. Politico (2020).
Alva, D. P., Vitriol, J. A. & Farhart, C. Panel Data on Perceived Electoral Legitimacy using Two Independent Samples. OSF https://doi.org/10.17605/OSF.IO/R9W85 (2024).
Wickham, H. & RStudio. tidyverse: Easily Install and Load the ‘Tidyverse’. (2023).
Fox, J. et al. car: Companion to Applied Regression. (2024).
Wickham, H. et al. dplyr: A Grammar of Data Manipulation. (2023).
Revelle, W. psych: Procedures for Psychological, Psychometric, and Personality Research. (2024).
Acknowledgements
The authors would like to thank the Research Foundation for The State University of New York, the Stony Brook Foundation, Inc., the College of Arts and Sciences at Stony Brook University, the College of Business at Lehigh University, and Carleton College for providing the resources needed to collect the data used in this research.
Author information
Contributions
Daniella P. Alva: Data Cleaning, Data Analysis, and Writing. Joseph Vitriol: Conceptualization, Data Collection, and Writing. Christina Farhart: Conceptualization, Data Collection, and Writing.
Ethics declarations
Competing interests
The authors declare no competing interests.
Additional information
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.
About this article
Cite this article
Alva, D.P., Vitriol, J.A. & Farhart, C. Panel Data on Perceived Electoral Legitimacy using Two Independent Samples. Sci Data 12, 1684 (2025). https://doi.org/10.1038/s41597-025-04980-3
