Abstract
Science is integral to society because it can inform individual, government, corporate, and civil society decision-making on issues such as public health, new technologies or climate change. Yet, public distrust and populist sentiment challenge the relationship between science and society. To help researchers analyse the science-society nexus across different geographical and cultural contexts, we undertook a cross-sectional population survey resulting in a dataset of 71,922 participants in 68 countries. The data were collected between November 2022 and August 2023 as part of the global Many Labs study “Trust in Science and Science-Related Populism” (TISP). The questionnaire contained comprehensive measures for individuals’ trust in scientists, science-related populist attitudes, perceptions of the role of science in society, science media use and communication behaviour, attitudes to climate change and support for environmental policies, personality traits, political and religious views and demographic characteristics. Here, we describe the dataset, survey materials and psychometric properties of key variables. We encourage researchers to use this unique dataset for global comparative analyses on public perceptions of science and its role in society and policy-making.
Background & Summary
Scientific evidence and expertise are fundamental to society. They can inform policy-making, individual decision-making, and public discourse about fundamental challenges to humanity, such as climate change and pandemic response1. Yet to effectively fulfil this role, scientists need both to signal trustworthiness and to be perceived as trustworthy by the public2. Otherwise science will lose legitimacy and thus be limited in its capacity to provide the best available knowledge to society3,4.
Some scholars and pundits, media reports, and empirical studies have concluded that public trust in science is in decline in many countries. They suggest that the epistemic authority of science has been challenged by: politically motivated resentment5,6; concerns about scientists illegitimately intruding in policy-making, public debate, and people’s personal lives7,8; populist claims about academic elites disregarding common sense in favour of allegedly useless scientific knowledge9,10; increased exposure to science-related disinformation and conspiracy theories on social media11,12; and scepticism towards scientific evidence and policy advice on major societal issues like climate change13,14,15. This has sparked concerns about a public “breach of faith with science”16, but robust evidence is largely missing17.
We investigated these concerns with a global, pre-registered, cross-sectional online survey of N = 71,922 participants in k = 68 countries (see Fig. 1; the term “country” in this article refers to both sovereign states and territories not recognised as such). The survey measured individuals’ (1) trust in science and scientists, (2) science-related populist attitudes, (3) perceptions of the role of science in society, policy-making, and daily life, (4) science-related media use and communication behaviour, (5) attitudes to climate change and support for environmental policies, (6) personality traits, (7) political and religious views and (8) demographic characteristics (see Fig. 2a–d for an overview). In this article, we present the dataset, available in a dedicated repository hosted by the Open Science Framework (OSF) at https://osf.io/5c3qd.
The data were collected between November 2022 and August 2023 as part of the TISP Many Labs project (“Trust in Science and Science-Related Populism”). TISP is an international, multidisciplinary consortium of 241 researchers from more than 170 institutions across all continents. Researchers conducted a pre-tested, pre-registered online survey with 88 post-hoc weighted quota samples in 68 countries, using the same questionnaire translated into 37 languages. The countries cover all inhabited continents, include populations beyond Western, Educated, Industrialised, Rich, and Democratic (WEIRD) societies and represent 31% of all nations worldwide that jointly make up 79% of the global population.
The TISP dataset is a unique resource for global comparative analyses of individual perceptions of science and its role in society and policy-making, science-related media use and communication behaviour, as well as public attitudes to climate change and support for environmental policies. First, the TISP survey provides the first global data on public opinion and communication about science after the COVID-19 pandemic, which had notable and potentially persistent effects on how individuals view science and engage with science-related information18,19,20. Second, it contains well-tested survey scales and comprehensive item inventories for constructs that have often previously been measured only with single items despite their multidimensional structures21,22 or have not been measured in global surveys at all, including trust in scientists23 and science-related populist attitudes24 as well as relevant correlates like outspokenness25 and social dominance orientation26. Third, the TISP dataset includes data from non-WEIRD countries, which have been underrepresented in social science research despite distinctive local contexts that can affect how people think and communicate about science27. The dataset thus offers a valuable opportunity to address an important limitation of extant research: assumptions about public perceptions of science in countries beyond the ‘Global North’ are prone to wrongful generalisations from WEIRD to non-WEIRD countries. For example, studies in WEIRD countries have suggested that trust in science and religiosity are negatively correlated28, but investigations of non-WEIRD countries – where Muslim rather than Christian faith may determine religiosity – have shown the opposite29. Fourth, the TISP survey accounts for regional and cultural specificities, as data collection was mostly led or advised by local collaborators in order to avoid “parachute science” practices30.
The TISP dataset allows systematic assessments of public perceptions of science and their predictors and outcomes at a global scale. Cologna et al.31 as well as an online data visualisation dashboard (https://tisp.shinyapps.io/TISP) present such assessments. Yet, they focus on public trust in science and attitudes towards scientists’ role in society and policy-making, leaving many further uses of the TISP dataset unexplored, such as analyses of science communication behaviour and climate change attitudes, qualitative analyses of responses to open-ended questions and analyses of single countries.
By publishing the TISP dataset and supplementary materials, we seek to promote its Findability, increase its Accessibility to researchers within and outside academia, enable its Interoperability across different use cases, and foster its Reusability (FAIR)32. This will promote an Open Science culture that equally benefits Western and non-Western scholars33 and offer a complementary resource to similar datasets presented in this journal34 or elsewhere35. We also welcome educators to integrate it into under- and postgraduate teaching36 and invite researchers across and beyond the social sciences to use it for original and replication studies. These studies will provide further evidence on the relationship between science and society – both across multiple and within single countries. Such evidence can facilitate recommendations for policy-makers, educators, science communication practitioners, and other stakeholders on how to address societal challenges such as science scepticism and climate change.
Methods
This section explains in detail how the TISP dataset was collected and pre-processed prior to publication. A few of these explanations are also included in other publications of the TISP project31, as required by the respective publication outlets. However, the current article presents the most comprehensive description of the methodological procedures underlying the collection of the TISP dataset.
Ethical compliance
We submitted the study to the Institutional Review Board (IRB) at Harvard University. It received ethical approval from the Area Committee on the Use of Human Subjects at Harvard University in August 2022, which declared it exempt from full IRB review (protocol #IRB22-1046, see https://osf.io/dc5g7). A modified IRB application, which included the full list of countries to be surveyed, was also considered exempt from full IRB review in November 2022 (protocol #IRB22-1046). Moreover, all co-authors ensured that the survey was reviewed by their home institution’s IRB where such review was required, and that it was approved or declared exempt from full review. They complied with local ethical norms and regulations in the countries where the data were collected (see Supplementary Table 1 for an overview). Informed consent was obtained from all participants before taking the survey.
Pre-registration
We sought to increase the reproducibility and transparency of our study in response to recent calls for a “credibility revolution” within and beyond the social and behavioural sciences37. Hence, we followed best Open Science practices and pre-registered all methodological procedures underlying the TISP project at the OSF on 15th November 2022, i.e. prior to collecting data38. The pre-registration (https://osf.io/9ksrj) employed the most comprehensive OSF template, developed by Bowman et al.39, and describes the study design, data collection procedures, variables and sample size, which was rationalised through simulation-based a-priori power analyses40,41. This pre-registration refers to the main TISP publication31; we submitted three further pre-registrations for subsequent publications. The methodological procedures underlying the collection of the TISP dataset can be found in its Design Plan, Sampling Plan and Variables sections.
We deviated from the pre-registered procedures as follows: (1) We exceeded the overall target sample size (N = 62,000) as well as the target sample size for some countries (e.g., Germany) thanks to unexpected additional financial resources. We did not reach the target sample size in six countries (Albania, Bangladesh, Bulgaria, Ethiopia, Romania, Uruguay) because local survey panels were too small to recruit enough respondents in all quota groups. (2) The TISP survey covered six countries not mentioned in the pre-registration (Botswana, Cameroon, Côte d’Ivoire, Egypt, Israel, Uganda) as additional collaborators joined the TISP consortium after submitting the pre-registration. For unforeseen reasons, such as lack of funding, we could not collect data as planned in five countries (Honduras, Iran, Nepal, Tanzania, Thailand); nevertheless, the final count of k = 68 countries exceeded the pre-registered number. (3) In order to reach our target sample size and accommodate difficulties with obtaining IRB approval, translating and programming the survey or reaching quota goals in single countries, we extended the data collection period beyond the time span indicated in the preregistration, i.e. until August 2023. (4) We had to open quotas in 13 countries with very skewed population distributions for age (e.g., few citizens aged 60 + years) to reach target sample sizes (Albania, Bangladesh, Bolivia, Botswana, Cameroon, Côte d’Ivoire, Ethiopia, Ghana, Indonesia, Kenya, Nicaragua, Uganda, Uruguay). (5) When computing the post-stratification weights via iterative post-stratification (“raking”), we collapsed adjacent age and education strata in single countries. This was because some age and education strata were empty or sparsely populated in several countries, which makes raking impossible or results in extreme weights when applied to data with sparsely populated strata (see Data pre-processing section).
Participants
The TISP dataset contains complete records of N = 71,922 participants from 88 samples across k = 68 countries. Overall, we collected a total of N = 72,135 complete responses but had to delete 213 records from duplicate respondents. Figure 1 and Table 1 show overviews of valid sample sizes in each country.
The data cover more than a fourth of countries across all inhabited world regions, apart from Sub-Saharan Africa and the Middle East and North Africa, where coverage is lower (21% and 14% respectively). The countries represent 42% of all high-income, 32% of all upper-middle-income, 26% of all lower-middle-income, and 11% of all low-income countries worldwide (according to the World Bank classification42).
In most countries, participants were recruited from online panels by the market research company Bilendi & respondi and their partners. Working with one market research company allowed us to make sure that the same participants were not sampled twice in countries with multiple samples. Convenience samples were not accepted. In countries not covered by Bilendi & respondi, we worked with other data providers (see Table 1).
Participants received vouchers or credit points for finishing the full survey, which they could then redeem or transfer into money. To complete the survey, they had to (1) be at least 18 years old, (2) agree with the terms and conditions of the consent form, (3) belong to a stratum whose quota target had not been met, (4) pass a first attention check of writing “213” into a text box, and (5) pass a second attention check of selecting “strongly disagree” for an extra item in a scale of science-related populist attitudes43.
Procedure
The surveys used crossed quotas for age × gender with balanced target distributions. The age quota had five bins: 20% 18-29 years, 20% 30-39 years, 20% 40-49 years, 20% 50-59 years, 20% 60 years and older. The gender quota had two bins: 50% male, 50% female. It did not include other genders since available population data indicate substantial country differences in how many people identify with, and are willing to disclose, genders other than male or female. Hence, participants who “prefer to self-describe” or “prefer not to say” their gender were not subject to quota requirements (see Measures subsection).
The surveys were programmed with the survey software Qualtrics. The .qsf file of the core survey is available at https://osf.io/qd6f3. All data were collected in online surveys, with the exception of the Democratic Republic of the Congo, where trained interviewers conducted face-to-face interviews and recorded responses in Qualtrics, as this was the only data collection solution available from Bilendi & respondi.
The project leads prepared several template files, guides and tutorials, including the TISP guidebook; manuals for data collection and the submission of country datasets to a secure, non-commercial cloud storage service; a survey template file (.qsf format) to be imported into Qualtrics; and materials for IRB applications. Moreover, the project leads assisted some collaborators in programming the survey with Qualtrics by hosting video-call workshops. These measures increased the quality, validity and comparability of the data across countries.
Data were collected between 30th November 2022 and 27th August 2023 (see Fig. 3 for an overview of survey periods across countries). The median completion time was 18 minutes (10% winsorised M = 21 min, 10% winsorised SD = 11 min, MAD = 10 min, interquartile range = 14 min).
Measures
The questionnaire contained 111 variables (see Fig. 2). Data from a few countries are missing some variables and items due to oversight on the part of local collaborators (see Supplementary Table 2 for an overview). However, this pertains only to a small number of variables in eight countries and therefore only marginally impacts the TISP dataset.
The complete questionnaires in all 37 languages and the English core questionnaire are available at OSF: https://osf.io/sujpn. We recommend that users of the TISP dataset refer to the core questionnaire for the labels and codes of variables, items and response options, because a few local questionnaires contained errors. For example, some collaborators used wrong variable and item labels in the local datasets or assigned wrong codes to the response options. However, these errors only concerned the programming back-end of the survey and did not affect questionnaire texts; hence they did not compromise participants’ understanding of the questions. The errors were corrected when preparing the final dataset, but remain in the Qualtrics exports of the original local questionnaires.
The core questionnaire contained the components described in the following (see Fig. 2 for all questions and response options). Participants were presented with these components in the order in which they are explained below, but the order of questions and items of multi-item scales was randomised. Collaborators were allowed to add further measures at the end of the questionnaire in countries where they collected data. Response data for these additional measures are not included in the dataset presented in this paper.
Informed consent
Participants were asked to carefully read a consent form (approved under IRB protocol #IRB22-1046 at Harvard University), which included general information about the study and the anonymity of the data.
Demographic data I
Participants who agreed to participate in the study indicated their gender (female, male, prefer to self-describe, prefer not to say), age (years) and education (did not attend school, primary education, secondary education, tertiary education).
Attention check I
Participants were asked to write the number “213” into a text box. Those who failed the attention check were directed to the end of the survey. See the Technical Validation section for exclusion totals by country and overall.
Definition of science and scientists
Participants were presented with a definition of science and scientists: When we say “science”, we mean the understanding we have about the world from observation and testing. When we say “scientists”, we mean people who study nature, medicine, physics, economics, history, and psychology, among other things. This definition was based on the Wellcome Global Monitor35. We added it because in-depth interviews conducted by the Monitor suggested that including a definition improves the reliability of cross-country comparisons.
Exposure to information about science in news media
Participants were asked how often (never – once or more per day) they had come across information about science in four types of news media in the past twelve months: news articles in printed newspapers or magazines; news shows or documentaries on TV or radio; news articles on news websites or in news apps; videos or podcasts on news websites or in news apps.
Exposure to information about science in fictional media
Participants were asked how often (never – once or more per day) they had come across information about science in fictional films or TV series and in fictional books, comics, etc. in the past twelve months.
Exposure to information about science in social media and instant messaging apps
Participants were asked how often (never – once or more per day) they had come across information about science on social media and in instant messaging conversations with friends or family in the past twelve months.
Exposure to information about science in offline settings
Participants were asked how often (never – once or more per day) they had come across information about science in museums, zoos or public talks and in conversations with friends or family outside the Internet and messaging apps in the past twelve months.
Communicating with others about science
Participants were asked how often (never – once or more per day) they had communicated about science in four different ways in the past twelve months: having conversations with friends, family, or co-workers about scientific issues; chatting in messaging apps about scientific issues; sharing or commenting on social media posts about scientific issues; attending public rallies or protests related to scientific issues.
Open-ended questions on beneficiaries of science and reasons to trust scientists
Participants were randomly assigned to one of two open-ended questions. One question asked participants who they think benefits the most from science and why. The second question asked about their opinion on what makes a scientist trustworthy.
Perceived benefits of science
Participants were asked how much they believe that scientific research benefits people like themselves in their country (not at all – very strongly) and which world region benefits the most and the least from the work that scientists do (Africa, Asia, Australia and Oceania, Europe, Latin America, North America).
Desired and perceived goals of science
Participants were asked how much scientists should prioritise tackling four goals (very low priority – very high priority) and how strongly they believe that science aims to tackle these goals (not at all – very strongly): improve public health; solve energy problems; reduce poverty; develop defence and military technology.
Normative perceptions of science and society
Participants indicated their agreement (strongly disagree – strongly agree) with six statements about expectations towards the role of science in politics and society, e.g. “Scientists should be more involved in the policy-making process”. Five of these statements were adopted from Cologna et al.44.
Willingness to be vulnerable to scientists
We used three items to measure participants’ willingness to be vulnerable to scientific guidance (not at all – very strongly), e.g. when making lifestyle choices related to science. Willingness to be vulnerable to others has been conceptualised as a measure of behavioural trust because it reflects the ceding of authority23.
Perceived trustworthiness of scientists
Trustworthiness of scientists was assessed with twelve questions that are based on Besley et al.23 and cover four essential dimensions of trust in scientists: competence, integrity, benevolence and openness. The questions used semantic differentials ranging from very inexpert (very dishonest, not concerned about people’s well-being, not open to feedback etc.) to very expert (very honest, very concerned about people’s well-being, very open to feedback etc.; see Fig. 2). Information on the psychometric properties of the trustworthiness scale, such as its internal consistency, dimensional structure, measurement invariance and convergent validity, can be found in the Technical Validation section.
We preferred a multidimensional measure of trust in scientists over unidimensional or single-item measures to capture the multiple conceptual components of trust in science22. We opted for the four-dimensional approach of Besley et al.23 instead of three-dimensional trustworthiness measures like the Muenster Epistemic Trustworthiness Inventory (METI)45, because such measures lack the openness dimension. Being perceived as open to feedback, willing to be transparent, and considerate of other views are important for scientists in modern societies, where scholars are increasingly expected to be receptive to public demands and engage in dialogical science communication21.
Trust in scientific methods
Participants indicated how much they agreed that scientific research methods are the best way to find out if something is true or false (strongly disagree – strongly agree)46.
Confidence in scientists
Participants were asked how much confidence they have that scientists act in the best interests of the public (no confidence at all – a great deal of confidence)47.
Outspokenness about science
We used three items to measure how outspoken participants are about scientific issues, e.g., “I will share my opinions about scientific issues, regardless of what others think of them” (strongly disagree – strongly agree). These were based on McKeever et al.25 but reworded so that they referred to scientific issues.
Science-related populist attitudes
Science-related populist attitudes were assessed with the SciPop Scale24, which measures to what extent individuals believe that scientists represent a corrupt academic elite that allegedly ignores the common sense of ‘ordinary people’9. The SciPop Scale asks for the level of agreement with eight statements that capture the four conceptual dimensions of science-related populist attitudes, i.e. positive conceptions of an ordinary people (“Ordinary people have in common that they trust their common sense in everyday life” and “Ordinary people are of good and honest character”), negative conceptions of an academic elite (“Scientists are only interested in their own advantage” and “Scientists are in cahoots with politicians and businesses”), demands for decision-making sovereignty (“Ordinary people should have influence on the work of scientists” and “Ordinary people should be involved in decisions about the topics scientists research”) and demands for truth-speaking sovereignty (“Ordinary people should trust their life experience more than the recommendations of scientists” and “Our society should rely more on common sense than on scientific studies”) on 5-point Likert scales (strongly disagree – strongly agree). Information on the psychometric properties and measurement performance of the SciPop Scale in the TISP data can be found in the Technical Validation section.
Attention check II
We integrated a second attention check into the SciPop Scale. It asked participants to select the response option “strongly disagree”. Participants who did not select “strongly disagree” were directed to the end of the survey. See Technical Validation section for exclusion totals.
Social dominance orientation
To assess social dominance orientation (SDO), we asked participants how much they oppose or favour four statements adopted from Pratto et al.26, e.g. “In setting priorities, we must consider all groups” (extremely opposed – extremely favour).
Trust in climate scientists
Participants were asked how much they trust scientists in their country who work on climate change (not at all – very strongly).
Emotions about climate change
Participants reported to what extent climate change makes them feel nine emotions: helpless; anxious; optimistic; angry; guilty; ashamed; depressed; pessimistic; indifferent (not at all – very strongly). Most of the nine items were based on established measures for climate change emotions, such as those developed by Hogg et al.48 and Searle and Gow49.
Perceptions of government action on climate change
Following Hickman et al.50, participants indicated their level of agreement with seven statements about government action on climate change, e.g. “My government is doing enough to avoid climate change” (strongly disagree – strongly agree).
Support for environmental policies
Participants indicated how much they support five environmental policies: raise carbon taxes on gas and fossil fuels or coal; expand infrastructure for public transportation; increase the use of sustainable energy such as wind and solar energy; protect forested and land areas; increase taxes on carbon intense foods (not at all – very much, not applicable).
Perceptions of extreme weather events
Participants indicated to what extent they believe that climate change has increased the impact of six weather events over the last decades: floods; heatwaves; heavy storms; wildfires; heavy rain; droughts (not at all – very much). They also indicated to what extent they expect that climate change will increase the impact of these events in the future (not at all – very much).
Demographic data II and political and religious views
Participants indicated their household’s annual net income (in local currency), their political orientation on the liberal-conservative spectrum (strongly liberal – strongly conservative, I don’t know) and on the left-right spectrum (strongly left-leaning – strongly right-leaning, I don’t know), as well as their religiosity (not religious at all – very strongly religious), and whether they live in a rural or urban area (rural, urban).
Translations
The questionnaire was prepared in 37 languages. The core questionnaire was developed in English and was used in countries where English is a widely spoken language. In other countries, the questionnaire was translated into local languages and dialects: Albanian, Egyptian Arabic, Modern Arabic, Standard Arabic, Bengali, Bulgarian, Czech, Danish, Dutch, Filipino, Finnish, French, Georgian, German, Greek, Hebrew, Hungarian, Indonesian, Italian, Japanese, Kazakh, Korean, Mandarin (simplified), Mandarin (traditional), Norwegian, Polish, Portuguese, Romanian, Russian, Serbian, Slovak, Slovenian, Spanish, Swedish, Turkish and Ukrainian (see Table 1 for an overview). The survey was usually conducted in a widely spoken language, and in some multilingual countries such as Switzerland, respondents could choose between different national languages.
Most translations were done by researchers from the countries where the surveys were conducted. This allowed us to account for local specificities, such as the Japanese custom to indicate income in “man-Yen”, i.e. in ten thousands of Yen. Collaborators were instructed to ask for permission from the project leads before making any adjustments that could potentially affect comparability across countries. More substantial changes – in particular, the use of gender-neutral language instead of masculine (pro)nouns in countries like Germany – also had to be approved by the project leads.
To maintain the accuracy and consistency of translations, many TISP collaborators cross-checked translations among each other, carried out back-translations, consulted external experts, used validated existing translations when available (e.g., of the SciPop Scale24) and worked together to coordinate translations of questionnaires that were used in multiple countries (e.g., the German translation was used in Germany, Switzerland and Austria). Collaborators were advised to apply the highest standards when preparing the translations, such as back-translations by independent researchers. However, the project leads did not require them to employ external back-translations in order to facilitate the project progress and accommodate limited budgets.
These measures enabled us to achieve as much semantic invariance as possible across different translations. However, there are still cross-cultural differences in the meaning of key terms like “science”. For example, the Polish translation “nauka” also means learning, the German translation “Wissenschaft” also includes the humanities, and the Japanese translation “科学” may also be associated with technology and engineering. To mitigate these differences, we placed a definition of the terms “science” and “scientists” at the beginning of the questionnaire. It paraphrased the English meaning of the term, which includes the natural sciences but excludes the arts and humanities (see Measures section). We also gave participants examples for “scientific issues” (climate change, vaccination, nutrition, new technologies) and “public rallies or protests related to scientific issues” (COVID-19 protests, Fridays for Future demonstrations, March for Science) to facilitate a common understanding of these terms.
Data pre-processing
This section describes how we pre-processed the TISP data to obtain a cleaned dataset without weights (file ds_main in the 01_data/survey-data folder of the OSF repository) and the analysis-ready dataset including post-stratification weights (ds_final in the same folder)51. We share both these datasets as well as the raw data (ds_full in the same folder), as explained in the Data Records section. Figure 4 presents a flow chart visualising the pre-processing steps.
Merging and exclusion of non-completes
All research groups of the TISP consortium submitted the collected data to the project leads, including data from participants who did not finish the survey. The final TISP dataset was prepared in the following steps. First, we merged all 88 local datasets into a single dataset (ds_full, see Data Records section). We then excluded the 94,966 respondents who did not complete the survey because they cancelled participation, were screened out because their gender or age quota had already been met, or did not pass one of the two attention checks.
Exclusion of duplicate respondents
Second, we excluded 213 participants who completed the survey more than once despite countermeasures (e.g., IP address checks). We identified these participants by their panel IDs, which they had been assigned by the survey companies when entering the survey, retained only the first complete record for each duplicate respondent and deleted all subsequent records.
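For illustration, this step can be reproduced with a few lines of base R. The sketch below assumes hypothetical column names (panel_id for the respondent ID assigned by the survey company, start_date for the survey entry time); the shared pre-processing code (01_setup.R) is the authoritative implementation.

```r
# Keep only the first complete record per panel ID; column names are
# assumptions, not necessarily those used in the published dataset.
ds <- ds[order(ds$start_date), ]      # earliest complete record first
ds <- ds[!duplicated(ds$panel_id), ]  # drop all subsequent duplicates
```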
Outlier exclusion
Third, we removed extreme outlier values for age and household income. Age outliers were defined as values less than 18 and more than 100. Income outliers were defined as values that were smaller than zero (implausible), equal to zero (forbids logarithmic transformation as log 0 is undefined, see Variable transformations section) or outside 5 × the interquartile range of the log-transformed income distribution within each country after exclusion of values smaller than zero or equal to zero (which is much more conservative than established outlier definitions52 and affects, for example, only highly implausible values of well over 1 billion USD in some countries). This led to the removal of the age values of 8 participants and the removal of the income values of 2,457 participants (1,365 participants indicated income values equal to or less than 0; and 1,092 participants indicated income values outside 5 × the interquartile range). Users who prefer other outlier exclusion criteria or no exclusion at all can adjust the R code to their preferences (file 01_setup.R) and run it on the raw dataset (ds_full).
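For illustration, the sketch below implements one plausible reading of the income rule (fences at 5 × IQR around the quartiles of the log-transformed distribution, computed within each country) under assumed column names income_usd and country; the shared file 01_setup.R contains the authoritative implementation.

```r
# Remove implausible and extreme income values; column names are assumptions.
flag_income <- function(x) {
  x[!is.na(x) & x <= 0] <- NA                      # implausible or not log-transformable
  lx <- log(x)
  q  <- quantile(lx, c(0.25, 0.75), na.rm = TRUE)  # quartiles of log income
  fence <- 5 * (q[2] - q[1])                       # 5 x interquartile range
  x[!is.na(lx) & (lx < q[1] - fence | lx > q[2] + fence)] <- NA
  x
}
ds$income_usd <- ave(ds$income_usd, ds$country, FUN = flag_income)  # per country
```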
Variable transformations
Fourth, we transformed participants’ annual household income. We converted all values from local currencies to U.S. dollars, using the exchange rates of the day the data were collected. Because almost all countries’ data followed a Pareto distribution, we log-transformed the converted income values, which is beneficial to the robustness of linear regressions that users of the TISP dataset might want to apply53. Both the original and transformed income data are contained in the pre-processed datasets (ds_main and ds_final, see Data Records section)51.
Post-stratification weights
Fifth, we used the R package survey (v4.4-2)54 to compute post-stratification weights for the analysis-ready dataset (ds_final). These ensure that statistical analyses with the TISP data will estimate parameters that are representative for target populations in terms of gender, age and education and have precise standard errors (SEs). We used iterative post-stratification55 known as “raking” to compute three kinds of weights, i.e. (1) post-stratification weights at country level, (2) post-stratification weights at global level and (3) rescaled post-stratification weights for multilevel analyses (see Data Records section for information on when to use which weight).
We first stratified each country sample by gender (female/male), age groups (18–29/30–39/40–49/50–59/60+ years) and education levels (none or primary education/secondary education/tertiary education). We originally planned to distinguish a no education and a primary education stratum. However, we had to collapse these into a none or primary education stratum, because there were several countries without respondents with no education, making post-stratification impossible. This was a necessary deviation from the preregistration.
We then used raking to match gender, age and education distributions of all country samples to each country’s population margins. Population margins for gender and age were retrieved from the World Population Prospects 2022 of the United Nations56 (https://population.un.org/wpp/Download/Files/5_Archive/WPP2022-Excel-files.zip). Population margins for education were retrieved from the 2021 Barro-Lee dataset57,58, which contains data on educational attainment for all countries included in the TISP project except Georgia, Ethiopia and Nigeria (https://barrolee.github.io/BarroLeeDataSet/BLData/BL_v3_MF1564.xls). For Georgia, we used 2019 data from the database of the United Nations Economic Commission for Europe59 (https://w3.unece.org/PXWeb2015/sq/3290abae-0120-418f-a681-132d4da8f088). For Ethiopia and Nigeria, we used 2011 and 2006 data from the UNESCO Institute for Statistics60 (https://uis.unesco.org/sites/default/files/documents/bdds/022024/SDG.zip).
Some age and education strata were empty or sparsely populated in several countries, because collaborators had to relax age quotas or oversampled individuals with tertiary education to reach their target sample size. However, raking is not feasible with empty strata and results in extreme weights when applied to data with sparsely populated strata. Therefore, we collapsed empty or sparsely populated age and education strata with adjacent strata in cases where a stratum contained fewer than 5% of respondents within a country.
We had to exclude 2,388 participants before raking: This was either because they had missing values for gender, age and education (raking requires participant data for all post-stratification variables) or because they identified with genders other than female or male (the World Population Prospects 2022 do not contain population margins for them). The ds_final dataset including the raked weights therefore contains only N = 69,534 participants, whereas the ds_main dataset without weights retains participants for whom raking was not possible and thus contains the complete valid sample of N = 71,922.
The raking procedures yielded the (1) post-stratification weights at country level. Next, we computed sample size weights for each country, which adjust for differences in sample size across countries, and multiplied them by the post-stratification weights at country level to obtain the (2) post-stratification weights at global level. For weighted multilevel analyses with R’s lme4 package61, we prepared (3) rescaled post-stratification weights created with the rescale_weights() function of the datawizard package (v0.10.0)62, which implements an algorithm proposed by Asparouhov63 and Carle64. For more details, see the R code shared with the dataset.
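To illustrate the weighting workflow, the sketch below rakes a single country sample with survey::rake(). The data frame ds_cty, the factor names and the margin counts are placeholders, not the actual TISP variables or UN/Barro-Lee figures; the shared R code implements the full procedure for all 88 samples, including the collapsing of sparse strata.

```r
library(survey)

# Unweighted design for one country sample (assumed factor names).
des <- svydesign(ids = ~1, data = ds_cty)

# Population margins as data frames with a Freq column (placeholder counts).
m_gender <- data.frame(gender = c("female", "male"), Freq = c(3.5e6, 3.4e6))
m_age    <- data.frame(age_group = c("18-29", "30-39", "40-49", "50-59", "60+"),
                       Freq = c(1.2e6, 1.3e6, 1.3e6, 1.2e6, 1.9e6))
m_edu    <- data.frame(edu = c("none/primary", "secondary", "tertiary"),
                       Freq = c(0.8e6, 3.6e6, 2.5e6))

# Iterative post-stratification ("raking") to all three margins.
raked <- rake(des, sample.margins = list(~gender, ~age_group, ~edu),
              population.margins = list(m_gender, m_age, m_edu))

# Country-level weights, normalised to a mean of 1 (one common convention).
ds_cty$WEIGHT_CNTRY <- weights(raked) / mean(weights(raked))
```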
Combining post-hoc weighting with balanced quota sampling has several advantages: The balanced quotas help collect enough data for underrepresented and hard-to-reach participants (e.g., 18-29 y/o men in Switzerland or 50+ y/o women in Ghana). This benefits statistical analyses, whose robustness may suffer when applied to sparsely sampled groups65. Post-hoc weighting spared us the effort of imposing representative quotas and allowed us to correct sample distributions even for non-quoted demographic characteristics like education. This is important from a validity standpoint (we compensated to some degree for the fact that participant panels in some countries like India were overpopulated by higher-educated individuals, see Supplementary Table 4) and a budget perspective (imposing education quotas would have increased the duration and costs of data collection)66. However, these advantages come at the cost of some drawbacks, i.e. (1) exclusion of participants for whom post-stratification data are not available; (2) a small number of cases with large weights in a few countries like Nigeria; (3) reduced precision in countries where quota targets had to be relaxed or where adjacent strata needed to be collapsed65,66,67,68. Other data collection procedures, such as probability sampling, would have compensated for some of these limitations, yet they have other disadvantages, such as high costs and implementation difficulties at the global scale of the TISP project68.
Sample characteristics
The cleaned dataset contains 71,922 participants from 68 countries (ds_main, see Data Records section). Table 2 shows the characteristics of the unweighted and the weighted global samples. For sample characteristics across countries, see Tables 3–5 (weighted) and Supplementary Tables 3–5 (unweighted).
Data Records
The TISP dataset is available at a dedicated OSF repository: https://osf.io/5c3qd51. The repository includes a wiki with detailed instructions for users and contains the following folders:
- 01_data includes three versions of the TISP dataset and respondent ID data for duplicate checks (./survey-data), demographic data of target populations for computing the post-stratification weights (./population-data) and conversion rates for transforming local currencies to USD (./currency-data).
- 02_code includes R code for replicating the data pre-processing procedures and the validation analyses (see Technical Validation section).
- 03_models includes pre-computed lavaan models69 used in the validation analyses and a svydesign object54 of the analysis-ready dataset, all in .rds format.
- 04_figures includes all figures in high resolution.
- 05_survey-materials includes all survey materials, i.e. the questionnaires, guides, manuals and templates.
- 06_irb-documents includes the official documents certifying ethical approval from the Area Committee on the Use of Human Subjects at Harvard University as well as materials for collaborators in case they needed to seek IRB approval.
The TISP dataset has already been used in other studies. For example, Cologna et al.31 used it for a global analysis of public trust in scientists. However, they included only a small subset of variables, whereas the TISP dataset contains many more measures. They also conducted comprehensive descriptive and multivariate analyses to test pre-registered research questions and hypotheses, which are far beyond the scope of the current article; here, we only present an overview of the sample characteristics (see Methods section) and the psychometric properties of select measures (see Technical Validation section).
The datasets
The 01_data folder in the OSF repository includes three versions of the TISP dataset51. It contains (1) the raw dataset before any cleaning and transformations (N = 167,101, filename ds_full), (2) the cleaned dataset without weights (N = 71,922, filename ds_main), and (3) an analysis-ready dataset that includes the post-stratification weights (N = 69,534, filename ds_final). See the Methods section and Fig. 4 for the data pre-processing procedures used to prepare these datasets.
We share each of the datasets in .rds, .sav, and .csv formats. We recommend using the .rds files, in which response values are labelled. The .csv files are semicolon-delimited and use UTF-8 encoding with a Byte Order Mark (BOM), so they can be imported, for example, into Microsoft Excel with correct encoding of non-ASCII characters (missing values are coded as “NA”). Open-ended answers (see Methods section) are provided in the languages in which they were recorded, so that users of the TISP dataset can analyse raw answers and employ translation software or services of their choice.
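For instance, the .csv files can be read into R as sketched below, assuming the files are named after the datasets (e.g., ds_main.csv, ds_main.rds); base R handles both the semicolon delimiter and the BOM.

```r
# Semicolon-delimited CSV with UTF-8 BOM; "NA" marks missing values.
ds_main <- read.csv("ds_main.csv", sep = ";",
                    fileEncoding = "UTF-8-BOM", na.strings = "NA")

# The .rds version preserves value labels:
ds_main <- readRDS("ds_main.rds")
```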
Researchers who wish to conduct statistical analyses that estimate parameters that are representative for target populations in terms of gender, age and education and have correct variances and standard errors should use the analysis-ready dataset. It contains three kinds of post-stratification weights (see Methods section and Fig. 4).
1. WEIGHT_CNTRY: This variable contains the post-stratification weights at country level, to be used for weighted analyses with single country samples.
2. WEIGHT_GLOBL: This variable contains the post-stratification weights at global level, to be used for weighted analyses with the entire analysis-ready dataset.
3. WEIGHT_MLVLM: This variable contains the rescaled post-stratification weights for weighted multilevel analyses with R’s lme4 package61. Note that svydesign objects, which R users might prefer, cannot be included in multilevel modelling by means of R’s survey package v4.4-254.
Using the post-stratification weights at country and global level will give point estimates (e.g., mean values, regression coefficients, etc.) that are representative in terms of gender, age and education. To obtain correct variances and standard errors of point estimates, one should use either a svydesign object created with the svydesign() function of R’s survey package54 or the rescaled post-stratification weights. We pre-computed a svydesign object of the TISP dataset, which can be found in the repository (folder 03_models) or reproduced by users with the R code provided.
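The sketch below illustrates the three weighting routes; the outcome and predictor names (trust, age, education, country) are hypothetical stand-ins to be checked against the codebook, whereas the three weight variables are those described above.

```r
library(survey)
library(lme4)

# Design-based estimation with correct SEs, using the global weights.
des <- svydesign(ids = ~1, weights = ~WEIGHT_GLOBL, data = ds_final)
svymean(~trust, des, na.rm = TRUE)              # weighted mean + design-based SE
svyglm(trust ~ age + education, design = des)   # weighted regression

# Weighted multilevel model with the rescaled weights.
lmer(trust ~ age + (1 | country), data = ds_final, weights = WEIGHT_MLVLM)
```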
Survey materials
The materials available at the OSF repository also include all survey materials: the TISP core questionnaire in English, all 88 local questionnaires, the Qualtrics file in .qsf format and instructions for collaborators (data collection manual, data submission guide and the TISP guidebook).
IRB documents
We also share the documents certifying ethical approval from the Area Committee on the Use of Human Subjects at Harvard University as well as template materials prepared for local IRB applications.
Technical Validation
We employed several procedures to ensure the validity of the TISP dataset. The survey used questions and scales that were based on established conceptual models and were validated in multiple prior studies23,24,25,35,44,47. It included attention checks to reduce satisficing and straight-lining, i.e. common problems of survey studies70, and was designed with an international advisory board of nine experts on public opinion and communication about science, environmental psychology, the history and sociology of science and survey methods. To enhance the invariance of questionnaire performance across countries and languages, we drew on cross-checked translations by local collaborators who were native speakers and familiar with the research topic and study context. To ensure the integrity of the data collection process, the project leads pre-registered sample size rationales and data pre-processing steps before fielding surveys, obtained ethical approval from multiple IRBs, provided templates, guides, tutorials and 1-on-1 assistance to collaborators, and required all co-authors to sign an ethical agreement. The entire TISP consortium, including the advisory board, was also involved in internal peer review of project outputs. An independent data scientist as well as TISP collaborators highly proficient in statistical analyses also reviewed the statistical code used to prepare the dataset and verified its reliability.
We took three further measures to validate the quality of the TISP dataset, as detailed below. (1) We conducted a pre-test prior to the main survey to validate the measures used in the questionnaire. (2) We inspected whether the attention checks performed similarly across countries and confirmed that they filtered demographic groups of respondents known to be less attentive to surveys. (3) We assessed the internal consistency, factor structures, measurement invariance and convergent validity of all four scales that we adopted from prior research or, in the case of the 12-item scale measuring trust in scientists, developed for the purpose of the TISP study.
Pre-test
A pre-test with N = 401 participants was conducted in the United States in October 2022. Average completion time was 14 minutes. The questionnaire was slightly modified to improve the comprehensibility of questions and the survey flow, and two questions were added to the final questionnaire. Pre-test data are not included in the datasets presented in this article, but are available at https://osf.io/wj34h.
Attention check performance
The questionnaire contained two attention checks (see Methods section). 4% of respondents who reached the first attention check did not pass it. 24% of participants who reached the second attention check did not pass it. This indicates that both attention checks – particularly the second – clearly increased data quality: They filtered numerous respondents who were likely too inattentive to provide meaningful data and might thus have compromised the reliability of the TISP data.
The attention checks also harmonised data quality across countries and polling companies. This was necessary as respondents from Brazil, India or Türkiye often failed them, whereas participants from Romania, Uruguay or the United Kingdom had much higher baseline attentiveness levels (see Supplementary Table 6).
We also validated the performance of the attention checks by verifying that they filtered respondents who are typically prone to fail such checks, i.e. people who are younger, male and lower educated43. To do so, we fitted logistic multilevel regression models with random intercepts across countries which predicted failing with age, gender and education, i.e. the three demographic characteristics that were measured before the first attention check and were therefore available for all participants. Unstandardised and standardised regression estimates (within-country scaled predictors) show that failing the first attention check was marginally more likely if participants were younger (b = −0.004, β = −0.055, OR = 0.946, SE = 0.015, z = −3.606, p < 0.001) and clearly more likely if they had no tertiary education (b = −0.478, β = −0.224, OR = 0.798, SE = 0.015, z = −14.931, p < 0.001). Gender was also related to failing, with males being slightly more likely to fail the first attention check than females (b = 0.008, β = 0.134, OR = 1.143, SE = 0.008, z = 17.001, p < 0.001). Failing the second attention check was more likely among participants who are male (b = 0.010, β = 0.132, OR = 1.141, SE = 0.007, z = 20.304, p < 0.001), younger (b = −0.031, β = −0.432, OR = 0.649, SE = 0.008, z = −52.292, p < 0.001) and lower educated, with participants who completed tertiary education being more attentive than participants who completed only primary or secondary education (b = −0.295, β = −0.139, OR = 0.870, SE = 0.007, z = −18.255, p < 0.001). These results indicate that the attention checks worked well and allowed us to collect similarly informative data across different demographic groups.
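A model of the kind reported above can be sketched as follows with lme4; the variable names (failed_ac1, male, tertiary, country, and the data frame ds_ac of all respondents who reached the check) are assumptions, and the exact specification is in the shared R code.

```r
library(lme4)

# Logistic multilevel model: failing the first attention check (0/1)
# predicted by age, gender and education, random intercepts per country.
m_ac1 <- glmer(failed_ac1 ~ age + male + tertiary + (1 | country),
               data = ds_ac, family = binomial)
exp(fixef(m_ac1))  # fixed effects expressed as odds ratios
```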
Scale validations
We tested the psychometric properties and measurement performance of the 12-item scale of trust in scientists23, the 8-item scale of science-related populist attitudes24, the 3-item scale of outspokenness about science25, and the 4-item scale of SDO26, so as to provide users of the TISP dataset with information about their validity. These tests included (a) internal consistency estimates and comparisons with consistency estimates from previous research, (b) assessments of the dimensional structures via parallel analysis, exploratory factor analysis (EFA) and multi-group exploratory structural equation modelling (ESEM)71, (c) measurement invariance tests via confirmatory factor analysis (CFA) and (d) convergent validity analyses.
Perceived trustworthiness of scientists
The 12-item scale measuring perceived trustworthiness of scientists may be aggregated to a single score by computing the arithmetic mean of all response values for each respondent, with higher values indicating higher perceived trustworthiness (weighted M = 3.62, SD = 0.70, range: 1 – 5; see R code for M and SD across countries). Overall, the scale shows excellent internal consistency, captures the four trustworthiness dimensions rather distinctively, exhibits acceptable measurement performance in the global sample but limited invariance across countries and has high convergent validity.
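A minimal sketch of this aggregation, assuming the twelve items are named TRUST_01 to TRUST_12 (hypothetical names; see the codebook):

```r
items <- sprintf("TRUST_%02d", 1:12)                        # assumed item names
ds_final$trust <- rowMeans(ds_final[, items], na.rm = TRUE) # per-respondent mean

# Weighted global mean, analogous to the reported weighted M:
weighted.mean(ds_final$trust, ds_final$WEIGHT_GLOBL, na.rm = TRUE)
```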
Internal consistency
Scale consistency in the global sample was excellent, with Cronbach’s α = 0.93 and ω = 0.95. Such high estimates seem typical for this measure: Our pre-test survey showed values of α = 0.95 and ω = 0.96, and previous studies using similar scales like the METI45 also found very high estimates of α = 0.9472 and α = 0.9573. This suggests that some scale items may be somewhat redundant74 in some countries like the United States (estimates across countries can be replicated with the R code available at our OSF repository). However, shortening the scale, which is a preferred solution for item redundancy74, was not an option for us, as we did not want to risk a loss of scale reliability in countries with lower estimates (e.g., Czech Republic, where α = 0.87 and ω = 0.91). Moreover, we sought to maintain sufficient subscale consistency – which would likely have been reduced had we removed items from the scale – so as to accommodate dataset users who wish to analyse single trustworthiness dimensions.
Dimensional structure
Mardia’s test showed that multivariate normality could not be assumed (Mardia skewness = 16,773, Mardia kurtosis = 256, p < 0.001). Therefore, the parallel analysis and the EFA used principal axis factoring (PA) instead of maximum likelihood factoring (ML), as PA factoring outperforms ML factoring when the normality assumption is violated75. Polychoric parallel analysis did not find the four dimensions competence, integrity, benevolence and openness, but suggested five factors. However, EFA results showed that the items formed plausible factors that largely correspond with those four dimensions – even if there were a few cross-loadings due to which the benevolence and openness dimensions were less distinct (see Supplementary Table 7). A multilevel EFA model implemented via multi-group ESEM71 had good fit (χ² = 7,421, df = 3,433, p < 0.001; CFI = 0.983, TLI = 0.978, RMSEA = 0.043, SRMR = 0.039).
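These analyses can be approximated with the psych package as sketched below, assuming trust_items holds the twelve item columns; see the shared R code for the exact settings.

```r
library(psych)

# Polychoric parallel analysis with principal axis factoring.
fa.parallel(trust_items, fm = "pa", cor = "poly")

# Oblique four-factor EFA on polychoric correlations.
efa <- fa(trust_items, nfactors = 4, fm = "pa", cor = "poly", rotate = "oblimin")
print(efa$loadings, cutoff = 0.3)
```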
Measurement invariance
CFA that tested a model with four latent factors, each predicting its three corresponding items, indicated moderate model fit (χ² = 5,840, df = 48, p < 0.001; CFI = 0.971, TLI = 0.960, RMSEA = 0.053, SRMR = 0.025). Multi-group CFAs yielded slightly worse results (χ² = 12,188, df = 3,264, p < 0.001; CFI = 0.962, TLI = 0.948, RMSEA = 0.066, SRMR = 0.031 for the configural model). They suggested that we can assume configural invariance for the trustworthiness scale across countries, but not metric or scalar invariance (p < 0.001), which is typical for multi-country models.
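Such invariance tests can be sketched with lavaan as follows, using assumed item names (comp1 to comp3 for the competence items, and so on); a significantly worse fit of the constrained model speaks against metric invariance.

```r
library(lavaan)

model <- '
  competence  =~ comp1 + comp2 + comp3
  integrity   =~ int1  + int2  + int3
  benevolence =~ ben1  + ben2  + ben3
  openness    =~ open1 + open2 + open3
'
configural <- cfa(model, data = ds_final, group = "country")
metric     <- cfa(model, data = ds_final, group = "country",
                  group.equal = "loadings")
anova(configural, metric)  # chi-square difference test
```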
Convergent validity
We tested the convergent validity of the scale by assessing zero-order correlations of the arithmetic mean of all twelve items with other constructs that were included in the TISP survey, are conceptually related to trust in scientists and were found to be associated with it in prior research: Trustworthiness was negatively related to perceptions that scientists are biased by personal and third-party interests, in line with existing findings76 (see Table 6). Plausibly, we also found substantial positive correlations of the trustworthiness score with confidence that scientists act in the best interests of the public77, willingness to be vulnerable to scientists23 and the belief that scientific results should be integrated into policy-making78. This demonstrates high convergent validity of the trustworthiness measure.
Science-related populist attitudes
There are different ways to aggregate responses to the eight items of the SciPop Scale24 into a single score that indicates affinity or opposition to science-related populism, such as taking the average of all response values (“Bollen approach”) or classifying participants as populist vs. non-populist based on their responses (“Sartori approach”)79. The authors of the SciPop Scale recommend the “Goertz approach”80. This approach suggests that the smallest of the four dimension scores determines someone’s net support for science-related populism, regardless of the magnitude of the other three dimension scores. It accounts for the conceptual premise that all components of science-related populism have to be concurrently present within a person to diagnose science-related populist attitudes, whereas the absence of one or more components disqualifies someone from being classified as a proponent of science-related populism (see Mede et al.79 and Wuttke et al.80 for more details). The Goertz approach has thus become a preferred procedure in research on both science-related and political populism11,81,82,83. We therefore applied this approach when assessing the psychometric properties and measurement performance of the SciPop Scale in the TISP dataset: First, we calculated unweighted arithmetic means of the response values for each of the four 2-item components of the scale (see Methods section). Second, we took the lowest of these four means as an indicator of someone’s overall support for science-related populism (weighted M = 2.32, SD = 0.91), with higher values indicating stronger support (range: 1 – 5). In sum, our validity tests indicate high internal consistency of the SciPop Scale, confirm the four-dimensional factor structure, demonstrate good performance in the global sample despite somewhat limited measurement invariance and suggest sufficient convergent validity.
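A minimal sketch of the Goertz scoring, assuming the items are named SCIPOP_01 to SCIPOP_08 and ordered by dimension (hypothetical names; see the codebook):

```r
dims <- list(people = c("SCIPOP_01", "SCIPOP_02"),  # conceptions of ordinary people
             elite  = c("SCIPOP_03", "SCIPOP_04"),  # conceptions of academic elite
             decide = c("SCIPOP_05", "SCIPOP_06"),  # decision-making sovereignty
             truth  = c("SCIPOP_07", "SCIPOP_08"))  # truth-speaking sovereignty
dim_means <- sapply(dims, function(it) rowMeans(ds_final[, it], na.rm = TRUE))
ds_final$scipop <- apply(dim_means, 1, min)         # lowest of the four dimension scores
```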
Internal consistency
The internal consistency of the SciPop Scale was fairly high (α = 0.79 and ω = 0.87). Reliability estimates were within the range reported in previous studies, which find values from α = 0.75 in Taiwan up to α = 0.90 in Austria, for example84.
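Both reliability coefficients are available in the psych package; as above, scipop_items is a placeholder for the eight item columns, not the dataset's actual variable names.

```r
# Cronbach's alpha and McDonald's omega for the SciPop items, sketched
# with 'psych'; `scipop_items` is a hypothetical data frame.
library(psych)

alpha(scipop_items)                # Cronbach's alpha
omega(scipop_items, nfactors = 4)  # omega based on a four-factor model
```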
Factor structure
Polychoric parallel analysis confirmed the four-dimensional conceptualisation of the SciPop Scale. Oblique polychoric EFA showed that the eight items formed four plausible factors that correspond with the four conceptual dimensions of science-related populist attitudes (see Supplementary Table 8). Mardia’s test showed that multivariate normality of the SciPop Scale could not be assumed (Mardia skewness = 3,992, Mardia kurtosis = 122, p < 0.001), so the parallel analysis and the EFA used PA factoring instead of ML factoring. An ESEM-based multilevel EFA model had excellent fit (χ² = 1,845, df = 1,208, p < 0.001; CFI = 0.992, TLI = 0.987, RMSEA = 0.029, SRMR = 0.026).
Measurement invariance
A CFA model with four first-order factors, each predicting its two corresponding items, and one second-order factor had satisfactory fit (χ² = 1,449, df = 16, p < 0.001; CFI = 0.976, TLI = 0.958, RMSEA = 0.046, SRMR = 0.033). Multi-group CFAs showed similar results (χ² = 3,510, df = 1,088, p < 0.001; CFI = 0.968, TLI = 0.944, RMSEA = 0.060, SRMR = 0.037 for the configural model). They suggested that metric or scalar invariance cannot be assumed across countries (p < 0.001).
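In lavaan syntax, this hierarchical model can be sketched as follows (illustrative names again); with lavaan's default marker-variable identification this specification has df = 16, matching the fit statistics above.

```r
# Second-order CFA for the SciPop Scale: four first-order factors with
# two items each, explained by one second-order factor. Names illustrative.
library(lavaan)

model <- '
  f1 =~ sp1 + sp2
  f2 =~ sp3 + sp4
  f3 =~ sp5 + sp6
  f4 =~ sp7 + sp8
  scipop =~ f1 + f2 + f3 + f4
'
fit <- cfa(model, data = tisp)
fitMeasures(fit, c("chisq", "df", "cfi", "tli", "rmsea", "srmr"))
```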
Convergent validity
The SciPop Scale exhibits sufficient convergent validity: We found significant negative correlations of science-related populist attitudes with the extent to which participants perceive scientists to have integrity (see Table 6), replicating recent findings85. Support for science-related populism was also negatively associated with trust in scientific methods24. We found significant positive correlations with right-leaning political orientation and SDO, which corresponds with prior research79,86.
Outspokenness about science
The 3-item scale measuring outspokenness about science may be aggregated to a single score by computing the arithmetic mean of the response values for each respondent, with higher values indicating higher outspokenness (weighted M = 3.87, SD = 0.98, range: 1 – 5). The psychometric tests indicate strong internal consistency, unidimensionality, mediocre measurement invariance and good convergent validity.
Internal consistency
We find that the outspokenness scale has very high internal consistency in our sample, with α = 0.89 and ω = 0.89. This is within the range of estimates in previous studies, which report values between α = 0.79 (ref. 87) and α = 0.95 (ref. 25) for slightly different versions of the scale.
Factor structure
We confirmed the unidimensionality of the scale using polychoric parallel analysis, which showed that all three items load on one common factor. Multivariate normality could not be assumed (Mardia skewness = 7,522, Mardia kurtosis = 171, p < 0.001), so the parallel analyses relied on PA factoring. An ESEM-based multilevel EFA model had very good fit (χ² = 232, df = 135, p < 0.001; CFI = 0.997, TLI = 0.995, RMSEA = 0.034, SRMR = 0.019).
Measurement invariance
CFA yielded mixed results: Some fit indices suggested that a one-factor model had good fit according to common rules of thumb88, but others did not (χ² = 347, df = 1, p < 0.001; CFI = 0.985, TLI = 0.955, RMSEA = 0.090, SRMR = 0.112). This may be partly because we had to fix the variance of the latent factor to 1; otherwise the model would have been saturated (df = 0). Multi-group CFAs showed even poorer results (χ² = 597, df = 68, p < 0.001; CFI = 0.981, TLI = 0.944, RMSEA = 0.111, SRMR = 0.112 for the configural model). They indicated that metric or scalar invariance cannot be assumed (p < 0.001).
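One way to obtain this identification in lavaan is to fix the latent variance while keeping the default marker loading, which yields df = 1 for three indicators. A sketch with illustrative item names (the authors' exact specification is not shown here):

```r
# One-factor CFA for three outspokenness items. Fixing the latent
# variance to 1 while the first loading stays fixed at 1 (lavaan's
# default marker) leaves 5 free parameters for 6 observed moments,
# hence df = 1 rather than a saturated df = 0 model.
library(lavaan)

model <- '
  outspoken =~ out1 + out2 + out3
  outspoken ~~ 1*outspoken
'
fit <- cfa(model, data = tisp)
fitMeasures(fit, c("chisq", "df", "cfi", "tli", "rmsea", "srmr"))
```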
Convergent validity
Correlations of the outspokenness scale with other constructs measured in the TISP survey are consistent with previous studies, which confirms its convergent validity: Outspokenness was positively associated with communicating with others about science25, exposure to science information in messaging apps81 and offline conversations about science with friends or family89 (see Table 6).
Social dominance orientation
The 4-item scale measuring SDO may be aggregated by computing the arithmetic mean of the response values for each respondent, with higher values indicating stronger SDO (weighted M = 3.62, SD = 1.76, range: 1 – 10). The psychometric tests indicate mediocre internal consistency, ambiguous results regarding the dimensionality and low measurement invariance, but satisfactory convergent validity.
Internal consistency
The SDO scale exhibits mediocre internal consistency in the TISP dataset (α = 0.57 and ω = 0.59). However, relatively low estimates like these are common for this scale: Previous comparative research found estimates as small as α = 0.34 in Türkiye, α = 0.44 in Lebanon, α = 0.48 in Taiwan and Indonesia, α = 0.52 in Serbia and South Africa and α = 0.53 in the Netherlands26. Further studies suggest slightly better reliability in countries like Austria, the Czech Republic, Germany, France, Hungary, Italy and Poland, with estimates ranging from α = 0.69 to α = 0.74 and from ω = 0.62 to 0.76 (ref. 90). This is largely in line with what we find for these countries.
Factor structure
Dimensionality tests of the SDO scale gave mixed results: The Kaiser criterion suggested unidimensionality, as we find one factor with an eigenvalue greater than one91, replicating previous findings90. However, parallel analysis and EFA based on Pearson correlations indicated two factors, with the two reverse-worded items loading on the first factor and the two non-reversed items loading on the second, a pattern that has been described as a common methodological artefact92. The parallel analysis and the EFA used PA factoring since multivariate normality could not be assumed (Mardia skewness = 13,278, Mardia kurtosis = 49, p < 0.001). An ESEM-based multilevel EFA model had poor fit, likely due to this ambiguous factor structure, which corresponds with the mediocre reliability of the scale (χ² = 7,172, df = 333, p < 0.001; CFI = 0.644, TLI = 0.570, RMSEA = 0.181, SRMR = 0.097).
Measurement invariance
A one-factor CFA model did not have good fit (χ² = 4,075, df = 3, p < 0.001; CFI = 0.728, TLI = 0.455, RMSEA = 0.179, SRMR = 0.095). We needed to constrain the variance of the item “We should not push for group equality” to 1 in order to avoid Heywood cases93. Multi-group CFA models also showed poor fit (χ² = 6,310, df = 201, p < 0.001; CFI = 0.682, TLI = 0.363, RMSEA = 0.220, SRMR = 0.083 for the configural model). Metric or scalar measurement invariance cannot be assumed across countries (p < 0.001), which conforms to findings of previous research90. The poor fit of the one-factor models is likely a result of the unstable factor structure of the SDO scale in the TISP dataset. A two-factor CFA model performed clearly better (χ² = 889, df = 2, p < 0.001; CFI = 0.941, TLI = 0.822, RMSEA = 0.103, SRMR = 0.034), but fitting two-factor multi-group CFAs failed due to Heywood cases and non-identification.
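A sketch of such a variance constraint in lavaan, with sdo1 to sdo4 as stand-ins for the four SDO items (sdo3 standing in for “We should not push for group equality”):

```r
# One-factor CFA for the four SDO items with the variance of one item
# constrained to 1 to prevent a negative (Heywood) variance estimate;
# item names are placeholders.
library(lavaan)

model_1f <- '
  sdo =~ sdo1 + sdo2 + sdo3 + sdo4
  sdo3 ~~ 1*sdo3   # constrained variance for the problematic item
'
fit_1f <- cfa(model_1f, data = tisp)
fitMeasures(fit_1f, c("chisq", "df", "cfi", "tli", "rmsea", "srmr"))
```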
Convergent validity
Consistent with extant findings, we find positive correlations of SDO with right-leaning political orientation90, conservatism94, support for research on developing defence and military technology95, and opposition to laissez-faire capitalism, here operationalised as endorsement of taxes on carbon-intensive foods96 (see Table 6). This is evidence that the SDO scale has sufficient convergent validity.
Usage Notes
Online repository
The TISP dataset, additional data, R code, pre-computed statistical models, additional materials and high-resolution versions of the figures presented in this article are available at the OSF: https://osf.io/5c3qd (see Data Records section)51. The datasets are ready to use with popular statistical software like R (recommended), IBM SPSS Statistics and Microsoft Excel.
The OSF repository contains a wiki with information on the contents of the folders and further instructions on how to use the files. The R code accompanying the datasets (folder 02_code) includes detailed annotations so that users can easily retrace and replicate the data-preprocessing procedures and validation analyses.
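The repository can also be accessed programmatically. A hedged sketch using the osfr package (not part of the official TISP materials):

```r
# Download the TISP repository contents from OSF via the 'osfr' package;
# the node ID "5c3qd" comes from the repository URL above.
library(osfr)

tisp_node <- osf_retrieve_node("5c3qd")
files <- osf_ls_files(tisp_node)   # list top-level files and folders

dir.create("tisp_osf", showWarnings = FALSE)
osf_download(files, path = "tisp_osf", recurse = TRUE, conflicts = "skip")
```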
Online dashboard
We developed a web-based data visualisation dashboard using R Shiny97 that lets users explore key variables of the TISP project across countries and subsamples. The dashboard, which is still under development, can be accessed at https://tisp.shinyapps.io/TISP/.
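For orientation, a dashboard of this kind boils down to a Shiny app that filters the data by country and renders a plot. The toy example below is purely illustrative and is not the dashboard's source code; the data frame and column names are assumptions.

```r
# Toy Shiny app illustrating country-wise exploration of a survey
# variable; `tisp` with columns `country` and `trust_score` is assumed.
library(shiny)

ui <- fluidPage(
  selectInput("country", "Country", choices = sort(unique(tisp$country))),
  plotOutput("hist")
)

server <- function(input, output) {
  output$hist <- renderPlot({
    hist(tisp$trust_score[tisp$country == input$country],
         main = input$country, xlab = "Trust in scientists (1-5)")
  })
}

shinyApp(ui, server)
```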
Code availability
All data, the R code and the pre-computed models underlying the analyses described in this article, as well as Figs. 1–4 in high resolution, are available at the Open Science Framework: https://osf.io/5c3qd.
References
Owen, R., Macnaghten, P. & Stilgoe, J. Responsible research and innovation: From science in society to science for society, with society. Sci Public Policy 39, 751–760, https://doi.org/10.1093/scipol/scs093 (2012).
Oreskes, N. (ed.) Why trust science? (Princeton University Press, 2019).
Schäfer, M. S. Mediated trust in science: concept, measurement and perspectives for the ‘science of science communication’. J Sci Commun 15; https://doi.org/10.22323/2.15050302 (2016).
Gauchat, G. W. The legitimacy of science. Annu. Rev. Sociol. 49, 263–279, https://doi.org/10.1146/annurev-soc-030320-035037 (2023).
Kraft, P. W., Lodge, M. & Taber, C. S. Why people “don’t trust the evidence”. Motivated reasoning and scientific beliefs. Ann Am Acad Pol Soc Sci 658, 121–133, https://doi.org/10.1177/0002716214554758 (2015).
Rekker, R. The nature and origins of political polarization over science. Public Underst Sci 30, 352–368, https://doi.org/10.1177/0963662521989193 (2021).
Chinn, S., Hasell, A., Roden, J. & Zichettella, B. Threatening experts: Correlates of viewing scientists as a social threat. Public Underst Sci; https://doi.org/10.1177/09636625231183115 (2023).
Batelaan, K. ‘It’s not the science we distrust; it’s the scientists’: Reframing the anti-vaccination movement within Black communities. Global Public Health 17, 1099–1112, https://doi.org/10.1080/17441692.2021.1912809 (2022).
Mede, N. G. & Schäfer, M. S. Science-related populism: Conceptualizing populist demands toward science. Public Underst Sci 29, 473–491, https://doi.org/10.1177/0963662520924259 (2020).
Eberl, J.-M., Huber, R. A. & Greussing, E. From populism to the “plandemic”: Why populists believe in COVID-19 conspiracies. J Elect Public Opin Parties 31, 272–284, https://doi.org/10.1080/17457289.2021.1924730 (2021).
Mede, N. G., Schäfer, M. S. & Metag, J. Cognitio populi – Vox populi: Implications of science-related populism for communication behavior. Communications; https://doi.org/10.1515/commun-2022-0059 (2023).
West, J. D. & Bergstrom, C. T. Misinformation in and about science. Proc Natl Acad Sci USA 118; https://doi.org/10.1073/pnas.1912444117 (2021).
Sarathchandra, D., Haltinner, K. & Grindal, M. Climate skeptics’ identity construction and (dis)trust in science in the United States. Environmental Sociology 8, 25–40, https://doi.org/10.1080/23251042.2021.1970436 (2022).
Fage-Butler, A., Ledderer, L. & Nielsen, K. H. Public trust and mistrust of climate science: A meta-narrative review. Public Underst Sci 31, 832–846, https://doi.org/10.1177/09636625221110028 (2022).
Cologna, V. et al. Trust in climate science and climate scientists: A narrative review. PLOS Clim 3, e0000400, https://doi.org/10.1371/journal.pclm.0000400 (2024).
Horton, R. Offline: Science and the breakdown of trust. Lancet, 945; https://doi.org/10.1016/S0140-6736(20)32064-X (2020).
Dommett, K. & Pearce, W. What do we know about public attitudes towards experts? Reviewing survey data in the United Kingdom and European Union. Public Underst Sci; https://doi.org/10.1177/0963662519852038 (2019).
Algan, Y., Cohen, D., Davoine, E., Foucault, M. & Stantcheva, S. Trust in scientists in times of pandemic: Panel evidence from 12 countries. Proc Natl Acad Sci USA 118; https://doi.org/10.1073/pnas.2108576118 (2021).
Bromme, R., Mede, N. G., Thomm, E., Kremer, B. & Ziegler, R. An anchor in troubled times: Trust in science before and within the COVID-19 pandemic in Germany. PLoS One 17; https://doi.org/10.1371/journal.pone.0262823 (2022).
Jensen, E. A., Jensen, A., Pfleger, A., Kennedy, E. B. & Greenwood, E. Has the pandemic changed public attitudes about science? Available at https://blogs.lse.ac.uk/impactofsocialsciences/2021/03/12/has-the-pandemic-changed-public-attitudes-about-science/ (2021).
Reif, A. & Guenther, L. How representative surveys measure public (dis)trust in science: A systematisation and analysis of survey items and open-ended questions. Journal of Trust Research; https://doi.org/10.1080/21515581.2022.2075373 (2022).
Besley, J. C. & Tiffany, L. A. What are you assessing when you measure “trust” in scientists with a direct measure? Public Underst Sci; https://doi.org/10.1177/09636625231161302 (2023).
Besley, J. C., Lee, N. M. & Pressgrove, G. Reassessing the variables used to measure public perceptions of scientists. Sci Commun 43, 3–32, https://doi.org/10.1177/1075547020949547 (2021).
Mede, N. G., Schäfer, M. S. & Füchslin, T. The SciPop Scale for measuring science-related populist attitudes in surveys. Development, test, and validation. Int J Public Opin Res 33, 273–293, https://doi.org/10.1093/ijpor/edaa026 (2021).
McKeever, R., McKeever, B. W. & Li, J.-Y. Speaking up online: Exploring hostile media perception, health behavior, and other antecedents of communication. Journal Mass Commun Q 94, 812–832, https://doi.org/10.1177/1077699016670121 (2017).
Pratto, F. et al. Social dominance in context and in individuals. Contextual moderation of robust effects of social dominance orientation in 15 languages and 20 countries. Soc Psychol Personal Sci 4, 587–599, https://doi.org/10.1177/1948550612473663 (2013).
Mede, N. G. Legacy media as inhibitors and drivers of public reservations against science. Global survey evidence on the link between media use and anti-science attitudes. Humanit Soc Sci Commun; https://doi.org/10.1057/s41599-022-01058-y (2022).
McPhetres, J. & Zuckerman, M. Religiosity predicts negative attitudes towards science and lower levels of science literacy. PLoS One 13, e0207125, https://doi.org/10.1371/journal.pone.0207125 (2018).
Chan, E. Are the religious suspicious of science? Investigating religiosity, religious context, and orientations towards science. Public Underst Sci 27, 967–984, https://doi.org/10.1177/0963662518781231 (2018).
Watson, C. Parachute science falls to earth. Nature Index (2021).
Cologna, V. et al. Trust in scientists and their role in society across 68 countries. Nat Hum Behav, https://doi.org/10.1038/s41562-024-02090-5 (2025).
Wilkinson, M. D. et al. The FAIR Guiding Principles for scientific data management and stewardship. Scientific Data 3, 160018, https://doi.org/10.1038/sdata.2016.18 (2016).
Oliveira et al. Towards an inclusive agenda of Open Science for communication research: A Latin American approach. J Commun 71, 785–802, https://doi.org/10.1093/joc/jqab025 (2021).
Azevedo, F. et al. Social and moral psychology of COVID-19 across 69 countries. Scientific Data 10, 272, https://doi.org/10.1038/s41597-023-02080-8 (2023).
Wellcome Trust. Wellcome Global Monitor 2018. How does the world feel about science and health? Available at https://wellcome.org/reports/wellcome-global-monitor/2018 (2019).
Pownall, M. et al. Embedding open and reproducible science into teaching: A bank of lesson plans and resources. Scholarship of Teaching and Learning in Psychology; https://doi.org/10.1037/stl0000307 (2021).
Vazire, S. Implications of the Credibility Revolution for Productivity, Creativity, and Progress. Perspectives on Psychological Science 13, 411–417, https://doi.org/10.1177/1745691617751884 (2018).
Nosek, B. A., Ebersole, C. R., DeHaven, A. C. & Mellor, D. T. The preregistration revolution. Proc Natl Acad Sci USA 115, 2600–2606, https://doi.org/10.1073/pnas.1708274114 (2018).
Bowman, S. D. et al. OSF Prereg Template (2020).
Lakens, D. Sample size justification. Collabra: Psychology 8; https://doi.org/10.1525/collabra.33267 (2022).
Green, P. & MacLeod, C. J. SIMR: An R package for power analysis of generalized linear mixed models by simulation. Methods Ecol Evol 7, 493–498, https://doi.org/10.1111/2041-210X.12504 (2016).
World Bank. World Bank country and lending groups. Available at https://datahelpdesk.worldbank.org/knowledgebase/articles/906519-world-bank-country-and-lending-groups (2023).
Berinsky, A. J., Margolis, M. F. & Sances, M. W. Separating the shirkers from the workers? Making sure respondents pay attention on self-administered surveys. Am J Pol Sci 58, 739–753, https://doi.org/10.1111/ajps.12081 (2014).
Cologna, V., Knutti, R., Oreskes, N. & Siegrist, M. Majority of German citizens, US citizens and climate scientists support policy advocacy by climate researchers and expect greater political engagement. Environ. Res. Lett. 16, 24011, https://doi.org/10.1088/1748-9326/abd4ac (2021).
Hendriks, F., Kienhues, D. & Bromme, R. Measuring laypeople’s trust in experts in a digital age: The Muenster Epistemic Trustworthiness Inventory (METI). PLoS One 10, e0139309, https://doi.org/10.1371/journal.pone.0139309 (2015).
Achterberg, P., de Koster, W. & van der Waal, J. A science confidence gap: Education, trust in scientific methods, and trust in scientific institutions in the United States, 2014. Public Underst Sci 26, 704–720, https://doi.org/10.1177/0963662515617367 (2017).
Funk, C., Hefferon, M., Kennedy, B. & Johnson, C. Trust and mistrust in Americans’ views of scientific experts. Available at https://pewrsr.ch/3APd0hN (2019).
Hogg, T. L., Stanley, S. K., O’Brien, L. V., Wilson, M. S. & Watsford, C. R. The Hogg Eco-Anxiety Scale: Development and validation of a multidimensional scale. Glob Environ Change 71, 102391, https://doi.org/10.1016/j.gloenvcha.2021.102391 (2021).
Searle, K. & Gow, K. Do concerns about climate change lead to distress? International Journal of Climate Change Strategies and Management 2, 362–379, https://doi.org/10.1108/17568691011089891 (2010).
Hickman, C. et al. Climate anxiety in children and young people and their beliefs about government responses to climate change: a global survey. The Lancet. Planetary health 5, e863–e873, https://doi.org/10.1016/S2542-5196(21)00278-3 (2021).
Mede, N. G. et al. Perceptions of science, science communication, and climate change attitudes in 68 countries: The TISP dataset [Dataset]. OSF https://doi.org/10.17605/OSF.IO/5C3QD (2024).
Dawson, R. How significant is a boxplot outlier? Journal of Statistics Education 19; https://doi.org/10.1080/10691898.2011.11889610 (2011).
Paul, S. R. & Zhang, X. Testing for normality in linear regression models. Journal of Statistical Computation and Simulation 80, 1101–1113, https://doi.org/10.1080/00949650902964275 (2010).
Lumley, T. Package ‘survey’. Analysis of complex survey samples. R package version 4.4-2. Available at https://cran.r-project.org/web/packages/survey/index.html (2024).
Battaglia, M. P., Hoaglin, D. C. & Frankel, M. R. Practical Considerations in Raking Survey Data. Surv Pract 2, 1–10, https://doi.org/10.29115/SP-2009-0019 (2009).
UN. World Population Prospects 2022. Available at https://population.un.org/wpp/ (2022).
Barro, R. J. & Lee, J. W. A new data set of educational attainment in the world, 1950–2010. Journal of Development Economics 104, 184–198, https://doi.org/10.1016/j.jdeveco.2012.10.001 (2013).
Barro, R. J. & Lee, J. W. Educational Attainment for Total Population, 1950-2015. Dataset v3.0. Available at https://barrolee.github.io/BarroLeeDataSet/BLData/BL_v3_MF1564.xls (2021).
UNECE. Educational attainment by level of education, age, sex, measurement, country and year. Georgia. Available at https://w3.unece.org/PXWeb2015/sq/3290abae-0120-418f-a681-132d4da8f088 (2023).
UIS. SDG Global and Thematic Indicators. Available at https://uis.unesco.org/sites/default/files/documents/bdds/022024/SDG.zip (2024).
Bates, D. et al. lme4: Linear Mixed-Effects Models using ‘Eigen’ and S4. R package version 1.1-35.3. Available at https://cran.r-project.org/web/packages/lme4/ (2024).
Patil, I. et al. Package ‘datawizard’. Easy data wrangling and statistical transformations. R package version 0.10.0. Available at https://cran.r-project.org/web/packages/datawizard/index.html (2024).
Asparouhov, T. General multi-level modeling with sampling weights. Communications in Statistics - Theory and Methods 35, 439–460, https://doi.org/10.1080/03610920500476598 (2006).
Carle, A. C. Fitting multilevel models in complex survey data with design weights: Recommendations. BMC medical research methodology 9, 49, https://doi.org/10.1186/1471-2288-9-49 (2009).
Valliant, R., Dever, J. A. & Kreuter, F. Practical tools for designing and weighting survey samples (Springer, Cham, 2018).
Royal, K. Survey research methods: A guide for creating post-stratification weights to correct for sample bias. Educ Health Prof 2, 48, https://doi.org/10.4103/EHP.EHP_8_19 (2019).
Franco, A., Malhotra, N., Simonovits, G. & Zigerell, L. J. Developing standards for post-hoc weighting in population-based survey experiments. J Exp Polit Sci 4, 161–172, https://doi.org/10.1017/XPS.2017.2 (2017).
de Leeuw, E. D., Hox, J. & Dillman, D. (eds.). International handbook of survey methodology. 1st ed. (Routledge, New York, 2008).
Rosseel, Y. lavaan: An R Package for Structural Equation Modeling. J. Stat. Soft. 48; https://doi.org/10.18637/jss.v048.i02 (2012).
Zhang, C. & Conrad, F. Speeding in web surveys: The tendency to answer very fast and its association with straightlining. Survey Research Methods 8, 127–135, https://doi.org/10.18148/srm/2014.v8i2.5453 (2014).
Marsh, H. W., Liem, G. A. D., Martin, A. J., Morin, A. J. S. & Nagengast, B. Methodological measurement fruitfulness of exploratory structural equation modeling (ESEM): New approaches to key substantive issues in motivation and engagement. J Psychoeduc Assess 29, 322–346, https://doi.org/10.1177/0734282911406657 (2011).
Zimmermann, M. & Jucks, R. With a view to the side: YouTube’s sidebar and YouTuber’s linguistic style as hints for trust-related evaluations. International Journal of Human–Computer Interaction 35, 1279–1291, https://doi.org/10.1080/10447318.2018.1519165 (2019).
Altenmüller, M. S., Lange, L. L. & Gollwitzer, M. When research is me-search: How researchers’ motivation to pursue a topic affects laypeople’s trust in science. PLoS One 16, e0253911, https://doi.org/10.1371/journal.pone.0253911 (2021).
Tavakol, M. & Dennick, R. Making sense of Cronbach’s alpha. International journal of medical education 2, 53–55, https://doi.org/10.5116/ijme.4dfb.8dfd (2011).
Watkins, M. W. Exploratory factor analysis: A guide to best practice. Journal of Black Psychology 44, 219–246, https://doi.org/10.1177/0095798418771807 (2018).
Gauchat, G. The cultural authority of science. Public trust and acceptance of organized science. Public Underst Sci 20, 751–770, https://doi.org/10.1177/0963662510365246 (2011).
Benson-Greenwald, T. M., Trujillo, A., White, A. D. & Diekman, A. B. Science for others or the self? Presumed motives for science shape public trust in science. Pers Soc Psychol Bull 49, 344–360, https://doi.org/10.1177/01461672211064456 (2023).
Bundi, P. & Pattyn, V. Trust, but verify? Understanding citizen attitudes toward evidence‐informed policy making. Public Administration 101, 1227–1246, https://doi.org/10.1111/padm.12852 (2023).
Mede, N. G., Schäfer, M. S., Metag, J. & Klinger, K. Who supports science-related populism? A nationally representative survey on the prevalence and explanatory factors of populist attitudes toward science in Switzerland. PLoS One 17; https://doi.org/10.1371/journal.pone.0271204 (2022).
Wuttke, A., Schimpf, C. & Schoen, H. When the whole is greater than the sum of its parts. On the conceptualization and measurement of populist attitudes and other multidimensional constructs. Am Polit Sci Rev 114, 356–374, https://doi.org/10.1017/S0003055419000807 (2020).
Mede, N. G., Rauchfleisch, A., Metag, J. & Schäfer, M. S. The interplay of knowledge overestimation, social media use, and populist ideas: Cross-sectional and experimental evidence from Germany and Taiwan. Communic Res; https://doi.org/10.1177/00936502241230203 (2024).
Erisen, C. et al. Psychological correlates of populist attitudes. Polit Psychol 42, 149–171, https://doi.org/10.1111/pops.12768 (2021).
Stier, S., Kirkizh, N., Froio, C. & Schroeder, R. Populist attitudes and selective exposure to online news. A cross-country analysis combining web tracking and surveys. Int J Press Polit 25, 426–446, https://doi.org/10.1177/1940161220907018 (2020).
Mede, N. G. Variations of science-related populism in comparative perspective: A multilevel segmentation analysis of supporters and opponents of populist demands toward science. International Journal of Comparative Sociology https://doi.org/10.1177/00207152231200188 (2023).
Reif, A., Taddicken, M., Guenther, L., Schröder, J. T. & Weingart, P. The Public Trust in Science Scale (PuTS): A multilevel and multidimensional approach, https://osf.io/preprints/osf/bp8s6 (2024).
Remsö, A. & Renström, E. A. Ideological predictors of anti-science attitudes: exploring the impact of group-based dominance and populism in North America and Western Europe. Front. Soc. Psychol. 1; https://doi.org/10.3389/frsps.2023.1303157 (2023).
Nekmat, E. & Gonzenbach, W. J. Multiple opinion climates in online forums. Role of website source reference and within-forum opinion congruency. Journal Mass Commun Q 90, 736–756, https://doi.org/10.1177/1077699013503162 (2013).
Hu, L. & Bentler, P. M. Cutoff criteria for fit indexes in covariance structure analysis. Conventional criteria versus new alternatives. Struct Equ Modeling 6, 1–55, https://doi.org/10.1080/10705519909540118 (1999).
Liu, X. & Fahmy, S. Exploring the spiral of silence in the virtual world. Individuals’ willingness to express personal opinions in online versus offline settings. Journal of Media and Communication Studies 3, 45–57, https://doi.org/10.5897/JMCS.9000031 (2011).
Aichholzer, J. & Lechner, C. M. Refining the Short Social Dominance Orientation scale (SSDO): A validation in seven European countries. J. Soc. Polit. Psych. 9, 475–489, https://doi.org/10.5964/jspp.6919 (2021).
Kaiser, H. F. The varimax criterion for analytic rotation in factor analysis. Psychometrika 23, 187–200, https://doi.org/10.1007/bf02289233 (1958).
Dueber, D. M. et al. To reverse item orientation or not to reverse item orientation, that is the question. Assessment 29, 1422–1440, https://doi.org/10.1177/10731911211017635 (2022).
Chen, F., Bollen, K. A., Paxton, P., Curran, P. J. & Kirby, J. B. Improper solutions in structural equation models. Sociological Methods & Research 29, 468–508, https://doi.org/10.1177/0049124101029004003 (2001).
Wilson, M. S. & Sibley, C. G. Social dominance orientation and right‐wing authoritarianism: Additive and interactive effects on political conservatism. Polit Psychol 34, 277–284, https://doi.org/10.1111/j.1467-9221.2012.00929.x (2013).
Pratto, F., Stallworth, L. M. & Sidanius, J. The gender gap: differences in political attitudes and social dominance orientation. British Journal of Social Psychology 36(Pt 1), 49–68, https://doi.org/10.1111/j.2044-8309.1997.tb01118.x (1997).
Azevedo, F., Jost, J. T., Rothmund, T. & Sterling, J. Neoliberal ideology and the justification of inequality in capitalist societies: Why social and economic dimensions of ideology are intertwined. Journal of Social Issues 75, 49–88, https://doi.org/10.1111/josi.12310 (2019).
RStudio. Shiny: Easy web applications in R. Available at https://rstudio.com/products/shiny/ (2023).
Acknowledgements
We warmly thank Damiano Lombardi (University of Zurich) for managing the author list and author contributions.
Author information
Contributions
Conceptualisation: N.M., V.C., S.B., J.B., C.B., M.J., E.W.M., S.M., N.O., M.S., S.V.D.L. Data curation: N.M. Formal analysis: N.M. Methodology: N.M., V.C., S.B., J.B., C.B., M.J., E.W.M., S.M., N.O., M.S., S.V.D.L. Project admin.: V.C. Software: N.M., V.C. Supervision: N.M., V.C. Validation: N.M., V.C. Visualisation: N.M., V.C. Investigation: N.M., V.C., N.I.A.A., S.A., N.A.S., B.A., I.A., E.A., A.A., M.A., I.M.A., M.A., R.M.A., R.A., T.A., D.A., F.A., A.B., R.B., K.B., E.B., A.Y.B., O.B., K.B., A.B., O.B., P.C.A., F.C., A.C.V., T.C., R.K.C., S.C., G.C., S.D.P., R.D., S.D., C.D.C., L.D.S., K.D., S.D., K.M.D., C.D., D.D., M.D., U.K.H.E., T.W.E., M.F., A.F.B., Z.F., X.F., C.F., C.F., M.F., S.F., H.F., J.F., M.F., S.F., A.G., P.G.V., M.G.V., W.G., O.G., O.G., T.G., J.G., E.G., M.G., C.G.B., H.G., D.G., G.M.G., L.G., H.H., L.N.H., P.H., A.C.H.M., A.H., G.H., M.H., M.H., N.I., M.I., M.T.I., Y.J., T.J., C.A.J., S.J., D.J., M.Z.K., J.K., S.K., J.R.K., M.K., T.K.R., O.K., H.K., A.K., L.K., E.K., L.K., A.K., L.S.K., P.K., S.K., T.K., A.K., E.A.K., C.L., A.L., A.L., J.B.L., Z.L., N.L., A.L., G.L., A.L., A.L.O., C.L.V., N.M.L., C.H.L., K.L.T., M.D.M., S.M., H.M., J.M., T.L.M., J.M., P.M., F.M.R., M.M., I.M., Z.M., J.N., E.J.N., J.P.N., K.K.N.C., N.N.V.N., D.N., T.O., J.P.H., M.P., P.P., P.P.S., M.P.C., M.P., Y.P., A.R.P., M.P., K.P., M.P., J.P., D.P., A.P., K.P., E.P., K.P., D.M.Q., P.R., A.R., F.G.R., C.R.S., G.R., J.R., S.R., J.P.R., R.M.R., I.R., O.S., R.R.S., P.S., B.S., A.S., J.S.N., E.S., N.S., L.S., B.S., O.S., S.K.S., G.S., S.S., B.S., E.S., M.T., C.T.E., C.T.E., B.T., A.K.T., R.T., D.T.F., M.T., O.M.U., I.C.U., J.V.N., C.V., S.V., I.V., A.V.B., I.W., I.W., M.W., T.W., M.W., F.W., A.D.W., Z.X., J.X., E.Z.P., A.Z., R.A.Z. Resources: N.I.A.A., S.A., N.A.S., B.A., I.A., E.A., A.A., M.A., I.M.A., M.A., M.A., R.M.A., R.A., T.A., D.A., F.A., A.B., R.B., K.B., E.B., A.Y.B., O.B., K.B., A.B., O.B., P.C.A., F.C., A.C.V., T.C., R.K.C., S.C., G.C., S.D.P., R.D., S.D., C.D.C., L.D.S., K.D., S.D., K.M.D., C.D., D.D., M.D., U.K.H.E., T.W.E., M.F., A.F.B., Z.F., X.F., C.F., C.F., M.F., S.F., H.F., J.F., M.F., S.F., A.G., P.G.V., M.G.V., W.G., O.G., O.G., T.G., J.G., E.G., M.G., C.G.B., H.G., D.G., G.M.G., L.G., H.H., L.N.H., P.H., A.C.H.M., A.H., G.H., M.H., M.H., N.I., M.I., M.T.I., Y.J., T.J., C.A.J., S.J., D.J., M.Z.K., J.K., S.K., J.R.K., M.K., T.K.R., O.K., H.K., A.K., L.K., E.K., L.K., A.K., L.S.K., P.K., S.K., T.K., A.K., E.A.K., C.L., A.L., A.L., J.B.L., Z.L., N.L., A.L., G.L., A.L., A.L.O., C.L.V., N.M.L., C.H.L., K.L.T., M.D.M., S.M., H.M., J.M., T.L.M., J.M., P.M., F.M.R., M.M., I.M., Z.M., J.N., E.J.N., J.P.N., K.K.N.C., N.N.V.N., D.N., T.O., J.P.H., M.P., P.P., P.P.S., M.P.C., M.P., Y.P., A.R.P., M.P., K.P., M.P., J.P., D.P., A.P., K.P., E.P., K.P., D.M.Q., P.R., A.R., F.G.R., C.R.S., G.R., J.R., S.R., J.P.R., R.M.R., I.R., O.S., R.R.S., P.S., B.S., A.S., J.S.N., E.S., N.S., L.S., B.S., O.S., S.K.S., N.S., S.S., B.S., E.S., M.T., C.T.E., C.T.E., B.T., A.K.T., R.T., D.T.F., M.T., O.M.U., I.C.U., J.V.N., C.V., S.V., I.V., A.V.B., I.W., I.W., M.W., T.W., M.W., F.W., A.D.W., Z.X., J.X., E.Z.P., A.Z., R.A.Z. 
Funding acquisition: V.C., N.O., M.S., J.B., E.M., S.B., C.B., B.A., I.A., E.A., M.A., M.A., R.M.A., D.A., A.B., R.B., E.B., C.B., A.Y.B., R.B., O.B., M.B., A.B., K.B., A.B., O.B., P.C.A., F.C., A.C.V., T.C., S.C., G.C., R.D., S.D., C.D.C., L.D.S., K.D., S.D., K.M.D., C.D., D.D., M.D., U.K.H.E., M.E., B.E., T.W.E., M.F., A.F.B., X.F., C.F., C.F., M.F., S.F., H.F., J.F., M.F., S.F., A.G., M.G.V., W.G., O.G., O.G., T.G., J.G., E.G., M.G., C.G.B., H.G., D.G., G.M.G., L.G., H.H., L.N.H., P.H., A.C.H.M., A.H., G.H., M.H., M.H., N.I., M.I., C.A.J., S.J., D.J., M.Z.K., J.K., S.K., J.R.K., T.K.R., O.K., H.K., T.K., A.K., L.K., E.K., L.K., L.K., A.K., J.K., L.S.K., P.K., S.K., A.K., E.A.K., C.L., A.L., A.L., J.B.L., Z.L., N.L., A.L., G.L., A.L., A.L.O., C.L.V., N.M.L., C.H.L., K.L.T., M.D.M., S.M., R.M., H.M., J.M., T.L.M., J.M., P.M., F.M.R., M.M., E.J.N., J.P.N., T.O., T.O., J.P.H., M.P., P.P., P.P.S., M.P.C., M.P., Y.P., A.R.P., M.P., C.R.P., K.P., J.P., D.P., A.P., E.P., K.P., P.R., A.R., F.G.R., C.R.S., J.P.R., J.R., S.R., J.P.R., R.M.R., I.R., O.S., R.R.S., P.S., S.S., B.S., A.S., J.S.N., E.S., J.S., N.S., L.S., B.S., O.S., S.K.S., G.S., S.S., E.S., M.T., C.T.E., C.T.E., B.T., R.T., D.T.F., M.T., M.T., O.M.U., I.C.U., J.V.N., C.V., S.V., I.V., A.V.B., I.W., I.W., M.W., T.W., M.W., F.W., A.D.W., Z.X., J.X., E.Z.P., A.Z., R.A.Z. Writing-orig. draft: N.M. Writing – review & editing: N.M., V.C., S.B., J.B., C.B., M.J., E.W.M., S.M., N.O., M.S., S.V.D.L.
Ethics declarations
Competing interests
Author André Krouwel has ownership and stocks in Kieskompas BV, the market research company that collected data in Romania and Uruguay. No payment was received by the author. No other authors reported a competing interest.
Additional information
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Mede, N.G., Cologna, V., Berger, S. et al. Perceptions of science, science communication, and climate change attitudes in 68 countries – the TISP dataset. Sci Data 12, 114 (2025). https://doi.org/10.1038/s41597-024-04100-7