Abstract
Scientific journals often rely on informal methods to evaluate reviewers, such as editor ratings and author feedback. Reviewer self-assessment offers a promising, yet underexplored, approach to improving the peer-review process. This study examined the factors associated with reviewers’ self-assessments. We surveyed 642 reviewers and editors from three Information Systems (IS) conferences (January–February 2020), and 144 responses were analyzed using quantitative inferential statistics. Most respondents were male (72.2%) and based in Europe (59%). We found no significant association between self-assessment and conventional experience markers (reviewing and publishing experience). In contrast, significant associations were observed between higher self-assessment and the perceived importance of feedback from editors (χ2 = 19.689, p ≈ 0.002), feedback from authors (χ2 = 25.168, p < 0.001), and formal training (χ2 = 14.64, p ≈ 0.047). Although our sample comes from IS settings, the mechanisms involved are process-based rather than discipline-specific; the findings may therefore extend to the broader peer-review ecosystem. Overall, organizational interventions such as structured feedback from editors and authors and formal training are more closely related to reviewers’ self-assessments than accumulated publishing or reviewing experience.
Introduction
The peer review process is widely regarded as the gold standard for evaluating manuscripts before publication1. The current system relies on reviewers to assess research quality and inform editorial decisions. Although review quality is strongly linked to the expertise of reviewers2, there has been limited progress in identifying high-quality reviewers and defining clear minimum standards for the knowledge, skills, and characteristics required3.
A major weakness of peer review is the variability among reviewers, which often results in inconsistent evaluations4. This inconsistency is partly explained by the absence of structured systems for selecting, training, and evaluating reviewers based on consistent criteria. Instead, many journals rely on informal and unevenly applied practices, whereas performance is usually judged retrospectively based on submitted reports. Efforts to develop accurate methodologies for measuring reviewer skills have been limited5,6.
One of the most common approaches for assessing reviewers is the evaluation of their review reports. At least 24 tools have been developed for this purpose7. In contrast, author satisfaction is often tied to whether a manuscript is accepted rather than to the quality of the review itself, making it an unreliable indicator for improving the process8. Another widely discussed method involves testing reviewers with fictitious manuscripts containing deliberate errors designed to expose strengths and weaknesses in their evaluations5,9. A less-explored approach is self-assessment, in which reviewers provide structured feedback on their own reports. This method offers several potential benefits for editors and publishers, including more reliable quality indicators, improved reviewer–manuscript matching, and a more transparent basis for reviewer development. Despite these advantages, self-assessment has received significantly less attention than the other evaluation strategies.
Self-assessment has also demonstrated positive outcomes in other domains. A review of more than 20 education studies found that students who engaged in self-grading performed better on subsequent exams than those who did not10. Similarly, organizational management frameworks that incorporate self-assessment have been shown to improve company outcomes11. These findings suggest that, when adapted to peer review, self-assessment could help reduce biases, increase the reliability of evaluations, and foster fairer decision-making. The absence of such practices, in contrast, limits accountability, reduces learning opportunities for reviewers, and risks perpetuating inefficiency.
In this context, reviewer self-assessment has emerged as a promising yet underexplored approach to understanding and improving peer-review practices. It can provide valuable insights into reviewers’ confidence, alignment of their expertise with assigned manuscripts, and their perceived preparedness. Such information can enhance reviewer–manuscript matching, inform targeted training initiatives, and support editorial decision making.
This study investigates the factors associated with reviewers’ self-evaluation practices. In particular, it examines the role of prior experience in publishing and reviewing, as well as the use of resources that support the review process. By analyzing these factors, this study provides empirical evidence on whether traditional indicators of experience, such as publication and review counts, correspond to how reviewers assess their own performance or whether organizational support mechanisms, including editor and author feedback or formal training, exert a greater influence. The analysis was based on the perspectives of editors and reviewers from the World Conference on Information Systems and Technologies (WorldCIST), International Conference on Information Technology and Systems (ICIST), and Iberian Conference on Information Systems and Technologies (CISTI). Although peer review in Information Systems (IS) is not expected to differ significantly from other disciplines, the interdisciplinary nature of IS makes it essential to examine current practices in this field12.
While this introduction outlines the challenges of peer review and the potential of self-assessment, the following section provides background on existing applications of self-assessment. The methodology section describes the research design, followed by the survey results. The discussion highlights the implications of the findings, and the conclusion summarizes the key contributions, limitations, and directions for future research.
Background
Self-evaluation reflects an individual’s perception of their abilities, judgments, and outcomes, enabling them to anticipate and regulate future performance13. It also encompasses confidence in the completion of specific tasks14. Research shows that individuals with strong self-evaluation skills perform better in work-related activities, are more willing to take calculated risks in decision-making, and draw on positive internal resources to face challenges15. They also demonstrate greater confidence in their abilities and a stronger sense of control over situations16.
Peer perceptions of factors that strengthen self-assessment can inform strategies to improve review quality, although few influences have been shown to consistently enhance it. Evidence links gains in critical thinking and self-evaluation skills to the practice of peer review17, as review, like any complex skill, improves with practice18. Self-assessment further builds reviewers’ confidence in delivering rigorous evaluations. A systematic review confirmed that self- and peer-assessment positively influence learning, with accuracy shaped by self-confidence, emotional intelligence, prior experience, feedback quality, and appropriate assessment tools19. Clear criteria, constructive feedback, and adequate training are essential for ensuring reliable and objective assessments. More recent findings emphasize that self-assessment accuracy depends not only on individual ability but also on contextual factors such as the quality of feedback. For instance, studies have shown that structured input from large language models (LLMs) can significantly improve self-assessment precision, although the effects vary across individuals20. Similarly, exposure to high-quality peer work enables students to detect more problems, provide more specific feedback, generate more accurate self-scores, and achieve greater performance improvements21.
However, the literature remains inconclusive regarding which attitudes, experiences, or academic qualifications are consistently associated with high-quality reviews. For example, a biomedical survey found that most participants believed “good” authors make “good” reviewers, yet empirical evidence supporting this assumption is scarce. Overall, the connection between authorship experience, academic qualifications, and production of high-quality reviews remains weak and largely unsupported by data3.
Recognizing self-assessment as a key indicator of process improvement highlights the need to strengthen the reviewers’ ability to evaluate their own performance. Since engaging in peer review has been shown to enhance critical thinking and self-evaluation skills, it is essential to identify the factors that enable accurate and insightful self-assessment22, thereby fostering higher-quality and more consistent reviews. Reviewers who critically examine their judgments, recognize biases, and address areas for growth enhance both the integrity of the review process and a culture of continuous improvement. By uncovering indicators that support strong self-assessment, scholarly communities can create conditions for more rigorous evaluation, targeted development, and meaningful contributions to research quality. This study examines how reviewers’ backgrounds, attitudes, experiences, and beliefs influence self-assessment, with a focus on whether greater experience improves evaluation accuracy, and whether resources such as training or checklists further enhance reviewers’ ability to critically assess their performance.
Methodology
Aim of the research
This study is a survey designed to explore how editors’ and reviewers’ experiences, attitudes, and beliefs are associated with their self-assessments as reviewers. Associations were identified using an online questionnaire and analyzed using statistical methods. The study follows the Checklist for Reporting Results of Internet E-Surveys (CHERRIES) to ensure rigor in the reporting of this survey23.
Instrument
The survey questions were developed and administered in accordance with established guidelines24,25. The survey consisted of seven groups of closed questions, each rated on a 5-point scale, covering demographic details, publishing experience, reviewing experience, frequency of accessing peer-review resources, beliefs about peer review, and the perceived importance of available peer-review resources, as outlined in Table 1.
The categories of demographic characteristics (e.g., gender, age group, professional position, and research area) were defined by the authors in line with common practice in survey-based research, to capture information most relevant to the study objectives. An online questionnaire was used to gather data from the target population. Prior to the main survey, a pilot test was conducted with participants from the Artificial Intelligence and Computer Science Laboratory of the University of Porto, who had experience in scientific peer review. Five responses were received. The pilot participants provided feedback on the quality of the questions, time required to complete the survey, and overall ease of completion. On the basis of their feedback, the questionnaire was revised and converted into an electronic format using Google Forms.
The invitation e-mail explained the purpose of the study, identified the researchers responsible, and indicated the estimated completion time. It also included a link to the questionnaire, which presented the informed consent form to which participants had to agree before gaining access. The questionnaire was available for 3 weeks.
Sampling and data collection procedure
The target audience for this study comprised editors and reviewers with experience in the scientific peer-review process within the Information Systems or Computer Science fields. To reach this group, we contacted participants through three conferences organized by the Association for Information Systems and Technologies (AISTI), a scientific association dedicated to promoting and disseminating knowledge in these areas across academia, organizations, and society. We used purposive sampling, a method that deliberately selects participants who can reasonably represent the population with the desired characteristics26. Although purposive sampling may introduce bias and affect external validity, it is appropriate when random sampling is not feasible and, in such cases, can strengthen the internal validity of the findings. The inclusion criteria were being an editor and/or reviewer with peer-review experience in Information Systems or Computer Science, receiving an e-mail invitation via the WorldCIST, CISTI, or ICIST lists, and providing informed consent. The exclusion criteria were lack of consent, lack of relevant experience, or incomplete essential responses. Only respondents who met these criteria were included in the analyses.
The final questionnaire was distributed via e-mail to 642 reviewers and editors from WorldCIST, CISTI, and ICIST. The survey was conducted between January 25 and February 15, 2020. We received 144 responses from 642 e-mails, a response rate of 22.4%. This aligns with Lund (2023), who reports that response rates of this magnitude are acceptable for questionnaire-based studies in Information Systems research, supporting the reliability of the findings27. For sample size estimation, we assumed a finite population (N = 642) and applied a conservative proportion (p = 0.50), a 95% confidence level, and a target margin of error of ± 8 percentage points, which indicated a required sample of 120. With 144 valid responses, the achieved sample provided a worst-case margin of error of ± 8.0 percentage points at the 95% confidence level.
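The sample-size reasoning above follows the standard finite-population calculation (Cochran's formula with a finite-population correction). As an illustration only, not the authors' code, and with small differences from the reported figures possible due to rounding and formula variants, a minimal Python sketch:

```python
import math

def required_sample(N, p=0.5, z=1.96, e=0.08):
    """Cochran's sample-size formula with finite-population correction.

    N: population size; p: assumed proportion (0.5 is worst case);
    z: critical value (1.96 for 95% confidence); e: target margin of error.
    """
    n0 = z**2 * p * (1 - p) / e**2          # infinite-population sample size
    return math.ceil(n0 / (1 + (n0 - 1) / N))

def margin_of_error(n, N, p=0.5, z=1.96):
    """Worst-case margin of error for a sample n from a finite population N."""
    se = math.sqrt(p * (1 - p) / n)          # standard error at p = 0.5
    fpc = math.sqrt((N - n) / (N - 1))       # finite-population correction
    return z * se * fpc

print(required_sample(642))        # → 122 (the paper reports 120)
print(margin_of_error(144, 642))   # ≈ 0.072, i.e. about ±7.2 points
```

This variant yields a required sample near 122 and a corrected margin of error of roughly ±7.2 points for n = 144; omitting the finite-population correction gives approximately ±8.2 points, which is closer to the ±8.0 the paper reports.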
Ethics
Informed consent: All participants were informed about the study and agreed to the informed consent form. The invitation e-mail included a link to the questionnaire, which presented the consent form; participants could access the questionnaire only after answering an explicit yes/no consent question affirmatively, and only those who did so were included in this study. They were informed that participation was anonymous, voluntary, and confidential and that data would be used exclusively for research purposes. No personal or sensitive information beyond basic demographics was collected. All data were anonymized and processed in compliance with the General Data Protection Regulation (GDPR, Regulation (EU) 2016/679) and the Data Protection Act 2018. At the time of the study, formal ethics approval was not required by our institution’s guidelines for low-risk research involving professionals.
Quantitative analyses
The data were analyzed by the authors. Quality assurance was ensured through automated data collection systems, which minimized entry errors and ensured completeness and consistency of the dataset prior to analysis. Data are presented as both absolute counts and percentages. Chi-square and Fisher’s exact tests were used as exploratory tools to identify potential associations with the main variables of interest. Associations were analyzed using chi-square tests when assumptions were met and Fisher’s exact test when they were not. We reported p < 0.05 as statistically significant and p < 0.10 as weakly significant to highlight variables of potential interest for future analyses with a larger sample. Given the relatively small sample size and exploratory nature of the study, no corrections for multiple testing (e.g., Bonferroni) were applied, as these would have been overly conservative. All analyses were conducted using R version 3.6.2. The detailed calculations for each variable are provided in Supplementary Material 1.
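To make the testing procedure concrete, the following is an illustrative pure-Python version of the Pearson chi-square test of independence for a 2×2 table, using hypothetical counts rather than the study data (the actual analyses were run in R; for df = 1 the chi-square tail probability equals erfc(√(x/2))):

```python
import math

def chi2_2x2(table, yates=False):
    """Pearson chi-square test of independence for a 2x2 contingency table.

    Returns (statistic, p_value). The p-value uses the chi-square
    distribution with df = 1, whose survival function is erfc(sqrt(x/2)).
    Set yates=True for the continuity-corrected statistic.
    """
    (a, b), (c, d) = table
    n = a + b + c + d
    row = [a + b, c + d]
    col = [a + c, b + d]
    stat = 0.0
    for i, obs in enumerate([a, b, c, d]):
        exp = row[i // 2] * col[i % 2] / n       # expected count under independence
        diff = abs(obs - exp) - (0.5 if yates else 0.0)
        stat += diff * diff / exp
    p = math.erfc(math.sqrt(stat / 2))
    return stat, p

# Hypothetical counts: rows = self-assessment (high / low),
# cols = rated editor feedback very important (yes / no)
stat, p = chi2_2x2([[30, 10], [40, 64]])
```

When expected counts are too small for the chi-square approximation (commonly below 5), Fisher's exact test is the appropriate fallback, as described above; in R this corresponds to switching from `chisq.test` to `fisher.test`.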
Results
Table 2 presents the respondents’ demographic characteristics. All 144 participants held roles as both reviewers and authors in the scientific peer-review process, with 51 serving as editors (multiple roles were allowed). The majority of respondents were male (72.2%), predominantly from Europe (59%), and between the ages of 40 and 49 (36.8%). A significant proportion (72.2%) came from the fields of Computer Science & Information Technology. The full demographic details of the participants who completed the questionnaire are available in Supplementary Table 1.
The following subsections present the associations between respondents’ self-assessments as reviewers and the questionnaire variables. The internal consistency of the questionnaire, assessed with Cronbach’s alpha, yielded the following results: experience (with Question 4 excluded), α = 0.630, indicating borderline acceptable consistency; belief, α = 0.737, indicating adequate consistency; and importance of resources, α = 0.829, indicating good consistency.
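For readers unfamiliar with how such a reliability coefficient is computed, a minimal Python sketch of Cronbach's alpha on a hypothetical 3-item, 5-respondent dataset (illustrative only, not the study data):

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a list of item-score columns (one list per item).

    alpha = k/(k-1) * (1 - sum(item variances) / variance of total scores),
    using population variances throughout (sample variances are also common
    and give the same alpha, as the 1/n factors cancel).
    """
    k = len(items)
    n = len(items[0])

    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    totals = [sum(col[i] for col in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(col) for col in items) / var(totals))

# Hypothetical 1-5 Likert responses: 3 items, 5 respondents
items = [[4, 5, 3, 4, 2],
         [4, 4, 3, 5, 2],
         [5, 4, 2, 4, 3]]
alpha = cronbach_alpha(items)   # ≈ 0.864 for this toy data
```

Values around 0.6 are often treated as borderline, above 0.7 as acceptable, and above 0.8 as good, which is the interpretation applied to the scales above.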
Association between respondents’ peer review experience and their self-assessment as reviewers
Publishing experience: Most respondents (34%, 48/144) had published between 0 and 5 articles in scientific journals; 30% (42/144) had published between 16 and 30 conference articles; 52% (65/144) had authored 1 to 5 books; and 58% (76/144) had contributed to 1 to 5 book chapters. No significant associations were found between self-evaluation as a reviewer and the number of journal articles (χ2 = 6.13, p = 0.633), conference articles (χ2 = 6.684, p = 0.573), books (χ2 = 5.452, p = 0.24), or book chapters (χ2 = 4.729, p = 0.581). The calculation results are shown in Supplementary Table 2.
Reviewing experience: Among respondents with reviewing experience, 26.1% (36/144) had reviewed 6 to 15 journal articles, while 28% (40/144) reviewed 16 to 30 conference papers. A total of 38.8% (47/144) reviewed 1 to 5 book chapters. Notably, 62.2% (74/144) had never reviewed a book, and 41% (48/144) had no experience with grant reviews.
The only association that approached significance was between self-evaluation as a reviewer and the number of book reviews, and it was only weakly significant (χ2 = 7.951, p ≈ 0.073). No statistically significant associations were observed between self-evaluation and the number of reviews of journal articles (χ2 = 10.454, p ≈ 0.364), conference papers (χ2 = 6.044, p ≈ 0.648), book chapters (χ2 = 5.368, p ≈ 0.496), or grants (χ2 = 4.065, p ≈ 0.68). The calculation results are shown in Supplementary Table 3.
Association between frequency of accessing available peer review resources and self-assessment as a reviewer
The respondents’ experience with resources used in peer review revealed that 34% (48/144) sometimes used guidelines during their reviews, 32% (46/144) frequently used checklists, and 29% (41/144) regularly referred to publisher-provided guides. Additionally, 40% (56/144) rarely received articles outside their area of expertise, 63% (90/144) had never been invited to participate in review training, and 40% (57/144) had never received feedback from the editor on the quality of their review.
A significant association was found between receiving articles outside one’s area of expertise and self-evaluation as a reviewer (χ2 = 16.92, p ≈ 0.016). There was also a weak association between receiving guides from the publisher to aid in reviewing and self-evaluation as a reviewer (χ2 = 13.70, p ≈ 0.07). The calculation results are shown in Supplementary Table 4.
Association between beliefs about peer review and self-assessment as a reviewer
The results regarding beliefs about peer review show that 39.3% (55/144) of respondents somewhat agreed that higher-impact journals produce better review reports. Additionally, 45.4% (64/144) strongly agreed that the quality of the review depended on the reviewer’s competence, and 48.2% (67/144) believed that journal reviews were more rigorous than conference reviews. Furthermore, 33.9% (40/144) somewhat agreed that they were satisfied with the quality of the review reports they received as authors.
A significant association was found between self-assessment and the belief that higher-impact journals provide better review reports (χ2 = 16.665, p ≈ 0.022). The calculation results are shown in Supplementary Table 5.
Association between the perceived importance of available peer review resources and self-assessment as a reviewer
Respondents’ views on the importance of available peer-review resources revealed that the majority considered the following resources to be very important: guidelines (46.4%, 65/144), checklists (49.6%, 70/144), tools (47.5%, 67/144), editor feedback (48.9%, 69/144), author feedback (44%, 62/144), formal training (34.5%, 48/144), and specific review templates (47.9%, 67/144).
Significant associations were found between self-evaluation and the perceived importance of editor feedback (χ2 = 19.689, p ≈ 0.002), author feedback (χ2 = 25.168, p < 0.001), and formal training (χ2 = 14.64, p ≈ 0.047). A weak association was also identified between self-evaluation and checklist use (χ2 = 12.178, p ≈ 0.01). The calculation results are shown in Supplementary Table 6.
In summary, self-assessment as a reviewer is strongly associated with formal training and author and editor feedback. See Table 3.
Discussion
This study examines the factors that shape how reviewers assess their performance in the peer review process. By analyzing their experiences, attitudes, and beliefs, this study seeks to identify patterns and associations that can inform more effective practices and ultimately enhance the overall quality of peer review.
While the literature suggests that greater experience with publications helps reviewers perform better28, our findings show that experience in publishing or reviewing for scientific journals or conferences does not significantly contribute to a reviewer’s confidence, even among editors. This aligns with a previous study that reported no significant association between reviewers’ ability to assess manuscripts and their research performance; where such an association was present, it was weak and not statistically significant29. This discrepancy prompts a reassessment of the criteria used to gauge reviewer competence, emphasizing the need to prioritize more social and subjective aspects. Our study observed a weak association between reviewers’ experience in reviewing books and their self-evaluation tendencies. This suggests that experience with certain types of reviews may contribute to reviewers’ confidence and ability to self-evaluate.
This finding contrasts with results in educational contexts, where review experience increased student participants’ confidence. In a problem-based learning study, peer assessment significantly boosted students’ confidence30. Students have also reported that peer assessment enhanced self-confidence, improved learning practices, and helped them identify personal strengths and weaknesses17. It remains unclear whether this sense of insecurity is specific to reviewers in the IS discipline. A study in the nursing field found that most reviewers felt confident after completing 1 to 5 reviews31. The persistent insecurity among reviewers, even those with extensive experience, may be due to a lack of review guides, training, or other forms of support.
A significant association was found between self-assessment and the belief that higher-impact journals produce better reviews. This supports previous findings that reviews differ significantly between journals depending on their impact factors32. An earlier study showed that higher-impact journals tend to focus on reviews that cover broader aspects such as statistical analyses, ethical concerns, and conflicts of interest, while placing less emphasis on specific manuscript components. In contrast, lower-impact journals more frequently request detailed feedback on particular sections, such as abstracts, introductions, methods, and results33.
There was a significant association between the importance of using additional resources, such as receiving feedback from editors and authors, and formal training. A weaker association was observed with the use of checklists. The need for training, guidelines, and feedback has been recognized across various disciplines and contexts31,34,35,36. Although some studies have suggested that training programs do not significantly improve the quality of subsequent reviews, more research is needed to fully understand their impact.
In this survey, respondents broadly rated additional resources, such as formal training, guidelines, checklists, and feedback from authors and editors, as very important. However, most participants reported that they had never been invited to formal training or received feedback from an editor. This aligns with findings from the nursing field, where the majority of reviewers also lacked training and editor feedback31.
Another key finding was that most participants never or seldom received manuscripts outside their expertise. Furthermore, the association between the frequency of receiving manuscripts outside one’s area of expertise and self-assessment highlights the importance of matching reviewers with suitable manuscripts. A previous study showed that conference paper reviewers were most concerned that the paper fell within their expertise area37, and a manuscript outside the reviewer’s area is one of the main reasons for declining a review38. Strategies to mitigate this challenge may include improving reviewer databases and implementing robust matching algorithms based on expertise and areas of interest.
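The matching strategy suggested above can be as simple as ranking reviewers by keyword overlap between a manuscript's topics and each reviewer's declared expertise. A toy sketch with hypothetical names and topics (real editorial systems would use richer expertise models, such as publication embeddings):

```python
def jaccard(a, b):
    """Jaccard similarity between two keyword sets (0.0 to 1.0)."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def best_reviewers(manuscript_topics, reviewer_profiles, k=2):
    """Return the k reviewers whose topic profiles best overlap the manuscript."""
    ranked = sorted(reviewer_profiles.items(),
                    key=lambda kv: jaccard(manuscript_topics, kv[1]),
                    reverse=True)
    return [name for name, _ in ranked[:k]]

# Hypothetical reviewer pool
profiles = {
    "r1": {"machine learning", "peer review", "nlp"},
    "r2": {"databases", "information systems"},
    "r3": {"information systems", "peer review"},
}
best_reviewers({"peer review", "information systems"}, profiles)  # → ["r3", "r2"]
```

Even this crude overlap score operationalizes the point above: manuscripts are routed away from reviewers with no topical intersection, reducing the out-of-expertise assignments that respondents associated with lower self-assessment.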
From a practical perspective, our study highlights the importance of providing reviewers with adequate support and resources, such as training programs, feedback mechanisms, and structured tools, to improve the quality and efficiency of peer reviews. For publishers, adopting self-assessment as a complementary quality indicator requires further research to identify the factors that strengthen self-evaluation, enabling it to serve as an additional parameter alongside existing assessment methods.
The findings of this study have practical implications for both publishers and editors. Encouraging reviewers to engage in self-assessment can provide additional indicators of quality, support more effective review–manuscript matching and inform targeted training initiatives. These practices can help publishers identify areas where reviewers may need support, thereby improving the consistency and accountability of the review process. In the wider context of open research practices, integrating self-assessment and training aligns with efforts to increase transparency, foster reviewer development, and strengthen trust in peer reviews as a core element of scholarly communication. Positioning self-assessment within this broader movement highlights its potential not only as a tool for individual reflection but also as a mechanism for advancing fairness and integrity in research evaluations.
This study had several limitations that should be acknowledged. First, the survey sample was restricted to participants from three conferences in the field of Information Systems, which may limit the generalizability of the findings to other disciplines. Second, the data relied on self-reported perceptions from reviewers and editors, which may be subject to response bias and over- or underestimation of actual practice. Third, the survey was conducted within a defined timeframe, and attitudes or practices may have evolved since then, particularly in light of ongoing developments in peer review and open research practices. Finally, while the study identifies associations between self-assessment and certain reviewer characteristics, causal relationships have not been established. These limitations should be considered when interpreting the results, and they highlight opportunities for future research.
Conclusion
This study advances our understanding of peer review by examining self-assessment as a complementary mechanism for evaluating reviewer quality. These findings are particularly relevant in the context of increasing demands for transparency and accountability in scholarly publishing, where publishers and editors seek more reliable ways to identify and support competent reviewers. By analyzing the perspectives of reviewers and editors, this study provides empirical evidence on how experience, resources, and self-perception intersect, offering greater clarity about the potential role of self-assessment in reviewer development. The results highlight why self-assessment matters: it can promote accountability, inform training initiatives, and strengthen reviewer–manuscript matching, thereby enhancing the efficiency and fairness of the peer review process. Future research should extend these findings across disciplines, test the longitudinal effects of self-assessment, and explore how combining self-assessment with editor and author feedback can further enhance the quality and integrity of peer reviews.
Data availability
The data presented in this study are available on request from the corresponding author.
References
Herron, D. M. Is expert peer review obsolete? A model suggests that post-publication reader review may exceed the accuracy of traditional peer review. Surg. Endosc. 26, 2275–2280 (2012).
Gasparyan, A. Y. & Kitas, G. D. Best peer reviewers and the quality of peer review in biomedical journals. Croat. Med. J. 53, 386–389 (2012).
Glonti, K., Boutron, I., Moher, D. & Hren, D. Journal editors’ perspectives on the roles and tasks of peer reviewers in biomedical journals: a qualitative study. BMJ Open 9, 1–10 (2019).
Mavrogenis, A. F., Sun, J., Quaile, A. & Scarlat, M. M. How to evaluate reviewers – the international orthopedics reviewers score (INOR-RS). Int. Orthop. 43, 1773–1777. https://doi.org/10.1007/s00264-019-04374-2 (2019).
Baxt, W. G., Waeckerle, J. F., Berlin, J. A. & Callaham, M. L. Who reviews the reviewers? Feasibility of using a fictitious manuscript to evaluate peer reviewer performance. Ann. Emerg. Med. 32, 310–317 (1998).
Drvenica, I., Bravo, G., Vejmelka, L., Dekanski, A. & Nedić, O. Peer review of reviewers: The author’s perspective. Publications 7, 1–10 (2018).
Superchi, C. et al. Tools used to assess the quality of peer review reports: A methodological systematic review. BMC Med. Res. Methodol. 19, 48. https://doi.org/10.1186/s12874-019-0688-x (2019).
Weber, E. J., Katz, P. P., Waeckerle, J. F. & Callaham, M. L. Author perception of peer review. JAMA 287, 2790 (2002).
Schroter, S. et al. What errors do peer reviewers detect, and does training improve their ability to detect them?. J. R. Soc. Med. 101, 507–514 (2008).
Andrade, H. L. A critical review of research on student self-assessment. Front. Educ. 4, 1–13 (2019).
Zink, K. J. & Schmidt, A. Practice and implementation of self-assessment. Int. J. Qual. Sci. 3, 147–170 (1998).
Robey, D. Research commentary: diversity in information systems research: Threat, promise, and responsibility. Inf. Syst. Res. 7, 400–408. https://doi.org/10.1287/isre.7.4.400 (1996).
Wayment, H. A. & Taylor, S. E. Self-evaluation processes: Motives, information use, and self-esteem. J. Pers. 63, 729–757 (1995).
Stankov, L., Lee, J., Luo, W. & Hogan, D. J. Confidence: A better predictor of academic achievement than self-efficacy, self-concept and anxiety?. Learn. Individ. Differ. 22, 747–758 (2012).
Crocker, J. & Park, L. E. The costly pursuit of self-esteem. Psychol. Bull. 130, 392–414 (2004).
Judge, T. A., Bono, J. E., Ilies, R. & Gerhardt, M. W. Personality and leadership: A qualitative and quantitative review. J. Appl. Psychol. 87, 765–780 (2002).
Morris, J. Peer assessment: A missing link between teaching and learning? A review of the literature. Nurse Educ. Today 21, 507–515 (2001).
Triaridis, S. & Kyrgidis, A. Peer review and journal impact factor: The two pillars of contemporary medical publishing. Hippokratia 14, 5–12 (2010).
Sinaga, Y. D. K., Arliani, E., Ngala, J. C. & Agustina, N. L. I. T. Accuracy of self-assessment and peer assessment in learning: A systematic literature review. J. Paedagogy 11, 312–322 (2024).
Liebenow, L. W., Schmidt, F. T. C., Meyer, J. & Fleckenstein, J. Self-assessment accuracy in the age of artificial intelligence: Differential effects of LLM-generated feedback. Comput. Educ. 237, 105385 (2025).
Alemdag, E. & Narciss, S. Promoting formative self-assessment through peer assessment: Peer work quality matters for writing performance and internal feedback generation. Int. J. Educ. Technol. High. Educ. 22, 1–26 (2025).
To, J. & Panadero, E. Peer assessment effects on the self-assessment process of first-year undergraduates. Assess. Eval. High. Educ. 44, 920–932 (2019).
Eysenbach, G. Improving the quality of web surveys: The checklist for reporting results of Internet E-surveys (CHERRIES). J. Med. Internet Res. 6, e34 (2004).
Passmore, C., Dobbie, A. E., Parchman, M. & Tysinger, J. Guidelines for constructing a survey. Fam. Med. 34, 281–286 (2002).
Leung, W.-C. How to design a questionnaire. BMJ 322, 0106187 (2001).
Lavrakas, P. Nonprobability sampling. In Encyclopedia of Survey Research Methods (Sage Publications, Thousand Oaks, CA, 2008).
Lund, B. The questionnaire method in systems research: An overview of sample sizes, response rates and statistical approaches utilized in studies. VINE J. Inf. Knowl. Manag. Syst. 53, 1–10 (2023).
Ahmed, S. & Yessirkepov, M. Peer reviewers in Central Asia: Publons based analysis. J. Korean Med. Sci. 36, 1–8 (2021).
Patterson, M. S. & Harris, S. The relationship between reviewers’ quality-scores and number of citations for papers published in the journal Physics in Medicine and Biology from 2003–2005. Scientometrics 80, 345–351 (2009).
Papinczak, T., Young, L. & Groves, M. Peer assessment in problem-based learning: A qualitative study. Adv. Health Sci. Educ. 12, 169–186 (2007).
Freda, M. C., Kearney, M. H., Baggs, J. G., Broome, M. E. & Dougherty, M. Peer reviewer training and editor support: Results from an international survey of nursing peer reviewers. J. Prof. Nurs. 25, 101–108 (2009).
Lippi, G. How do I peer-review a scientific article?—a personal perspective. Ann. Transl. Med. 6, 1–7 (2018).
Davis, C. H. et al. Reviewing the review: A qualitative assessment of the peer review process in surgical journals. Res. Integr. Peer Rev. 3, 1–5 (2018).
García-Doval, I. Training and experience of peer reviewers: Is being a ‘good reviewer’ a persistent quality?. PLoS Med. 4, 595–601 (2007).
Callaham, M. L. & Tercier, J. The relationship of previous training and experience of journal peer reviewers to subsequent review quality. PLoS Med. 4, 1–9 (2007).
Moher, D. et al. An international survey and modified Delphi process revealed editors’ perceptions, training needs, and ratings of competency-related statements for the development of core competencies for scientific editors of biomedical journals. F1000Research. 6, 1–16 (2017).
Curtin, P. A., Russial, J. & Tefertiller, A. Reviewers’ perceptions of the peer review process in journalism and mass communication. J. Mass Commun. Q. 95, 278–299 (2018).
Raniga, S. B. Decline to review a manuscript: Insight and implications for AJR reviewers, authors, and editorial staff. Am. J. Roentgenol. 214, 723–726 (2020).
Acknowledgements
This work was supported by the Portuguese Foundation for Science and Technology (FCT), I.P., under project https://doi.org/10.54499/2022.13474.BD. Additional financial support was provided by UID/00027 - Artificial Intelligence and Computer Science Laboratory (LIACC), funded by national funds through FCT/MCTES (PIDDAC). The first author is supported by FCT under grant PD/BD/2022.13474.BD. We are also grateful for the valuable feedback provided by the two anonymous reviewers.
Funding
Fundação para a Ciência e a Tecnologia, grant PD/BD/2022.13474.BD; Artificial Intelligence and Computer Science Laboratory (LIACC), funded by national funds through FCT/MCTES (PIDDAC), UID/00027.
Author information
Contributions
AS: Conceptualization, Investigation, Methodology, Writing—original draft, Writing—Review & Editing. AO: Formal analysis, Writing—Review & Editing. AL: Visualization, Writing—Review & Editing. AR: Supervision, Writing—Review & Editing. LPR: Supervision, Writing—Review & Editing.
Ethics declarations
Competing interests
The authors declare no competing interests.
Ethical approval
As the study did not involve sensitive data, no ethical clearance was necessary.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.
About this article
Cite this article
Sizo, A., Oliveira, A., Lino, A. et al. Exploring reviewer self-assessment in the context of academic peer review. Sci Rep 15, 38604 (2025). https://doi.org/10.1038/s41598-025-22352-0