
Fig. 3: Comparing scientific outcomes between cross-council and within-council investigators.

From: Interdisciplinary researchers attain better long-term funding performance


a An illustrative example of cross-council (orange) and within-council (blue) principal investigators (PIs). Both PIs obtained three research grants during the observation window from 2006 to 2013, but the within-council PI received all three grants from the same research council (the Economic and Social Research Council, ESRC), whereas the cross-council PI received grants from two different councils (two from the Engineering and Physical Sciences Research Council, EPSRC, and one from the Biotechnology and Biological Sciences Research Council, BBSRC). b Matching of cross-council and within-council PIs with similar career profiles in terms of funding performance. We match PIs on five characteristics: the institutional ranking of a given PI (institutions are ranked by their total amount of funding between 2006 and 2018), the number of grants the PI has received, their average grant value, average team size, and average project duration. After matching, there is no statistically significant difference between the two groups of PIs across the five dimensions. The shaded areas represent 95% confidence intervals. c Differences in research outcomes between cross-council and within-council PIs in the average number of papers reported per project, the average number of total citations received per grant (the total citations received by the papers associated with a grant, averaged over a PI's grants), and the average number of citations received per paper per grant (first the average of the citations received by the papers associated with a grant, then averaged over the total number of grants awarded to a PI). Citations are counted within five years of publication and normalized by the average citations of all papers from the same year and discipline in the Microsoft Academic Graph dataset. All dimensions considered in panels b and c (except institutional ranking and number of grants) are quantified by their percentile rank within the same council and year. The significance levels shown refer to t-tests and Kruskal-Wallis tests. ***p < 0.01, **p < 0.05, *p < 0.1. Error bars represent the standard error of the mean.
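The two citation measures in panel c differ only in where the averaging happens: total citations per grant versus mean citations per paper within a grant, each then averaged across a PI's grants. The sketch below illustrates this distinction with pandas; the column names, the toy data, and the way the year/discipline baseline is computed are illustrative assumptions, not the paper's actual pipeline (the study normalizes against the Microsoft Academic Graph).

```python
import pandas as pd

# Toy data: one row per paper, with the grant it is attributed to, the PI holding
# that grant, and the paper's citation count within five years of publication.
# All names and values here are illustrative, not taken from the study.
papers = pd.DataFrame({
    "pi_id":        ["A", "A", "A", "B", "B", "B"],
    "grant_id":     ["g1", "g1", "g2", "g3", "g3", "g4"],
    "citations_5y": [12, 4, 7, 30, 2, 9],
    "year":         [2010, 2011, 2010, 2012, 2012, 2011],
    "field":        ["phys", "phys", "bio", "phys", "bio", "phys"],
})

# Normalize five-year citations by the mean citations of papers from the same
# year and field (a stand-in for the Microsoft Academic Graph baseline).
baseline = papers.groupby(["year", "field"])["citations_5y"].transform("mean")
papers["c_norm"] = papers["citations_5y"] / baseline

# Average number of papers reported per project:
# papers per grant, then averaged over a PI's grants.
papers_per_grant = papers.groupby(["pi_id", "grant_id"]).size()
avg_papers_per_grant = papers_per_grant.groupby(level="pi_id").mean()

# Average total citations per grant: sum of normalized citations within each
# grant, then averaged over a PI's grants.
total_per_grant = papers.groupby(["pi_id", "grant_id"])["c_norm"].sum()
avg_total_citations_per_grant = total_per_grant.groupby(level="pi_id").mean()

# Average citations per paper per grant: mean normalized citations within each
# grant, then averaged over a PI's grants.
mean_per_grant = papers.groupby(["pi_id", "grant_id"])["c_norm"].mean()
avg_citations_per_paper_per_grant = mean_per_grant.groupby(level="pi_id").mean()

print(avg_papers_per_grant)
print(avg_total_citations_per_grant)
print(avg_citations_per_paper_per_grant)
```

Group-level differences between cross-council and within-council PIs on these per-PI measures would then be assessed with the t-tests and Kruskal-Wallis tests mentioned in the caption (for example via scipy.stats.ttest_ind and scipy.stats.kruskal).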
