Abstract
The ability to accurately monitor the quality of one’s choices, or metacognition, improves under speed pressure, possibly due to changes in post-decisional evidence processing. Here, we investigate the neural processes that regulate decision-making and metacognition under speed pressure using time-resolved analyses of brain activity recorded using electroencephalography. Participants performed a motion discrimination task under short and long response deadlines and provided a metacognitive rating following each response. Behaviourally, with short deadlines participants were faster and less accurate, yet showed superior metacognition. These effects were accompanied by a larger centro-parietal positivity (CPP), a neural correlate of evidence accumulation. Crucially, post-decisional CPP amplitude was more strongly associated with participants’ metacognitive ratings following errors under short relative to long response deadlines. Our results suggest that superior metacognition under speed pressure may stem from enhanced metacognitive readout of post-decisional evidence.
Introduction
A remarkable feature of human cognition is the ability to monitor and control one’s own cognitive processes1,2. This ability, termed metacognition, allows individuals to adaptively guide their behaviour, even in the absence of explicit feedback3,4. For instance, in the domain of decision-making, low confidence in the accuracy of one’s choice might lead to a change of mind or a change in response strategy to avoid making further errors5,6,7,8. Importantly, such metacognitive behaviours generally correlate with objective performance (i.e., people show metacognitive sensitivity), such that confidence is low following incorrect choices9, and changes of mind tend to improve overall choice accuracy10. For example, in a previous study by Pescetelli et al.7, participants judged which of two briefly presented boxes contained more dots. On some trials, after making their initial judgement, participants could ask for advice from a “virtual advisor”. Participants were more likely to ask for advice following an error or if they had low confidence in their response, and were more likely to change their response to the correct option following advice. These results demonstrate the utility of metacognition for improving performance. However, past research has found that metacognitive sensitivity depends on contextual factors such as response speed, with participants showing greater sensitivity when required to respond quickly11,12,13,14,15. In the current study, we investigated whether improvements in metacognitive sensitivity under speed pressure are related to changes in how sensory evidence is processed, as revealed by time-resolved analysis of electroencephalographic (EEG) data.
Decision-making is thought to unfold via the gradual accumulation of evidence toward a decision threshold16. When enough evidence has accumulated to reach the threshold, a response is triggered (although see ref. 17). One prominent explanation for metacognitive ability is that decision-makers are sensitive to the amount of accumulated evidence for their choices and use this to guide their metacognitive judgements. Support for this explanation has come from several computational models of decision-making. For example, the balance-of-evidence hypothesis proposes that decision confidence is based on the difference between two racing accumulators reflecting competing choice alternatives18,19. The larger the difference between accumulators at the point that the winning accumulator crosses the decision threshold, the more confidence individuals should have in their choice. More recently, several studies have suggested that evidence accumulation can continue after the initial decision threshold has been crossed15,20,21,22,23. Post-decisional accumulation of evidence that is inconsistent with one’s original choice has been linked to a range of corrective metacognitive behaviours, including changes of mind21,24,25,26,27, error detections28, and response strategy adjustments8. Indeed, it has been proposed that the accuracy of one’s metacognitive judgements reflects the strength of post-decisional evidence accumulation relative to the strength of pre-decisional evidence accumulation29. That is, when post-decisional evidence accumulation is relatively high, metacognitive sensitivity increases.
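The balance-of-evidence account and its post-decisional extension can be illustrated with a toy race-model simulation. All parameters below are hypothetical and purely illustrative; they are not fitted to any data from this or prior studies.

```python
import numpy as np

def race_trial(drift=0.02, noise=0.15, bound=1.0, post_steps=100, rng=None):
    """Simulate one trial of a two-accumulator race model of choice and confidence.

    Two accumulators (one per choice alternative) integrate noisy evidence
    until one crosses the decision bound. Confidence at the decision is the
    balance of evidence, i.e., the signed difference in favour of the chosen
    option at bound crossing. Accumulation then continues for `post_steps`
    samples, yielding a post-decisional balance of evidence: on error trials
    this tends to turn negative, providing "error evidence".
    """
    rng = rng or np.random.default_rng()
    a = b = 0.0
    # Pre-decisional accumulation: race to the bound.
    while a < bound and b < bound:
        a += drift + noise * rng.standard_normal()    # evidence for the correct option
        b += -drift + noise * rng.standard_normal()   # evidence for the incorrect option
    chose_correct = a >= bound
    boe_decision = (a - b) if chose_correct else (b - a)
    # Post-decisional accumulation: evidence keeps arriving after commitment.
    for _ in range(post_steps):
        a += drift + noise * rng.standard_normal()
        b += -drift + noise * rng.standard_normal()
    boe_post = (a - b) if chose_correct else (b - a)
    return chose_correct, boe_decision, boe_post
```

Simulating many trials shows the key qualitative pattern: the post-decisional balance of evidence separates correct from error trials far more than the balance available at the moment of commitment, consistent with post-decisional evidence supporting error detection.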
Studies using EEG have provided support for the relationship between evidence accumulation and metacognitive judgements by tracking the neural signatures of evidence accumulation in the human brain22,28,30,31,32,33,34,35. The centroparietal positivity (CPP) is a positive deflection in the event-related potential (ERP) waveform which exhibits accumulate-to-bound dynamics consistent with a neural “decision variable”36,37. Prior to response execution, the CPP is positively correlated with confidence ratings, indicating that subjective feelings of certainty in part reflect the level of pre-decisional evidence accumulation30,31,32,33,34. We note here that the CPP closely resembles, and may be functionally equivalent to, the stimulus-locked P300 component37,38. After responding, a second positive ERP deflection is observed over centro-parietal electrodes which negatively correlates with confidence reports and predicts error detections22,28,35. This post-decisional component, known as the error positivity (Pe), is thought to reflect accumulation of “error evidence” (i.e., evidence for having made a mistake), and indicates that metacognitive judgements are also driven by post-decisional evidence processing39. While the CPP, P300, and Pe have been differentiated in the literature, their polarities and topographies overlap considerably (O’Connell et al.37; Ullsperger et al., 2010). As such, for simplicity, hereafter we refer to these components as the pre- and post-decisional CPP to emphasise their functional similarity as neural correlates of evidence accumulation27,28.
A common constraint on decision-making is time pressure. When we are required to respond quickly, our decisions are less accurate; a behavioural effect known as the speed-accuracy trade-off40. In addition to affecting accuracy, response speed also influences metacognitive judgements, such that metacognitive sensitivity is greater for faster responses29. One hypothesis for how speed pressure influences metacognitive sensitivity is that it alters the relative contribution of pre- and post-decisional evidence used to form metacognitive judgements. Early work by Baranski and Petrusic11 found that reaction times for metacognitive judgements were longer and varied according to the level of confidence expressed when participants emphasised response speed, but not accuracy, indicating elaborated post-decisional processing in the speeded condition (for similar results see ref. 15). Baranski and Petrusic11 proposed that when participants have more time to respond, confidence is determined in parallel with the unfolding decision, allowing within-trial strategy adjustments (i.e., more time is taken when individuals are uncertain). Conversely, when decisions are rushed, the computation of confidence is delayed until the post-decisional period. This shift toward post-decisional confidence computation results in more accurate error detection under speed stress due to extended processing of post-decisional information.
In addition to a shift toward post-decisional processing, another important explanation for superior metacognitive sensitivity under speed pressure relates to the type of errors made when decision-makers’ choices are rushed. Under speed pressure, errors are more likely to result from premature responding29,41. Such “motoric” errors can generate an internal conflict signal arising from the comparison of the enacted response and the intended response41,42, which can serve as error evidence signalling the need for behavioural adjustment43,44,45. Conversely, without speed pressure, errors are more likely to result from low quality sensory information29. Such “perceptual errors” are harder to identify than motoric errors because there is less informative post-decisional evidence stemming from either the external sensory input20 or internal sources. In a study by Steinhauser and Yeung35, participants performed a perceptual discrimination task in which they judged which of two squares was brighter under instructions to emphasise either response speed or accuracy. Following each response, participants could indicate whether they believed they had made an error. The authors found that post-decisional CPP amplitude following incorrect responses was larger under speed emphasis than under accuracy emphasis, in line with greater post-decisional evidence accumulation for speeded decisions. Moreover, at the single-trial level, the magnitude of the error signal predicted the occurrence of error reports, establishing a critical link with behaviour35.
Collectively, the literature reviewed here suggests that superior metacognitive sensitivity under speed pressure is driven by greater reliance on post-decisional evidence. In the current study, we aimed to test this assertion using time-resolved EEG analysis. We recruited participants to perform a motion discrimination task under short and long response deadlines. After each response, participants rated their desire to change their mind on a continuous scale (CoM scores), providing a measure of metacognitive sensitivity. As the CPP indexes neural evidence accumulation, the relationship between the CPP and metacognitive judgements can be taken as a measure of how strongly, and at what time points, accumulated evidence is read out in metacognitive behaviours. Thus, we used the CPP-CoM score relationship to examine whether speed pressure altered the metacognitive readout of accumulated evidence throughout the decision process. Additionally, we used the CPP to examine whether speed pressure increased post-decisional evidence accumulation following errors, possibly reflecting stronger evidence for having made a mistake. In line with past research, we expected that, first, participants would produce faster but more error-prone responses under speed pressure (i.e., with short response deadlines), but that their metacognitive sensitivity would improve. Second, if speed pressure increases reliance on post-decisional evidence for metacognitive judgements11, we expected to observe a stronger relationship between post-decisional CPP amplitude and CoM scores under short relative to long deadlines. Third, if speed pressure produces stronger post-decisional evidence for having made a mistake, we expected to observe greater post-decisional CPP amplitudes following errors, relative to correct responses, under short relative to long deadlines. 
In line with our first hypothesis, we observed a significant speed-accuracy trade-off and greater metacognitive sensitivity under short deadlines. Crucially, post-decisional CPP amplitude was a stronger predictor of CoM scores following errors under short relative to long deadlines, supporting our second hypothesis. However, the modulation of post-decisional CPP amplitude by response accuracy was comparable between deadline conditions, which was inconsistent with our third hypothesis. Our results thus indicate that speed pressure alters the metacognitive readout of post-decisional evidence, providing a plausible neural mechanism for metacognitive improvement.
Results
Participants (N = 43) completed a motion discrimination task in which they indicated whether a patch of dots moved toward the orange or blue side of an oriented Gabor (see Fig. 1) under bespoke short or long response deadlines (mean duration (SD): 0.69 s (0.13) and 1.15 s (0.14), respectively). We manipulated task difficulty by varying the motion coherence (high vs. low) and motion offset (large vs. small) from the Gabor’s orientation. These manipulations were not of primary interest, but their inclusion allowed us to confirm that the CPP we measured was a plausible correlate of neural evidence accumulation. After each response, participants rated their desire to change their mind on a continuous scale (CoM score) ranging from 0, “Not at all”, to 1, “Completely”, thus providing a measure of metacognitive ability.
a Illustration of the trial sequence during the main task. Trials started with a period of dot motion randomly drawn from a uniform distribution between 350–500 ms. Next, the dots moved coherently until participant response or the response deadline. Following the motion discrimination response, the dots stopped moving, became transparent (70% opacity) and a fixation cross appeared for 600 ms. Finally, the change-of-mind (CoM) display appeared for maximally 2000 ms, during which time participants rated their desire to change their mind about their motion discrimination response by moving the yellow marker along the continuous scale. b Task difficulty manipulations. During the coherent motion period, the coherence of the dot motion (i.e., the probability that the dots would move in the specified direction) and its angular offset from the criterion orientation were pseudo-randomly selected per trial to induce greater uncertainty (i.e., low coherence and/or small offset) or lesser uncertainty (i.e., high coherence and/or large offset). c Method for determining response deadlines during the main task. Participants completed a calibration task immediately prior to the main task to generate a per-participant response time distribution. The 25th and 90th percentiles of this distribution were then used as the short and long response deadlines during the coherent motion period of the main task, respectively. d Histogram of the short (aqua) and long (purple) response deadlines used during the main experimental task (N = 43).
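The calibration procedure in panel (c) amounts to taking fixed percentiles of a per-participant reaction-time distribution. A minimal sketch, with simulated reaction times standing in for a participant's calibration data:

```python
import numpy as np

def response_deadlines(calibration_rts, short_pct=25, long_pct=90):
    """Derive per-participant response deadlines from calibration reaction times.

    As described for Fig. 1c, the short and long deadlines are the 25th and
    90th percentiles of the participant's calibration-task reaction-time
    distribution, so the short deadline allows only the fastest quarter of
    typical responses to complete.
    """
    rts = np.asarray(calibration_rts, dtype=float)
    short = np.percentile(rts, short_pct)
    long_ = np.percentile(rts, long_pct)
    return short, long_

# Illustrative usage with simulated, right-skewed reaction times (seconds).
rng = np.random.default_rng(0)
sim_rts = rng.lognormal(mean=-0.5, sigma=0.3, size=500)
short_deadline, long_deadline = response_deadlines(sim_rts)
```

Because the deadlines are bespoke percentiles of each participant's own distribution, speed pressure is matched in relative terms across participants despite individual differences in baseline response speed.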
Motion discrimination judgements
We first confirmed that our task manipulations produced the expected effects on participants’ behavioural performance. Participants were more accurate on long relative to short deadline trials (M/SD (%) = 72.0/10.2 vs. 64.7/10.6; F(1, 42) = 82.02, p < 0.001, \({\eta }_{p}^{2}=0.67\)), high relative to low coherence trials (70.4/10.6 vs. 66.5/9.6; F(1, 42) = 101.90, p < 0.001, \({\eta }_{p}^{2}=0.71\)), and large relative to small offset trials (72.6/11.8 vs. 64.3/8.5; F(1, 42) = 145.90, p < 0.001, \({\eta }_{p}^{2}=0.78\)), indicating improved performance with longer deliberation time and reduced task difficulty (see Fig. 2a). There was also a significant Deadline by Offset interaction (F(1, 42) = 14.93, p < 0.001, \({\eta }_{p}^{2}=0.26\)). Follow-up tests revealed that the effect of Offset on task accuracy was larger on long deadline trials (Mdiff = 9.7, F(1, 42) = 227.0, p < 0.001, \({\eta }_{p}^{2}=0.84\)) than on short deadline trials (Mdiff = 6.8, F(1, 42) = 56.90, p < 0.001, \({\eta }_{p}^{2}=0.56\)).
Boxplots (solid lines) and individual participant data (semitransparent circles, N = 43) showing (a) accuracy and (b) reaction time for motion discrimination judgements. Upper and lower whiskers extend from the interquartile range to the maximum and minimal values of the data, respectively. Filled circles denote group means. Between-participant variability was removed for visualisation using previously described methods77.
Confirming the presence of a speed-accuracy trade-off, responses were significantly faster on short relative to long deadline trials (M/SD (s) = 0.493/0.106 vs. 0.673/0.127; F(1, 42) = 277.86, p < 0.001, \({\eta }_{p}^{2}=0.87\); see Fig. 2b). Responses were also significantly faster on high relative to low coherence trials (0.580/0.110 vs. 0.589/0.111; F(1, 42) = 40.18, p < 0.001, \({\eta }_{p}^{2}=0.49\)), and large relative to small offset trials (0.579/0.108 vs. 0.590/0.113; F(1, 42) = 34.16, p < 0.001, \({\eta }_{p}^{2}=0.45\)). These effects were qualified by a significant three-way Deadline by Coherence by Offset interaction (F(1, 42) = 7.23, p = 0.010, \({\eta }_{p}^{2}=0.15\)). Follow-up tests revealed that there was a significant Coherence by Offset interaction on long deadline trials (F(1, 42) = 8.79, p = 0.005, \({\eta }_{p}^{2}=0.17\)) but not on short deadline trials (F(1, 42) < 0.1, p = 0.992, \({\eta }_{p}^{2} < 0.01\)). Further follow-up tests revealed that, on long deadline trials, there was a significant effect of Coherence for large offset trials (F(1, 42) = 38.0, p < 0.001, \({\eta }_{p}^{2}=0.48\)), with participants responding significantly faster for high relative to low coherence trials. The effect of Coherence was also significant, but smaller in magnitude, for small offset trials (F(1, 42) = 5.06, p = 0.030, \({\eta }_{p}^{2}=0.11\)). On short deadline trials, the effects of Coherence and Offset were both significant, such that participants responded significantly faster for high relative to low coherence trials (M/SD = 0.490/0.106 vs. 0.496/0.106; F(1, 42) = 22.10, p < 0.001, \({\eta }_{p}^{2}=0.35\)), and for large relative to small offset trials (M/SD = 0.491/0.105 vs. 0.495/0.107; F(1, 42) = 8.52, p = 0.006, \({\eta }_{p}^{2}=0.17\)).
Metacognitive judgements
We next examined how the task manipulations impacted participants’ subjective evaluations of their decisions. In line with reduced accuracy, participants had higher CoM scores on short relative to long deadline trials (M/SD = 0.239/0.107 vs. 0.173/0.085; F(1, 42) = 45.27, p < 0.001, \({\eta }_{p}^{2}=0.52\)), low relative to high coherence trials (0.213/0.093 vs. 0.199/0.091; F(1, 42) = 15.12, p < 0.001, \({\eta }_{p}^{2}=0.27\)), and small relative to large offset trials (0.216/0.092 vs. 0.196/0.094; F(1, 42) = 11.33, p = 0.002, \({\eta }_{p}^{2}=0.21\); see Fig. 3a).
Boxplots (solid lines) and individual participant data (semitransparent circles, N = 43) showing (a) raw change-of-mind (CoM scores), (b) CoM difference scores (error—correct), and (c), CoM reaction time (time between onset of CoM stimulus and CoM response). Conventions as in Fig. 2.
When considering CoM scores, however, it is important to account for the accuracy of participants’ motion discrimination decisions. To do so, we subtracted average CoM scores on correct trials from those on error trials and repeated the above analysis. Larger difference scores (CoMdiff) thus reflect greater separation between correct and error responses, i.e., better metacognitive sensitivity. As hypothesised, CoMdiff scores were greater for short relative to long deadline trials (0.363/0.211 vs. 0.312/0.191; F(1, 42) = 11.21, p = 0.002, \({\eta }_{p}^{2}=0.21\); see Fig. 3b), indicating superior metacognitive sensitivity despite worse initial accuracy. Participants also had larger CoMdiff scores on high relative to low coherence trials (0.381/0.214 vs. 0.294/0.178; F(1, 42) = 113.78, p < 0.001, \({\eta }_{p}^{2}=0.73\)), and on large relative to small offset trials (0.432/0.238 vs. 0.243/0.156; F(1, 42) = 149.50, p < 0.001, \({\eta }_{p}^{2}=0.78\)). There was also a significant two-way interaction between Coherence and Offset (F(1, 42) = 6.78, p = 0.013, \({\eta }_{p}^{2}=0.14\)). Follow-up tests revealed that the effect of Coherence on CoMdiff scores was greater for large offset trials (Mdiff = 0.102; F(1, 42) = 95.2, p < 0.001, \({\eta }_{p}^{2}=0.69\)) than for small offset trials (Mdiff = 0.073; F(1, 42) = 60.0, p < 0.001, \({\eta }_{p}^{2}=0.59\)). To further investigate the superior metacognitive sensitivity under speed pressure, we ran an additional two-way (Deadline, initial accuracy) repeated-measures ANOVA on CoM scores. As there were no significant interactions between Deadline and Coherence or Offset conditions for either CoM scores or CoMdiff scores, we excluded Coherence and Offset from this analysis.
While CoM scores were higher on short deadline trials overall, this effect was greater following errors (F(1, 42) = 27.30, p < 0.001, \({\eta }_{p}^{2}=0.39\)) than correct responses (F(1, 42) = 14.10, p = 0.001, \({\eta }_{p}^{2}=0.25\)), indicating that the superior metacognitive sensitivity observed on short deadline trials was due to participants’ better ability to detect errors.
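The CoMdiff measure described above reduces to a simple error-minus-correct difference in mean ratings; a minimal sketch:

```python
import numpy as np

def com_diff(com_scores, correct):
    """Metacognitive sensitivity as the error-minus-correct CoM difference.

    As defined in the text, CoMdiff is the mean change-of-mind rating on
    error trials minus the mean rating on correct trials. Larger values
    indicate that ratings better separate errors from correct responses.
    """
    com = np.asarray(com_scores, dtype=float)
    correct = np.asarray(correct, dtype=bool)
    return com[~correct].mean() - com[correct].mean()
```

For example, for four trials with CoM scores [0.8, 0.7, 0.1, 0.2] of which the last two were correct, CoMdiff is 0.75 − 0.15 = 0.6, reflecting good separation of errors from correct responses.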
Finally, we assessed participants’ CoM reaction times, recorded as the time from the onset of the CoM stimulus to the CoM response (i.e., not including the 600 ms delay between response execution and onset of the CoM stimulus). We found that CoM responses were significantly faster on short relative to long deadline trials (M/SD (s) = 0.433/0.172 vs. 0.503/0.170; F(1, 42) = 27.83, p < 0.001, \({\eta }_{p}^{2}=0.40\); see Fig. 3c). CoM responses were also faster on high relative to low coherence trials (0.464/0.166 vs. 0.472/0.165; F(1, 42) = 11.08, p = 0.002, \({\eta }_{p}^{2}=0.21\)), and for large relative to small offset trials (0.458/0.164 vs. 0.479/0.168; F(1, 42) = 26.31, p < 0.001, \({\eta }_{p}^{2}=0.38\)).
Electroencephalography
Having established that the Deadline manipulation produced a robust speed-accuracy trade-off and associated changes in metacognitive sensitivity, we next explored its impact on the neural correlates of evidence accumulation. We analysed ERPs at a cluster of centro-parietal electrodes (i.e., Cz, CPz, Pz, CP1, CP2) to be consistent with previous research37,46,47. Interestingly, the shape of the CPP waveform we observed differed somewhat from previous studies in that neither the stimulus- nor the response-locked waveforms peaked at the response time. Rather, the stimulus-locked waveform peaked at ~300 ms post-stimulus onset before returning to baseline by the mean response time (see Fig. 4a). In the response-locked epoch, the CPP waveform peaked at ~100 ms post-response, followed by a second, smaller positive deflection at ~300 ms post-response, and a final positive deflection rising from ~400 ms post-response until the end of the epoch. These differences likely reflect specific aspects of our task design (see Discussion).
a Grand average CPP waveform. Purple circles in the topographic map at stimulus-onset denote the electrode cluster used in the ERP analysis (i.e., Cz, CPz, Pz, CP1, CP2). The dashed vertical line in the stimulus-locked epoch denotes grand average response time. b Parameter estimates for the effect of Coherence on CPP amplitude. Negative values indicate reduced CPP amplitude on low relative to high coherence trials. Dashed vertical lines denote mean response times (low coherence: orange, high coherence: black). c Parameter estimates for the effect of Offset on CPP amplitude. Negative values indicate reduced CPP amplitude on small relative to large offset trials. Dashed vertical lines denote mean response times (small offset: indigo, large offset: green). d Parameter estimates for the effect of Deadline on CPP amplitude. Positive values indicate greater CPP amplitude on short relative to long deadline trials. Dashed vertical lines denote mean response times (short deadline: cyan, long deadline: purple). In all panels, black horizontal bars denote periods of statistical significance (p < 0.05, N = 40). All statistical tests were FDR corrected for multiple comparisons. Error bands denote standard error. Data were smoothed using a Gaussian window (SD = 16 ms) for visualisation only.
Our first ERP analysis confirmed the CPP waveform we recorded was responsive to the task difficulty manipulations, as would be expected from a neural correlate of decision-making. We observed a significant effect of Coherence, with lower amplitude on low relative to high coherence trials from ~300–400 ms post-stimulus onset and from ~−200 to +50 ms relative to the response (Fig. 4b). There was also an effect of Offset, with significantly lower CPP amplitude on small relative to large offset trials from ~420–480 ms post-stimulus onset, and from ~−300 to +300 ms relative to the response (Fig. 4c). Moreover, the CPP was significantly modulated by speed pressure, with greater amplitude on short relative to long deadline trials from ~450–600 ms post-stimulus onset and throughout most of the response-locked epoch (Fig. 4d). This latter result suggests that speed pressure exerts changes on both pre- and post-decisional evidence processing, which may have contributed to the differences in metacognitive performance between deadline conditions (see “Discussion”). There were no periods in either the stimulus- or response-locked epochs in which Coherence, Offset, or Deadline interacted with one another.
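Time-resolved parameter estimates of this kind are commonly obtained by regressing single-trial amplitude on a trial-wise predictor at every time point, then correcting the resulting p-values across time points for false discovery rate. The sketch below illustrates that general logic with an ordinary least-squares regression and a Benjamini-Hochberg correction; it approximates, rather than reproduces, the study's exact analysis pipeline.

```python
import numpy as np
from scipy import stats

def timepoint_regression(erp, predictor, alpha=0.05):
    """Mass-univariate regression of single-trial ERP amplitude on a predictor.

    `erp` is a (trials x timepoints) array. At each time point, amplitude is
    regressed on the trial-wise predictor (e.g. a condition code), and the
    p-values are corrected across time points with the Benjamini-Hochberg
    FDR procedure. Returns the per-timepoint slopes and a boolean mask of
    time points surviving correction.
    """
    n_time = erp.shape[1]
    betas = np.empty(n_time)
    pvals = np.empty(n_time)
    for t in range(n_time):
        res = stats.linregress(predictor, erp[:, t])
        betas[t] = res.slope
        pvals[t] = res.pvalue
    # Benjamini-Hochberg step-up procedure across time points.
    order = np.argsort(pvals)
    q = pvals[order] * n_time / np.arange(1, n_time + 1)
    q = np.minimum.accumulate(q[::-1])[::-1]  # enforce monotone q-values
    sig = np.empty(n_time, dtype=bool)
    sig[order] = q <= alpha
    return betas, sig
```

Runs of contiguous significant time points produced by such an analysis correspond to the black significance bars shown in Figs. 4 and 5.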
We next investigated the extent to which the CPP differentiated between correct and incorrect responses under different response deadlines. If decision-makers are sensitive to the amount of accumulated evidence for their choices and use this information to guide their metacognitive judgements, it follows that superior metacognitive sensitivity may stem from greater discriminability of response accuracy by CPP amplitude under speed pressure.
In the stimulus-locked epoch, response errors were associated with significantly lower CPP amplitude relative to correct responses for both response deadlines (Fig. 5a, left panel). This effect emerged consistently on short deadline trials from ~350–600 ms post-stimulus onset and from ~400–700 ms post-stimulus onset on long deadline trials. There were also several shorter periods of statistical significance on long deadline trials at earlier time points. In the response-locked epoch, response errors were again associated with significantly reduced CPP amplitude for both deadline conditions. This effect emerged most strongly for several hundred milliseconds around the time of the response (Fig. 5a, right panel). However, for several periods from ~350 ms post-response, error trials were associated with significantly greater CPP amplitude on long deadline trials. There was also a brief period at ~400 ms post-response in which error trials were associated with greater CPP amplitude on short deadline trials. Finally, there was a brief period in which the interaction between response accuracy and response deadline was significant at ~170 ms post-response. At this time, the magnitude of the association between response errors and CPP amplitude was significantly more negative on short-deadline trials than on long-deadline trials.
a Parameter estimate for the effect of error commission on CPP amplitude at each level of response deadline. Negative values indicate reduced CPP amplitude on error trials relative to correct trials. b Parameter estimates for the relationship between CPP amplitude and CoM scores at each level of response deadline and response accuracy. Negative values indicate that greater CPP amplitude is associated with lower CoM scores, and positive values indicate that greater CPP amplitude is associated with higher CoM scores. In all panels, dashed vertical lines denote mean response time for the colour-matched condition, and solid horizontal bars denote periods of statistical significance (p < 0.05) for the colour matched condition (N = 40). Grey shading indicates periods with significant interaction effects (i.e., significant differences in parameter estimates between adjoining conditions). Error bands denote standard error. All statistical tests were FDR corrected for multiple comparisons. Data were smoothed using a Gaussian window (SD = 16 ms) for visualisation only.
These results demonstrate that CPP amplitude differentiated between correct and error responses. Importantly, however, the magnitude and time course of this effect were comparable under short and long response deadlines. This suggests that superior metacognitive sensitivity under speed pressure may not stem from greater discriminability of response accuracy by CPP amplitude.
Having observed that CPP amplitude differentiated response accuracy to a similar extent under both response deadlines, our final analysis examined whether there were differences in how the CPP contributed to participants’ metacognitive judgements under speed pressure. Specifically, we examined the association between pre- and post-decisional CPP amplitude and CoM scores at each level of response deadline and response accuracy. If speed pressure increases reliance on post-decisional evidence for metacognitive judgements, it follows that the relationship between CPP amplitude and CoM scores might be stronger under short relative to long deadlines.
In the stimulus-locked epoch, we found a significant negative relationship between CoM scores and CPP amplitude for long deadline, correct trials from ~500–700 ms post-stimulus onset (Fig. 5b, left panel). That is, greater pre-decisional CPP amplitude was associated with lower CoM scores. On short deadline trials, the relationship was also significantly negative for correct responses from ~350–630 ms post-stimulus onset. For incorrect responses, there was a significant positive relationship on long deadline trials from ~480–520 ms post-stimulus onset, indicating that greater CPP amplitude was associated with higher CoM scores. The relationship between CPP amplitude and CoM scores on error trials was not significant under short deadlines.
In the response-locked epoch, there were again significant negative relationships between CoM scores and CPP amplitude on correct trials for both deadline conditions around the time of the response (Fig. 5b, right panel). These relationships returned to baseline by ~180 ms post-response and remained non-significant for the rest of the epoch. On error trials, however, there were strong positive relationships between post-decisional CPP amplitude and CoM scores for both deadline conditions from ~150 ms post-response. Importantly, there was a period of significant interaction from ~420–460 ms post-response in which this relationship was stronger on short relative to long deadline trials.
Together, these results reveal that greater CPP amplitude was associated with less desire to change one’s mind on correct trials, and more desire to change one’s mind on error trials under both deadlines. Crucially, however, the positive relationship between post-decisional CPP amplitude and CoM scores on error trials was stronger under speed pressure. This result suggests that post-decisional evidence accumulation is related to more corrective metacognitive behaviour under speed pressure.
Discussion
We investigated the neural mechanisms of metacognitive improvement under speed pressure. To do so, we had participants perform a visual motion discrimination task under short and long response deadlines. Following each response, participants rated their desire to change their minds, providing a measure of metacognitive ability. We recorded brain activity using EEG, allowing us to assess the relationship between a neural correlate of evidence accumulation (i.e., the CPP) and participants’ metacognitive judgements. On short relative to long deadline trials, CPP amplitude was higher overall and was more predictive of subsequent metacognitive judgements in the post-decisional period. These results suggest that superior metacognition under speed pressure may reflect changes in evidence processing and enhanced readout of post-decisional evidence.
When considering participants’ motion discrimination responses, the deadline manipulation successfully induced a speed-accuracy trade-off, such that participants responded significantly faster but less accurately under short deadlines. Response deadline also interacted with the task difficulty manipulations to impact motion discrimination performance, such that the effects of Coherence and Offset tended to be greater on long deadline trials. This finding suggests that the effects of other manipulations are attenuated under speed pressure, as participants are forced to respond quickly irrespective of their level of readiness. When considering CoM judgements, participants’ desire to change their minds increased under speed pressure and when task difficulty increased, in accordance with past studies20,21,25,48. Intuitively, when decisions are rushed or the task is more difficult, participants make more errors and in turn attempt to correct those errors.
Importantly, analysis of CoMdiff scores revealed that participants were better able to judge the quality of their perceptual decisions under speed pressure, replicating previous reports11,12,13,14,15. This finding is notable because response accuracy is typically positively correlated with metacognitive sensitivity9. Indeed, within each response deadline, CoMdiff scores were larger for high coherence and large offset trials, reflecting improved metacognitive sensitivity with better motion discrimination performance. We found that increased metacognitive sensitivity under speed pressure was primarily driven by higher CoM scores on error trials. This finding replicates the work of Baranski and Petrusic11,13, who reported that increased sensitivity was due to superior calibration of low-confidence responses (i.e., under speed stress, participants’ low-confidence ratings more often reflected true response errors). Collectively, these results fit with the notion that errors made under speed should be easier to detect because they are more likely to result from premature responding, leading to superior metacognitive sensitivity overall29,41.
Interestingly, however, our results diverged from previous studies in relation to metacognitive response times11,14,15. We found that metacognitive sensitivity improved under speed pressure despite significantly faster CoM responses. This finding is contrary to previous theoretical explanations which proposed that speed pressure delays metacognitive processing until the post-decisional period, consequently prolonging metacognitive response times11,15. Our results instead suggest that extended processing time is not necessary for improved metacognition under speed pressure. Indeed, in one previous study14 the authors found that, while participants had longer metacognitive response times under speed pressure at the group level, this effect was only significant for a third of individual participants. Our results thus extend past work by finding significantly faster metacognitive response times under speed pressure even at the group level. This effect may relate to the fact that our task involved completing groupings of four short or long deadline blocks in a row, rather than alternating speed and accuracy blocks as used previously11,14,15. Participants in our study may therefore have entered into a speeded “task set”49 during short deadline blocks, leading to faster CoM responses as well as faster motion discrimination responses.
Following on from the behavioural results, we investigated the time-course of evidence accumulation as reflected by the CPP. Notably, despite selecting a cluster of centro-parietal electrodes which aligned with past studies22,30,31,38,50,51, we observed stimulus- and response-locked waveforms that did not align with mean response times, contrary to the expected profile of a neural decision variable. This finding may relate to distinct aspects of our task design, which we discuss below.
The stimulus-locked waveform and topography we observed are consistent with the P3a subcomponent, which is thought to reflect stimulus-driven attentional mechanisms52,53,54. In our task, this stimulus-locked component may reflect initial stimulus processing associated with the transition from random motion to coherent motion. The stimulus-locked component was followed by left-lateralised activity in the response-locked epoch, which is consistent with the motor demands of the task as participants exclusively responded using their right hand. The lateralised topography likely reflects the output of the evolving evidence accumulation process being fed into motor areas to guide motor preparation and execution55. Indeed, a number of recent computational models of decision-making have emphasised evidence accumulation and motor processing as continuous and overlapping, rather than serial stages as suggested by traditional models56,57,58. Importantly, when decisions are predictably mapped onto actions, as in the current study, activity in motor areas can also be taken to index evolving decisions58,59. The lateralised activity may also explain why the peak of the response-locked waveform, recorded at midline electrodes, occurred ~100 ms after the response itself (i.e., due to the time taken for the current to spread). Following the initial response-locked peak, we observed a smaller peak at ~300 ms, which may reflect the changing stimulus display. Finally, the positivity at the end of the response-locked epoch likely reflects a post-decisional phase of evidence accumulation leading to participants’ metacognitive judgements22,28,35. Interestingly, this final peak was not lateralised, but maximal over midline electrodes traditionally associated with the CPP. This finding likely reflects the fact that the post-decisional phase of evidence accumulation was separated temporally from the CoM response (i.e., as the CoM stimulus had not yet occurred). 
Thus, while the CPP waveform we observed differed somewhat from past studies, it plausibly captured pre- and post-decisional evidence accumulation, making it an appropriate measure for our analyses.
When considering the effect of response deadline on the CPP, we found that speed pressure led to a general increase in CPP amplitude across stimulus- and response-locked epochs. One explanation for this finding is that the CPP indexes both evidence accumulation and an evidence-independent urgency signal60,61. Urgency signals are proposed to dynamically reduce the level of evidence needed before a decision is reached over time, and manifest as elevated neural activity62. Importantly, a past modelling study63 found that the way speed pressure is implemented affects whether or not an urgency signal is applied during evidence accumulation. When response deadlines are used, as in the current study, behavioural outcomes are best explained by an urgency signal, whereas instructional cues (i.e., asking participants to emphasise either response speed or accuracy) influence the overall threshold level63. While few past studies have explored the consequences of speed pressure on CPP dynamics, one previous study60 investigated the CPP in participants categorised as either “early responders” or “late responders”. The authors found that the CPP signal was better fit by a computational model that included an urgency signal than one without, and that early responders had a significantly greater urgency signal than late responders. Thus, the greater CPP amplitude we observed for short relative to long deadline trials plausibly reflects the contribution of an urgency signal acting to ensure responses were made within the shorter deadline. Interestingly, although such an urgency signal should only relate to pre-decisional evidence accumulation (i.e., because there was no time pressure manipulation for CoM responses), we saw an effect of Deadline on CPP amplitude throughout the response-locked epoch.
As such, post-decisional evidence accumulation also appears to be modulated by initial response speed, which may have implications for how post-decisional evidence is read out in metacognitive judgements, as we discuss below.
In addition to the effect of response deadline, CPP amplitude differentiated responses according to objective accuracy. Specifically, we found that pre-decisional CPP amplitude was larger on correct trials, in line with greater evidence accumulation leading to improved behavioural performance30,36. After the response, CPP amplitude was instead greater on error trials, consistent with the idea that post-decisional evidence plays a qualitatively distinct role in decision-making and performance monitoring by providing evidence for having made a mistake35,39,64,65. Most notably, the effect of response accuracy on CPP amplitude was largely consistent across response deadlines for the pre-decisional period and was more pronounced on long deadline trials for the post-decisional period. This result is contrary to our hypothesis that we would see greater post-decisional CPP amplitude following errors under short relative to long deadlines. While a past study35 found that post-decisional CPP amplitude was larger following incorrect responses under speed emphasis, others have suggested that error signals should be greater when a decision-maker is prioritising response accuracy because errors under such conditions are more motivationally salient66,67. In line with this latter interpretation, our results may reflect greater salience of errors under long deadlines. Thus, although participants had superior error detection under speed pressure, this effect does not appear to have been driven by greater discriminability of response errors in neural signatures of evidence accumulation.
When considering the relationship between CPP amplitude and CoM scores, we found that, across both deadlines, greater pre-decisional CPP amplitude was associated with lower CoM scores on correct trials and greater post-decisional CPP amplitude was associated with higher CoM scores on error trials. Overall, this pattern is consistent with past studies22,27,31,33,34, and with the notion that decision-makers monitor internal signals of uncertainty (e.g., the strength of evidence accumulation) to adaptively guide their future behaviour7. Critically, however, we also observed differences in the relationship between the CPP and CoM scores between deadline conditions. We found that post-decisional CPP amplitude was a significantly stronger predictor of CoM scores on short-deadline error trials. This finding supports our hypothesis and suggests that speed pressure does increase the contribution of post-decisional evidence to metacognitive judgements11. Importantly, this finding provides a potential neural mechanism underlying superior metacognitive sensitivity under speed pressure via enhanced metacognitive readout of post-decisional evidence. That is, our results suggest that a given amount of accumulated post-decisional evidence is read out as a greater desire to change one’s mind following errors under speed pressure, leading to superior error detection and thus superior metacognitive sensitivity. It is also worth noting that the relationship between CPP amplitude and CoM scores operates in conjunction with overall greater CPP amplitude under speed pressure. Thus, across all trial types, CPP amplitude may be read out as more extreme CoM scores under speed pressure (i.e., for a given strength of association, a larger predictor value results in a larger outcome value). This explanation fits with our finding that CoM scores were higher overall under speed pressure, even on correct trials.
While a number of studies have now reported an inverse relationship between post-decisional evidence accumulation and confidence22,27,28,35,47,68, this relationship has not always been observed34,69. For example, Voodla and Uusberg69 found no relationship between the post-decisional CPP and confidence in a decision-making task involving mental arithmetic. The authors concluded that performance monitoring in more complex tasks may rely on distinct neurocognitive mechanisms. In particular, for challenging decisions involving higher-order reasoning, metacognitive judgements may rely more on heuristics and external cues than accumulated evidence for having made a mistake, as decision-makers are less likely to have a representation of the correct response64,69,70. By contrast, for simple perceptual decisions, decision-makers often have access to additional information, such as newly arriving, contradictory sensory evidence20 or internal signals of response conflict41,42,65, which can provide informative post-decisional evidence. As such, our work aligns with a number of past studies investigating perceptual decision-making, but may not generalise across all decision contexts. The investigation of neural correlates of performance monitoring beyond simple perceptual decisions is thus an important area for future research.
In conclusion, in this study we investigated the neural mechanisms of metacognitive improvement under speed pressure. Our results provided mixed support for previous theoretical explanations. Namely, we found that while metacognitive judgements do not rely exclusively on post-decisional evidence under speed pressure, speed pressure does enhance the metacognitive readout of post-decisional evidence following errors11. Notably, this effect emerged in the absence of prolonged metacognitive response times, suggesting that extended post-decisional processing is not always necessary for metacognitive improvement11,14,15. Finally, response errors were not more discriminable by CPP amplitude under speed pressure, suggesting that metacognitive improvement is specifically related to differences in how evidence is read out in behaviour following correct and incorrect responses, rather than to differences in evidence accumulation itself. Collectively, our results provide new insights into neurocognitive processes underlying self-corrective behaviours.
Methods
Participants
Forty-four participants (31 females; mean age = 22.7 ± 3.82 years) were recruited from The University of Queensland’s online research portal in exchange for reimbursement ($20 AUD/h). One participant was excluded for noncompliance with task instructions, and an additional three EEG data sets were removed due to technical issues. Final sample sizes were thus 43 for behavioural analyses and 40 for analyses involving EEG. While no formal power analysis was conducted, our sample size is comparable with, or greater than, those of similar studies reported in recent literature (e.g., refs. 28,31). Participants self-reported normal or corrected-to-normal vision and no acute psychiatric or neurological illnesses. All participants provided written informed consent prior to participation, and the study was approved by The University of Queensland Human Research Ethics Committee. All ethical regulations relevant to human research participants were followed.
Apparatus
Data were collected in a dark, acoustically and electrically shielded room. Participants were seated at a viewing distance of 56 cm with their head stabilised via a chinrest. The task was custom coded using the PsychoPy toolbox in Python71 and displayed on a 22.5” VIEWPixx monitor (resolution: 1920 × 1080 pixels; refresh rate: 100 Hz). EEG data were collected using a BioSemi ActiveTwo system with 64 Ag-AgCl electrodes arranged according to the 10–20 system. The sampling frequency was 1024 Hz. Electro-oculographic data were collected from four external electrodes placed above and below the left eye and on the outer canthi. An EyeLink 1000 eye tracker (SR Research) was used to record pupil diameter and eye movements.
Task design and procedure
Participants completed a 2AFC motion discrimination task. On each trial, participants indicated whether a patch of dots moved toward the orange or blue side of a Gabor patch at the centre of the screen (see Fig. 1a). The Gabor patch was divided in the middle into two blue- and orange-coloured halves. Participants indicated their response by pressing the left and right mouse buttons using their right hand, and the response key-colour mapping was counterbalanced across participants. All participants indicated that their right hand was preferred when operating a computer mouse. The dot patch comprised 320 independently moving dots (diameter: 0.1° of visual angle (dva); dot life: 0.2 s) within a circular annulus (15.6 dva outer diameter, 3.12 dva inner diameter). The Gabor (1.2 dva in diameter) was positioned at the centre of the annulus. Each trial began with a period of random dot motion (i.e., K = 0; dot speed = 5 dva/s) lasting between 350 and 500 ms, with durations randomly sampled from a uniform distribution. This was followed by a period of coherent motion (i.e., K = 0.44/0.66; dot speed = 6.5 dva/s) during which participants were required to respond.
To manipulate response speed, participants completed the task under short and long response deadlines (see Fig. 1c). Short and long response deadlines were set according to the 25th and 90th percentile, respectively, of each participant’s reaction time distribution gathered from a calibration session completed immediately prior to the main experimental session. During the calibration session, participants completed four blocks of the motion discrimination task with a response deadline of 1500 ms. Each block lasted until participants recorded 48 correct trials. If participants made an error (M/SD per participant per block = 33/39) or failed to respond (M/SD = 7/7), the trial was repeated at the end of the block. During the main experimental session, the long and short deadlines were implemented using a counterbalanced ABAB design, with each run of the Deadline condition lasting four blocks (16 blocks total). Each block comprised 80 trials (1280 trials in total). At the beginning of the short deadline blocks, participants were instructed: “In this section, you will have a shorter response deadline. You will need to respond quickly to ensure you do not miss the deadline.” At the beginning of the long deadline blocks, participants were instructed: “In this section, you will have a longer response deadline. Use this extra time to be as accurate as possible.”
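The per-participant deadline calibration can be sketched as follows. The linear-interpolation percentile rule is an assumption for illustration, as the exact estimator used is not specified in the text.

```python
import math

def percentile(values, p):
    """Return the p-th percentile of `values` using linear interpolation
    between order statistics (a common default rule; an assumption here)."""
    xs = sorted(values)
    idx = (p / 100) * (len(xs) - 1)
    lo, hi = math.floor(idx), math.ceil(idx)
    frac = idx - lo
    return xs[lo] + frac * (xs[hi] - xs[lo])

def deadlines_from_calibration(rts):
    """Short and long deadlines: the 25th and 90th percentiles of the
    calibration-session reaction time distribution."""
    return percentile(rts, 25), percentile(rts, 90)

calibration_rts = [0.4, 0.5, 0.6, 0.7, 0.8]   # toy RTs in seconds
short, long_ = deadlines_from_calibration(calibration_rts)
```

With these toy reaction times, the short deadline falls at 0.5 s and the long deadline at 0.76 s; in the experiment, the same rule was applied to each participant's own calibration data.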
In keeping with previous studies of motion discrimination [e.g., refs. 20,30,72] and work from our own group27, we also manipulated task difficulty by varying motion coherence and motion offset from the Gabor’s orientation (i.e., the criterion; see Fig. 1b). While these manipulations were not of primary interest, their inclusion allowed us to confirm that the CPP we measured was a plausible correlate of neural evidence accumulation. The coherence and offset manipulations were implemented using the K and µ parameters, respectively, of a von Mises probability distribution. That is, the motion signals were generated by independently updating the position of each dot on each frame by randomly sampling a displacement angle from a von Mises probability distribution defined by these parameters. The motion coherence was pseudo-randomly chosen per trial to be either high (K = 0.66) or low (K = 0.44), and the offset value was pseudo-randomly chosen per trial to be either large (µ = criterion orientation ±30°) or small (µ = criterion orientation ±15°). The criterion orientation was pseudo-randomly selected per block of trials from one of four values (i.e., 0/180°, 45/225°, 90/270°, or 135/315°). Thus, in a given block, there were eight possible motion directions (e.g., for a criterion of 90/270°, motion directions could be 60°, 75°, 105°, 120°, 240°, 255°, 285°, or 300°). This ensured that, over the course of the experiment, each possible motion direction was counterbalanced at both levels of motion coherence and angular offset. For example, a motion direction of 60° represents an offset of −30° in blocks with a criterion of 90/270°, but an offset of +15° in blocks with a criterion of 45/225°, thereby dissociating motion signals from expected responses and task difficulty.
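The dot-update rule described above can be sketched as follows, using the 100 Hz frame rate and coherent-motion dot speed from the Methods. The starting positions and the absence of wrap-around or dot-life handling are simplifications.

```python
import math
import random

FRAME_RATE = 100   # Hz (monitor refresh rate)
DOT_SPEED = 6.5    # dva per second during coherent motion

def update_dots(dots, mu, kappa):
    """Displace each (x, y) dot by one frame of motion. Each dot's
    direction is sampled independently from a von Mises distribution
    centred on `mu` (radians) with concentration `kappa`; kappa = 0
    yields uniformly random directions, i.e. zero coherence."""
    step = DOT_SPEED / FRAME_RATE
    new_dots = []
    for x, y in dots:
        theta = random.vonmisesvariate(mu, kappa)
        new_dots.append((x + step * math.cos(theta),
                         y + step * math.sin(theta)))
    return new_dots

random.seed(1)
dots = [(0.0, 0.0)] * 320  # 320 dots, as in the task
dots = update_dots(dots, mu=math.radians(60), kappa=0.66)  # low coherence
```

Because the mean direction µ and concentration K are independent parameters, coherence (K) and angular offset (µ relative to the criterion) can be manipulated orthogonally, exactly as in the design described above.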
After each response in the main experimental session, the dots stopped moving, became partially transparent (70% opacity), and a fixation cross appeared at the Gabor’s location for 600 ms. The fixation cross was followed by a change-of-mind (CoM) stimulus. The CoM stimulus consisted of a Likert-style scale with 10 white, vertical notches arranged horizontally either side of a “?” at the centre of the display. Using the computer mouse, participants were instructed to move a yellow disk along the scale and left-click to indicate how much they would like to change their mind about their previous motion discrimination response, with possible responses ranging from 0, “Not at all” (leftmost position), to 1, “Completely” (rightmost position). The yellow disk always started at the middle of the scale (i.e., 0.5). Participants were instructed: “If you would like to change your mind, move the yellow circle with the mouse to the right side of the scale. If you would not like to change your mind, move the yellow circle with the mouse to the left side of the scale. You may move the yellow circle to any position on the scale. The further you move it, the stronger your conviction about whether to change or keep your initial response.” The CoM stimulus remained on screen for a maximum of 2 s, after which the next trial began. To incentivise accurate metacognitive judgements, participants could earn up to an additional $10 AUD based on their CoM responses over the course of the experiment. Points increased linearly as the CoM response moved from one end of the scale to the other, with the direction of reward determined by participants’ initial response accuracy. For example, following an error, points increased from left (i.e., no change of mind) to right (i.e., a change of mind), reflecting increasing metacognitive sensitivity. Following a correct response, points increased from right to left.
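The incentive scheme can be sketched as a linear payoff over the CoM scale. The 10-points-per-trial scaling below is a hypothetical illustration; the text specifies only the linearity and the direction of reward.

```python
def trial_points(com, correct, max_points=10.0):
    """Linear payoff over the CoM scale (0 = keep response, 1 = change).
    After an error, points grow toward the 'change' end of the scale;
    after a correct response, toward the 'keep' end. `max_points` is a
    hypothetical per-trial maximum for illustration."""
    if correct:
        return (1.0 - com) * max_points
    return com * max_points
```

Under this mapping, confidently keeping a correct response and confidently changing an erroneous one are both maximally rewarded, so earnings track metacognitive sensitivity rather than raw accuracy.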
If participants failed to respond before the deadline, an exclamation mark (“!”) appeared at the display centre for 1000 ms and the missed trial was repeated at the end of the block (M/SD per participant per block = 11/9). Following each block, participants were provided with feedback about their average accuracy, reaction time, and points accumulated during that block. Prior to the calibration session, participants completed eight blocks of 20 practice trials (160 trials in total) to familiarise themselves with the task. The task was a 2 (long vs short deadline) × 2 (low vs high motion coherence) × 2 (small vs large angular offset) fully within-subjects design.
Statistics and reproducibility
Trials with missing motion discrimination responses (11% of all trials) or reaction times of less than 0.15 s (1.4% of responses) were first removed from further analysis. Trials with missing CoM responses were removed from analyses involving CoM scores (0.5% of trials with responses > 0.15 s). Behavioural dependent variables were analysed using three-way (Deadline, Coherence, and Offset) repeated-measures ANOVAs. Partial eta squared \((\eta_p^2)\) is reported as a measure of effect size. In addition to analysing raw CoM scores, we also analysed CoM difference scores (CoMdiff), computed by subtracting average CoM values for correct trials from those on error trials. Larger difference scores reflect greater metacognitive sensitivity, as participants are better able to discriminate between error and correct responses. Significant interactions were followed up with pairwise tests adjusted using the false discovery rate (FDR) correction for multiple comparisons73. For completeness, we repeated the above analyses with the addition of Criterion Orientation (cardinal vs. oblique) as a fourth independent variable. Inclusion of Criterion Orientation did not produce any qualitative changes in the results, and so these results are not reported in the text.
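The CoMdiff computation can be sketched per participant as follows:

```python
def com_diff(trials):
    """Metacognitive sensitivity for one participant: mean CoM score on
    error trials minus mean CoM score on correct trials. `trials` is a
    list of (com_score, correct) pairs for one condition."""
    errors = [c for c, ok in trials if not ok]
    corrects = [c for c, ok in trials if ok]
    return sum(errors) / len(errors) - sum(corrects) / len(corrects)

# Toy data: higher desire to change one's mind on errors than on corrects
toy = [(0.8, False), (0.6, False), (0.2, True), (0.4, True)]
sensitivity = com_diff(toy)
```

A participant whose CoM scores do not discriminate errors from correct responses would score near zero, whereas the toy participant above scores 0.4, reflecting good metacognitive sensitivity.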
Pre-processing of EEG data was undertaken using the MNE toolbox in Python74. Data were first re-referenced offline to the average reference, before being band-pass (0.1–99 Hz) and notch (50 Hz) filtered. The FASTER algorithm75 was used for automated artifact rejection. Stimulus-locked epochs were extracted from the continuous data using a time window of −100 to 2100 ms relative to the onset of coherent motion. The epochs were then baseline corrected (−100 to 0 ms pre-coherent motion), linearly detrended, and down-sampled to 256 Hz. Bad epochs were removed using the FASTER algorithm (M/SD = 66/23 across participants, range: 32–120). To generate response-locked epochs which shared the pre-stimulus baseline correction, stimulus-locked data were aligned to response onset and epochs were extracted using a time window of −300 to 600 ms relative to the response. Given the difference in mean response times between the long and short deadline conditions, stimulus-locked analyses only included data up to response onset on each trial. This ensured that comparisons could be made between deadline conditions without the influence of post-decisional changes in the EEG signal, albeit at the expense of greater variability in the parameter estimates for short-deadline trials towards the end of the stimulus-locked epoch.
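The realignment of baseline-corrected stimulus-locked epochs to the response can be sketched as follows, at the 256 Hz post-down-sampling rate. The rounding convention for converting times to sample indices is an assumption for illustration.

```python
FS = 256  # Hz, sampling rate after down-sampling

def response_locked(epoch, rt, pre=0.300, post=0.600):
    """Cut a response-locked window (-300 to +600 ms around the response)
    out of a stimulus-locked epoch, so the pre-stimulus baseline
    correction carries over. `epoch` is a sequence of samples beginning
    at coherent-motion onset; `rt` is the reaction time in seconds."""
    r = round(rt * FS)                              # response sample index
    lo, hi = r - round(pre * FS), r + round(post * FS)
    return epoch[lo:hi]

toy_epoch = list(range(2 * FS))   # 2 s of toy samples
win = response_locked(toy_epoch, rt=0.8)
```

Because the window is sliced from the already baseline-corrected epoch rather than re-epoched from the raw data, stimulus- and response-locked analyses share a common zero reference, as described above.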
ERP analyses were conducted using a cluster of five centro-parietal electrodes (i.e., Cz, CPz, Pz, CP1, CP2) selected on the basis of previous studies investigating evidence accumulation in the form of the stimulus-locked centro-parietal positivity (CPP) and P300, and the response-locked Pe22,30,31,38,50,51. ERPs were analysed using three separate time-resolved general linear mixed-effects models. First, we assessed how response speed and task difficulty impacted CPP dynamics by including fixed effects for Deadline, Coherence, Offset, and all their higher-order interactions. We also included by-participant random intercepts and, for completeness, the fixed effect of Criterion Direction, though parameter estimates for these effects are not reported in the text. At each time-point, the model was thus: CPP amplitude ~ Deadline × Coherence × Offset + Criterion Direction + (1 | Participant).
Next, we assessed whether the CPP differentiated objective task performance under different levels of speed pressure by including fixed effects for response accuracy, Deadline, and their interaction. We also included fixed effects for Coherence, Offset, and Criterion to account for variance in CPP amplitude due to the stimulus manipulations, and by-participant random intercepts. At each time-point, the model was thus: CPP amplitude ~ Accuracy × Deadline + Coherence + Offset + Criterion + (1 | Participant).
Finally, to determine whether the strength of the relationship between the CPP and participants’ metacognitive judgements differed between Deadline conditions, we modelled the interaction between Deadline, response accuracy, and CPP amplitude. CPP amplitude was z-scored per participant per time sample prior to being entered as a predictor. As with the response accuracy analysis, we included fixed effects of Coherence, Offset, and Criterion and by-participant random intercepts. At each time point, the model was thus: CoM score ~ Deadline × Accuracy × CPP amplitude + Coherence + Offset + Criterion + (1 | Participant).
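The per-participant, per-time-sample standardisation of CPP amplitude can be sketched as follows; fitting the mixed model itself (e.g., with a mixed-effects routine such as statsmodels’ `mixedlm`) is assumed to follow the same per-time-point loop.

```python
import statistics

def zscore(values):
    """Standardise one participant's single-trial CPP amplitudes at one
    time sample before entering them as a predictor. Uses the population
    SD; the sample SD would work equally well for this purpose."""
    m = statistics.mean(values)
    s = statistics.pstdev(values)
    return [(v - m) / s for v in values]

def standardise(amplitudes):
    """amplitudes[participant][time][trial] -> z-scored within each
    participant x time-sample cell."""
    return [[zscore(trials) for trials in per_time]
            for per_time in amplitudes]

amps = [[[1.0, 2.0, 3.0]]]   # 1 participant, 1 time sample, 3 toy trials
z = standardise(amps)
```

Standardising within participant and time sample ensures that the CPP regression weights reflect trial-to-trial covariation with CoM scores, rather than between-participant or across-time differences in overall signal amplitude.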
All EEG analyses were run across both stimulus- and response-locked epochs and FDR corrections73 were applied across time points to control for multiple comparisons.
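The FDR correction across time points follows the standard Benjamini–Hochberg adjustment, which can be sketched minimally as follows (MNE and statsmodels provide equivalent routines):

```python
def fdr_bh(pvals):
    """Benjamini-Hochberg adjusted p-values: scale the i-th smallest p
    by n/i, then enforce monotonicity from the largest p downward."""
    n = len(pvals)
    order = sorted(range(n), key=lambda i: pvals[i])
    adjusted = [0.0] * n
    running_min = 1.0
    for rank in range(n - 1, -1, -1):    # walk from largest p downward
        i = order[rank]
        running_min = min(running_min, pvals[i] * n / (rank + 1))
        adjusted[i] = running_min
    return adjusted

# Toy p-values across four "time points"
adj = fdr_bh([0.01, 0.02, 0.03, 0.2])
```

An adjusted p-value below the chosen alpha then marks a time point as significant while controlling the expected proportion of false discoveries across the whole epoch.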
Reporting summary
Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article.
Data availability
All experimental data are available at https://osf.io/cgxw6/76.
Code availability
All code for experimental procedures and statistical analyses is available at https://osf.io/cgxw6/76.
References
Flavell, J. H. Metacognition and cognitive monitoring: A new area of cognitive-developmental inquiry. Am. Psychol. 34, 906–911 (1979).
Nelson, T. O. & Narens, L. Metamemory: a theoretical framework and new findings. Psychol. Learn. Motiv. Adv. Res. Theory 26, 125–173 (1990).
Fleming, S. M. Metacognition and confidence: a review and synthesis. Annu. Rev. Psychol. 75, 241–268 (2023).
Guggenmos, M., Wilbertz, G., Hebart, M. N. & Sterzer, P. Mesolimbic confidence signals guide perceptual learning in the absence of external feedback. Elife 5, e13388 (2016).
Carlebach, N. & Yeung, N. Flexible use of confidence to guide advice requests. Cognition 230, 105264 (2023).
Desender, K., Boldt, A. & Yeung, N. Subjective confidence predicts information seeking in decision making. Psychol. Sci. 29, 761–778 (2018).
Pescetelli, N., Hauperich, A. & Yeung, N. Confidence, advice seeking and changes of mind in decision making. Cognition 215, 104810 (2021).
Desender, K., Boldt, A., Verguts, T. & Donner, T. H. Confidence predicts speed-accuracy tradeoff for subsequent decisions. Elife 8, 1–25 (2019).
Fleming, S. M. & Lau, H. C. How to measure metacognition. Front. Hum. Neurosci. 8, 1–9 (2014).
Stone, C., Mattingley, J. B. & Rangelov, D. On second thoughts: changes of mind in decision-making. Trends Cogn. Sci. 26, 419–431 (2022).
Baranski, J. V. & Petrusic, W. M. Probing the locus of confidence judgments: experiments on the time to determine confidence. J. Exp. Psychol. Hum. Percept. Perform. 24, 929–945 (1998).
Lee, D. G., Daunizeau, J. & Pezzulo, G. Evidence or confidence: What is really monitored during a decision? Psychon. Bull. Rev. 30, 1360–1379 (2023).
Baranski, J. V. & Petrusic, W. M. The calibration and resolution of confidence in perceptual judgments. Percept. Psychophys. 55, 412–428 (1994).
Moran, R., Teodorescu, A. R. & Usher, M. Post choice information integration as a causal determinant of confidence: Novel data and a computational account. Cogn. Psychol. 78, 99–147 (2015).
Pleskac, T. J. & Busemeyer, J. R. Two-stage dynamic signal detection: A theory of choice, decision time, and confidence. Psychol. Rev. 117, 864–901 (2010).
Ratcliff, R., Smith, P. L., Brown, S. D. & McKoon, G. Diffusion decision model: current issues and history. Trends Cogn. Sci. 20, 260–281 (2016).
Wispinski, N. J., Gallivan, J. P. & Chapman, C. S. Models, movements, and minds: bridging the gap between decision making and action. Ann. N. Y. Acad. Sci. 1464, 30–51 (2018).
Vickers, D. & Packer, J. Effects of alternating set for speed or accuracy on response time, accuracy and confidence in a unidimensional discrimination task. Acta Psychol. (Amst). 50, 179–197 (1982).
Vickers, D. Decision processes in visual perception. (Academic Press, 1979).
Resulaj, A., Kiani, R., Wolpert, D. M. & Shadlen, M. N. Changes of mind in decision-making. Nature 461, 263–266 (2009).
Burk, D., Ingram, J. N., Franklin, D. W., Shadlen, M. N. & Wolpert, D. M. Motor effort alters changes of mind in sensorimotor decision making. PLoS One 9, e92681 (2014).
Boldt, A. & Yeung, N. Shared neural markers of decision confidence and error detection. J. Neurosci. 35, 3478–3484 (2015).
Atiya, N. et al. Changes-of-mind in the absence of new post-decision evidence. PLoS Comput. Biol. 16, e1007149 (2020).
Evans, N. J., Dutilh, G., Wagenmakers, E. J. & van der Maas, H. L. J. Double responding: A new constraint for models of speeded decision making. Cogn. Psychol. 121, 101292 (2020).
Albantakis, L., Branzi, F. M., Costa, A. & Deco, G. A multiple-choice task with changes of mind. PLoS One 7, e43131 (2012).
Turner, W., Feuerriegel, D., Hester, R. & Bode, S. An initial ‘snapshot’ of sensory information biases the likelihood and speed of subsequent changes of mind. PLoS Comput. Biol. 18, 1–16 (2022).
Stone, C., Mattingley, J. B., Bode, S. & Rangelov, D. Distinct neural markers of evidence accumulation index metacognitive processing before and after simple visual decisions. Cereb. Cortex 34, bhae179 (2024).
Murphy, P. R., Robertson, I. H., Harty, S. & O’Connell, R. G. Neural evidence accumulation persists after choice to inform metacognitive judgments. Elife 4, e11946 (2015).
Desender, K., Vermeylen, L. & Verguts, T. Dynamic influences on static measures of metacognition. Nat. Commun. 13, 1–12 (2022).
Kelly, S. P. & O’Connell, R. G. Internal and external influences on the rate of sensory evidence accumulation in the human brain. J. Neurosci. 33, 19434–19441 (2013).
Grogan, J. P., Rys, W., Kelly, S. P. & O’Connell, R. G. Confidence is predicted by pre- and post-choice decision signal dynamics. Imaging Neurosci. 1, 1–23 (2023).
Tagliabue, C. F. et al. The EEG signature of sensory evidence accumulation during decision formation closely tracks subjective perceptual experience. Sci. Rep. 9, 1–12 (2019).
Herding, J., Ludwig, S., von Lautz, A., Spitzer, B. & Blankenburg, F. Centro-parietal EEG potentials index subjective evidence and confidence during perceptual decision making. Neuroimage 201, 116011 (2019).
Rausch, M., Zehetleitner, M., Steinhauser, M. & Maier, M. E. Cognitive modelling reveals distinct electrophysiological markers of decision confidence and error monitoring. Neuroimage 218, 116963 (2020).
Steinhauser, M. & Yeung, N. Error awareness as evidence accumulation: Effects of speed-accuracy trade-off on error signaling. Front. Hum. Neurosci. 6, 1–12 (2012).
van Vugt, M. K., Beulen, M. A. & Taatgen, N. A. Relation between centro-parietal positivity and diffusion model parameters in both perceptual and memory-based decision making. Brain Res. 1715, 1–12 (2019).
O’Connell, R. G., Dockree, P. M. & Kelly, S. P. A supramodal accumulation-to-bound signal that determines perceptual decisions in humans. Nat. Neurosci. 15, 1729–1735 (2012).
Twomey, D. M., Murphy, P. R., Kelly, S. P. & O’Connell, R. G. The classic P300 encodes a build-to-threshold decision variable. Eur. J. Neurosci. 42, 1636–1643 (2015).
Desender, K., Ridderinkhof, K. R. & Murphy, P. R. Understanding neural signals of post-decisional performance monitoring: An integrative review. eLife 10, e67556 (2021).
Heitz, R. P. The speed-accuracy tradeoff: History, physiology, methodology, and behavior. Front. Neurosci. 8, 1–19 (2014).
Scheffers, M. K. & Coles, M. G. H. Performance monitoring in a confusing world: Error-related brain activity, judgments of response accuracy, and types of errors. J. Exp. Psychol. Hum. Percept. Perform. 26, 141–151 (2000).
Bernstein, P. S., Scheffers, M. K. & Coles, M. G. H. ‘Where Did I Go Wrong?’ A psychophysiological analysis of error detection. J. Exp. Psychol. Hum. Percept. Perform. 21, 1312–1322 (1995).
Botvinick, M. M., Carter, C. S., Braver, T. S., Barch, D. M. & Cohen, J. D. Conflict monitoring and cognitive control. Psychol. Rev. 108, 624–652 (2001).
Cavanagh, J. F., Cohen, M. X. & Allen, J. J. B. Prelude to and resolution of an error: EEG phase synchrony reveals cognitive control dynamics during action monitoring. J. Neurosci. 29, 98–105 (2009).
Wendelken, C., Ditterich, J., Bunge, S. A. & Carter, C. S. Stimulus and response conflict processing during perceptual decision making. Cogn. Affect. Behav. Neurosci. 9, 434–447 (2009).
Rangelov, D. & Mattingley, J. B. Evidence accumulation during perceptual decision-making is sensitive to the dynamics of attentional selection. Neuroimage 220, 117093 (2020).
Feuerriegel, D. et al. Electrophysiological correlates of confidence differ across correct and erroneous perceptual decisions. Neuroimage 259, 1–15 (2022).
Barca, L. & Pezzulo, G. Unfolding visual lexical decision in time. PLoS One 7, e35932 (2012).
Sakai, K. Task set and prefrontal cortex. Annu. Rev. Neurosci. 31, 219–245 (2008).
Twomey, D. M., Kelly, S. P. & O’Connell, R. G. Abstract and effector-selective decision signals exhibit qualitatively distinct dynamics before delayed perceptual reports. J. Neurosci. 36, 7346–7352 (2016).
Loughnane, G. M. et al. Target selection signals influence perceptual decisions by modulating the onset and rate of evidence accumulation. Curr. Biol. 26, 496–502 (2016).
Botelho, C. et al. Uncertainty deconstructed: conceptual analysis and state-of-the-art review of the ERP correlates of risk and ambiguity in decision-making. Cogn. Affect. Behav. Neurosci. 23, 522–542 (2023).
Polich, J. Updating P300: an integrative theory of P3a and P3b. Clin. Neurophysiol. 118, 2128–2148 (2007).
Nieuwenhuis, S., Aston-Jones, G. & Cohen, J. D. Decision making, the P3, and the locus coeruleus-norepinephrine system. Psychol. Bull. 131, 510–532 (2005).
Donner, T. H., Siegel, M., Fries, P. & Engel, A. K. Buildup of choice-predictive activity in human motor cortex during perceptual decision making. Curr. Biol. 19, 1581–1585 (2009).
Balsdon, T., Verdonck, S., Loossens, T. & Philiastides, M. G. Secondary motor integration as a final arbiter in sensorimotor decision-making. PLoS Biol. 21, e3002200 (2023).
Dendauw, E. et al. The gated cascade diffusion model: An integrated theory of decision making, motor preparation, and motor execution. Psychol. Rev. 131, 825–857 (2024).
Verdonck, S., Loossens, T. & Philiastides, M. G. The leaky integrating threshold and its impact on evidence accumulation models of choice response time (RT). Psychol. Rev. https://doi.org/10.1037/rev0000258 (2020).
Filimon, F., Philiastides, M. G., Nelson, J. D., Kloosterman, N. A. & Heekeren, H. R. How embodied is perceptual decision making? Evidence for separate processing of perceptual and motor decisions. J. Neurosci. 33, 2121 (2013).
Yau, Y. et al. Evidence and urgency related EEG signals during dynamic decision-making in humans. J. Neurosci. 41, 5711–5722 (2021).
Steinemann, N. A., O’Connell, R. G. & Kelly, S. P. Decisions are expedited through multiple neural adjustments spanning the sensorimotor hierarchy. Nat. Commun. 9, 1–13 (2018).
Murphy, P. R., Boonstra, E. & Nieuwenhuis, S. Global gain modulation generates time-dependent urgency during perceptual choice in humans. Nat. Commun. 7, 1–15 (2016).
Katsimpokis, D., Hawkins, G. E. & van Maanen, L. Not all speed-accuracy trade-off manipulations have the same psychological effect. Comput. Brain Behav. 3, 252–268 (2020).
Steinhauser, M. & Yeung, N. Decision processes in human performance monitoring. J. Neurosci. 30, 15643–15653 (2010).
Navarro-Cebrian, A., Knight, R. T. & Kayser, A. S. Frontal monitoring and parietal evidence: mechanisms of error correction. J. Cogn. Neurosci. 28, 1166–1177 (2016).
Arbel, Y. & Donchin, E. Parsing the componential structure of post-error ERPs: a principal component analysis of ERPs following errors. Psychophysiology 46, 1179–1189 (2009).
Gehring, W. J., Goss, B., Coles, M. G. H., Meyer, D. E. & Donchin, E. A neural system for error detection and compensation. Psychol. Sci. 4, 385–390 (1993).
Hewig, J., Coles, M. G. H., Trippe, R. H., Hecht, H. & Miltner, W. H. R. Dissociation of Pe and ERN/Ne in the conscious recognition of an error. Psychophysiology 48, 1390–1396 (2011).
Voodla, A. & Uusberg, A. Do performance-monitoring related cortical potentials mediate fluency and difficulty effects on decision confidence? Neuropsychologia 155, 107822 (2021).
Shynkaruk, J. M. & Thompson, V. A. Confidence and accuracy in deductive reasoning. Mem. Cogn. 34, 619–632 (2006).
Peirce, J. W. PsychoPy—Psychophysics software in Python. J. Neurosci. Methods 162, 8–13 (2007).
Bang, D. & Fleming, S. M. Distinct encoding of decision confidence in human medial prefrontal cortex. Proc. Natl. Acad. Sci. USA 115, 6082–6087 (2018).
Benjamini, Y. & Hochberg, Y. Controlling the false discovery rate: a practical and powerful approach to multiple testing. J. R. Stat. Soc. Ser. B 57, 289–300 (1995).
Gramfort, A. et al. MEG and EEG data analysis with MNE-Python. Front. Neurosci. 7, 1–13 (2013).
Nolan, H., Whelan, R. & Reilly, R. B. FASTER: Fully automated statistical thresholding for EEG artifact rejection. J. Neurosci. Methods 192, 152–162 (2010).
Stone, C., Mattingley, J. B. & Rangelov, D. Neural mechanisms of metacognitive improvement under speed pressure [Data set]. OSF https://doi.org/10.17605/OSF.IO/CGXW6 (2025).
Morey, R. D. Confidence Intervals from Normalized Data: A correction to Cousineau (2005). Tutor. Quant. Methods Psychol. 4, 61–64 (2008).
Acknowledgements
This work was supported by a grant from the Australian Research Network for Undersea Decision Superiority (RN-UDS) to D.R. and J.B.M.
Author information
Contributions
C.S.—Data Curation, Formal Analysis, Investigation, Methodology, Project Administration, Software, Visualisation, and Writing (original draft); J.B.M.—Conceptualisation, Funding Acquisition, Methodology, Resources, Supervision, and Writing (reviewing and editing); D.R.—Conceptualisation, Funding Acquisition, Methodology, Software, Supervision, and Writing (reviewing and editing).
Ethics declarations
Competing interests
The authors declare no competing interests.
Peer review
Peer review information
Communications Biology thanks Matthias Guggenmos and the other, anonymous, reviewer(s) for their contribution to the peer review of this work. Primary Handling Editor: Jasmine Pan. A peer review file is available.
Additional information
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.
About this article
Cite this article
Stone, C., Mattingley, J.B. & Rangelov, D. Neural mechanisms of metacognitive improvement under speed pressure. Commun Biol 8, 223 (2025). https://doi.org/10.1038/s42003-025-07646-3