Abstract
In racket sports such as badminton, accurately predicting shot timing and spatial positioning allows athletes to better interpret opponents’ intentions and respond quickly. Perceptual-cognitive training focused on visual cues can enhance these anticipatory skills. However, anticipation based on both visual and auditory information is generally more accurate than anticipation relying solely on visual cues, suggesting the need to explore the benefits of integrated audio-visual perceptual-cognitive training. This study investigated the effects of different perceptual-cognitive training protocols on anticipation performance in badminton novices. Participants were divided into four groups: a visual training group (receiving only visual cues during training), an audio-visual training group (receiving both visual and auditory cues), an audio-visual blurred training group (receiving degraded visual and auditory cues), and a control group (watching match videos without specific training). All groups underwent six training sessions over two weeks. Anticipation performance was assessed using a computer-based task, a high-cognitive-load task (requiring simultaneous digit discrimination), a simulated motor task (involving physical movement to respond), and a real-competition video task. Results showed: (1) the training groups improved in anticipation accuracy, with no such improvement in the control group; (2) under high cognitive load and simulated motor tasks, the visual training group improved the most, followed by the audio-visual group, and then the blurred training group, and the visual training group also performed better under audio-visual conditions than under visual conditions; (3) improvements were sustained for two weeks. This accuracy improvement is likely because the training protocols, particularly visual training, facilitated action representation by integrating visual and auditory cues, consistent with Event Coding Theory, and because guided discovery feedback directed participants’ information search towards critical anticipatory cues. The study concludes that presenting visual cues during the anticipation phase, combined with guided discovery using audio-visual feedback during the feedback phase, optimizes skill acquisition and retention. This approach may have broader applications in other sports or training environments where rapid decision-making and predictive accuracy are essential.
Introduction
Motor anticipation refers to the perceptual process by which athletes predict the future spatiotemporal position of an object based on incomplete prior information1,2, such as the flight path of a badminton shuttlecock or the timing and position of a punch in boxing. In competitive sports, the ability of elite athletes to make effective psychological and behavioral responses is attributed not only to their physical and technical prowess but also to their advanced cognitive skills—specifically, their ability to anticipate using informational cues and prepare responses in advance3.
Early research focused on the visual modality, suggesting that high-quality motor anticipation is driven by the effective acquisition and integration of situational and kinematic cues1,4. However, athletes also utilize auditory cues in their anticipatory processes, such as the sound of a ball hitting a racket. Studies have shown that auditory information can be more influential than visual information in judging the speed and force of a strike5,6,7. When predicting the length of volleyball serves, accuracy was significantly higher when judgments were based on auditory information than when they were based on visual information7. Andy Murray, one of the world’s top male tennis players, remarked on a deaf player’s victory in an ATP tour match, saying, “If I played with headphones on, it would be hard to pick up the ball’s speed and the racket’s spin; we often rely on our ears to perceive these things” (ATP Tour website, 2019). This underscores the idea that, in addition to visual cues, auditory information during movement provides critical insights into speed, direction, distance, and force, all of which are essential for motor anticipation. Research on multi-sensory integration has demonstrated that anticipation performance under combined audio-visual conditions is consistently superior to performance under single-sensory conditions8,9. When predicting the spin of a table tennis serve, anticipation accuracy increased when auditory information was added to visual information, but not when additional visual information alone was provided8.
Research on perceptual-cognitive skill training has shown that video-based cognitive training can enhance athletes’ anticipatory performance10,11,12. Previous studies have largely concentrated on various aspects of training, including practice order (blocked or random)12,13, video presentation techniques (2D vs. 3D video)14, and the presentation of key cues (explicit instruction, guided discovery, implicit learning)3,15. Notably, most of this cognitive training has focused on visual information, with little research exploring the effects of multi-sensory (e.g., audio-visual) perceptual-cognitive training on anticipatory performance.
Given the observed benefits and theoretical underpinnings of audio-visual information in enhancing landing-point anticipation, this study hypothesizes that cognitive training involving audio-visual dual-channel information may yield superior intervention outcomes. According to Event Coding Theory16, the process of action perception activates multi-sensory encodings related to action characteristics. Event Coding Theory (ECT) proposes that perceived events and actions are represented in a common cognitive code. This common coding system suggests that when we observe an action, the same neural representations are activated as when we perform or imagine performing that same action. In the context of anticipation in sports, this means that observing an opponent’s movements, including the associated sounds, activates a representation of that action in the observer’s brain. This representation integrates various sensory modalities, including visual and auditory information, into a unified event code. Therefore, when athletes receive both visual and auditory information during anticipation, their brains can create a more complete and robust representation of the opponent’s actions, leading to more accurate predictions. Compared to training that relies solely on visual information, using audio-visual information in training enables participants to encode multi-sensory information pertinent to action processing, thereby laying the foundation for improved anticipatory performance in the complex, dynamic environments typical of real-world sports. Additionally, auditory information, often underemphasized in regular training17, may draw greater attention when presented as part of training materials, encouraging participants to more appropriately weight and integrate visual and auditory cues. As a result, training based on audio-visual information may enhance the ability to integrate multi-sensory information.
Regarding training sequence design, research has shown that guided discovery and random training produce better transfer and retention effects12,15. Random practice sequences, while causing interference between ongoing tasks and leading to short-term forgetting, require participants to reconstruct action representations for new tasks, which results in more robust and memorable representations. Additionally, random practice enhances memorability by promoting more comparative analysis between tasks18. Guided discovery feedback training focuses learners’ attention on specific informational areas through instructions or visual cues, leading to memory benefits as learners exert extra effort during the learning process15. In perceptual-cognitive skill learning, this is considered a refined form of discovery learning19. Consequently, this study will employ randomized training, distributing different landing-point striking techniques evenly across sessions. During the feedback phase, critical posture cues for anticipation will be marked to guide participants in utilizing key anticipatory information20,21.
The use of blurred audio-visual information may enhance cross-channel information processing. Studies on the effects of high and low spatial frequency information (blurred vs. highlighted surface information) on anticipating deceptive movements in badminton suggest that low spatial frequency training yields better retention outcomes, likely because it directs participants’ attention to the rough kinematic information (low spatial frequency) of actual movements rather than to deceptive, non-specific cues11. Additionally, research indicates that when information is presented across multiple channels, cross-channel processing is more likely to occur when information from one channel is insufficient22. The blurred training group in this study was motivated by these findings, aiming to explore whether training under conditions of reduced visual clarity could enhance anticipation performance by promoting greater reliance on low spatial frequency information and potentially improving the integration of auditory and visual cues. The reduction of auditory volume to 80% in this group was intended to parallel the reduced clarity of the visual input, creating a condition where both primary sensory channels were partially compromised.
Research on perceptual-cognitive skill training suggests that continuous training, conducted three times in total with each session lasting 15–30 min, can significantly improve athletes’ anticipatory abilities3,11,12. Other studies indicate that training one to three times per week, with each session lasting 15–30 min over approximately four to six weeks, can also enhance anticipatory abilities23,24. Based on these findings, we chose a training duration of two weeks, with three sessions per week, each lasting 20–25 min. This duration was considered sufficient to induce improvements in anticipation ability while remaining practical and feasible for the participants.
In summary, this study aims to employ computer-based audio-visual materials (incorporating both visual and auditory information) for perceptual-cognitive training in badminton. The experiment will include one control group and three training groups (visual training, audio-visual training, and audio-visual blurred training), with landing-point anticipation tests conducted before, during, and after training. The objective is to investigate the impact of perceptual-cognitive training based on audio-visual information on anticipation performance under audio-visual and visual conditions in computerized tasks, as well as the transfer and retention effects of this training in other contexts. Considering the ecological validity of the training effects, motor simulation observed on a large screen will be used in the first transfer experiment. Given that athletes typically perform landing-point anticipation in complex real-world scenarios, a high-attention-load computer task will be employed as the second transfer task. Finally, considering the authenticity and diversity of applied contexts, the third transfer task will involve predicting the landing areas of elite athletes in actual matches. Fourteen days after the intervention training concludes, each task will be retested to examine the retention effects of cognitive training.
Methods
Participants
Since G*Power cannot calculate the required sample size for experiments involving the interaction effects of two within-subject factors, this study referred to the design of similar research. For instance, Fazel et al.25 recruited 49 participants and divided them into 4 groups, each containing approximately 12 participants; Pagé et al.26 recruited 27 participants, dividing them into 3 groups, each with 9 participants. In the present study, 42 university students majoring in physical education, with 1 to 2 years of badminton training experience, were recruited (38 males, 4 females), with an average age of 21.67 ± 1.00 years. Participants were assigned to four intervention groups: 10 participants in the control group, 11 in the visual training group, 10 in the audio-visual training group, and 11 in the audio-visual blurred training group. All participants signed a written informed consent form before the experiment. The sample size in this study is comparable to those used in similar studies. The experiment was conducted in accordance with the ethical standards established in the 1964 Declaration of Helsinki and its 2013 revision, and was approved by the Beijing Sport University ethics committee (D20220307-1).
Experimental design
This study employed a 4 (Training group: visual training group, audio-visual training group, audio-visual blurred training group, control group) × 4 (Test Time: pre-test, mid-test, post-test, retention test) × 2 (Information Type: audio-visual information, visual information) mixed experimental design. The mid-test was incorporated to evaluate the progression of learning and to gauge the efficacy of the training at an intermediate stage. This assessment was conducted after one week of intervention, providing a checkpoint to monitor the early effects of each training protocol. Participants were assigned to the four groups in a pseudo-random manner based on their average pre-test scores, which were ranked from highest to lowest, to ensure initial comparability between groups. This approach was chosen to control for individual differences in baseline performance, which could otherwise confound the results.
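For illustration, the ranked pseudo-random allocation described above could be implemented along the lines of the sketch below, which assumes a blocked randomization over ranked pre-test scores (the exact allocation rule beyond ranking is not specified in the text); participant identifiers and scores are hypothetical.

```python
import random

GROUPS = ("control", "visual", "audio_visual", "blurred")

def assign_groups(scores, groups=GROUPS):
    """Assign participants to groups by ranked pre-test score.

    Participants are sorted from highest to lowest pre-test accuracy and
    taken in consecutive blocks of four; within each block the four group
    labels are shuffled, so every group receives one participant from each
    performance stratum. This is one plausible reading of the
    'pseudo-random assignment based on ranked pre-test scores' procedure.
    """
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    assignment = {}
    for i in range(0, len(ranked), len(groups)):
        block = ranked[i:i + len(groups)]
        labels = list(groups)[:len(block)]
        random.shuffle(labels)
        for (participant, _), label in zip(block, labels):
            assignment[participant] = label
    return assignment

# Example with hypothetical pre-test accuracies for eight participants
scores = {f"P{i:02d}": random.uniform(0.30, 0.50) for i in range(1, 9)}
print(assign_groups(scores))
```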
Pre-test, mid-test, post-test, and retention test were conducted before the intervention, one week after the intervention began, after two weeks of intervention, and two weeks after the intervention ended, respectively. Specifically, the mid-test was administered within three days following the third training session, while the post-test was conducted within three days after the completion of the sixth training session. The retention test was conducted fourteen days after the post-test. Figure 1 provides a detailed timeline of the study, including the intervention and testing schedule. In the audio-visual information condition, the prediction task presented both visual and auditory information channels, while in the visual information condition, only visual information was presented.
The dependent variables were prediction accuracy and reaction time. Accuracy was calculated as the ratio of the correct landing point choices to the total number of trials, and reaction time was the duration from the end of the video presentation to the participant’s response. The testing tasks included computer-based tasks, high cognitive load computer tasks, motor simulation tasks in front of a large screen, and real competition scenario video tasks.
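As a simple illustration of how the two dependent variables are derived from trial-level data, the sketch below computes accuracy as the proportion of correct landing-area choices and averages reaction time; the field names are hypothetical.

```python
def score_block(trials):
    """Compute anticipation accuracy and mean reaction time for one block.

    Each trial is a dict with hypothetical keys:
      'chosen_area' - landing area selected by the participant (1-6)
      'true_area'   - actual landing area of the shot
      'rt_ms'       - time from video offset to key press, in ms
    """
    n = len(trials)
    correct = sum(t["chosen_area"] == t["true_area"] for t in trials)
    accuracy = correct / n                         # proportion correct
    mean_rt = sum(t["rt_ms"] for t in trials) / n  # mean reaction time (ms)
    return accuracy, mean_rt

trials = [
    {"chosen_area": 3, "true_area": 3, "rt_ms": 812},
    {"chosen_area": 5, "true_area": 2, "rt_ms": 954},
    {"chosen_area": 1, "true_area": 1, "rt_ms": 701},
]
print(score_block(trials))  # -> (0.666..., 822.33...)
```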
Experimental materials and equipment
Stimuli video recording
The stimuli were recorded in an indoor badminton court with no adjacent courts. The recording was done using a Nikon Z50 at a frame rate of 60 frames per second and a resolution of 1920 × 1080. The camera was mounted on a tripod at a height of 1.60 m, positioned approximately 1 m behind the front service line, directly opposite the athlete being recorded11. This setup was designed to replicate the player’s perspective during a match, where the athlete is typically in a knee-bent ready position. The camera angle was estimated to be approximately 15–20 degrees from the horizontal plane, with the lens aimed at the athlete’s shoulder. A 23-year-old athlete with 15 years of training and competitive experience and a 21-year-old athlete with 14 years of training and competitive experience, both members of a provincial team, participated in the video recording. They were instructed to move from the mid-court to the backcourt and perform straight or diagonal clears, drop shots, and smashes. These techniques corresponded to six specific target locations (Figure S1). These three shot types were included to enhance the ecological validity of the study, as they are among the most frequently used shots in badminton and represent a range of trajectories and speeds. Furthermore, different shot types are associated with variations in visual and kinematic cues. By training participants to anticipate shots based on a variety of cues, we aimed to promote more robust learning and better transfer of anticipation skills to novel situations. A total of 240 recordings were made.
Stimulus assessment
First, a skilled badminton player with more than 10 years of experience used Adobe Premiere software to extract complete segments of badminton strokes. During this process, clips with invalid strokes (e.g., shots that went into the net or out of bounds) and videos with interference (e.g., other individuals appearing in the footage) were excluded, along with any audio files containing external voices or noise. Next, two experienced high-level badminton players—a 24-year-old with 15 years of training experience and a 29-year-old with 12 years of training experience—evaluated the technical precision of the movements and the plausibility of the shot placements in the experimental materials. In the end, 30 videos were selected for testing, while an additional 144 videos were set aside for cognitive training. Both the testing and training videos were evenly allocated.
Stimulus editing
The key moment identified for stimulus editing is the ball-racket contact. From this point, a video segment was extracted, consisting of 1000 ms of footage leading up to the contact and 67 ms (4 frames) following it, resulting in a total duration of 1067 ms. During this period, the shuttlecock’s post-contact flight trajectory was deliberately removed. The shuttlecock’s flight information was omitted to prevent it from revealing the landing area, allowing for a more accurate assessment of how auditory cues at the moment of shuttlecock-racket contact influence landing-area predictions. Two experimental conditions were established by manipulating the combination of audio and visual elements: the visual information condition, which presents a 1067 ms video depicting the stroke without sound; and the audio-visual information condition, which presents the same 1067 ms video but includes the sound of the shuttlecock strike synchronized with the moment of contact at 1000 ms. The manipulation of audio and video durations followed previous research27.
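The clip boundaries follow directly from the 60 fps recording rate: 1000 ms before contact corresponds to 60 frames, and the 4 frames retained after contact correspond to roughly 67 ms, giving approximately 1067 ms in total. The short sketch below illustrates this arithmetic with a hypothetical helper; it is not the editing pipeline actually used.

```python
FPS = 60  # recording frame rate of the stimulus videos

def clip_window(contact_frame, pre_ms=1000, post_frames=4):
    """Return (start_frame, end_frame, duration_ms) for a stimulus clip
    anchored on the ball-racket contact frame."""
    pre_frames = round(pre_ms * FPS / 1000)                # 1000 ms -> 60 frames
    start = contact_frame - pre_frames
    end = contact_frame + post_frames
    duration_ms = (pre_frames + post_frames) * 1000 / FPS  # ~1066.7 ms (~1067 ms)
    return start, end, duration_ms

print(clip_window(contact_frame=300))  # -> (240, 304, 1066.66...)
```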
Cognitive training materials
The stimuli for the training interventions consist of shuttlecock-hitting videos from four high-level athletes, two of whom also appear in the test materials. The other two athletes are 25 years old with 17 years of training experience, and 26 years old with 10 years of training experience. The methods for recording and selecting these videos were identical to those used for the test materials. A total of 288 training stimulus segments were selected, with six evenly distributed landing areas. Each video was shown twice, at different training sessions.
Equipment
The intervention training and all cognitive testing tasks were presented on an HP laptop equipped with an i5-7300HQ processor (3.25 GHz) and 16 GB of RAM, featuring a 15.6-inch display with 1080p resolution. Motor simulation tasks were displayed on a large screen (1.975 m wide × 1.480 m high) using an EPSON projector (CH-TW610, 1080p). All experimental tasks were presented using E-Prime 2.0 software, with participants’ reaction times and response accuracy recorded.
Test tasks
Computer task
In a single trial, a fixation point is presented for 500 ms, followed by a video (or audio-video) of an athlete striking a ball, lasting approximately 1067 ms. After the video concludes, the response screen appears, and participants are asked to anticipate the landing point of the ball and press a button (Fig. S2). The numbers on the numeric keypad of a full-sized computer keyboard correspond to the respective badminton landing areas (Fig. S1). Both the audio-visual and visual conditions include 30 trials, with a 30-second break every 15 trials.
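For reference, the structure of a single trial and block in the computer task can be summarized as the configuration sketch below (the task itself was implemented in E-Prime 2.0; the field names here are hypothetical and only restate the durations given above).

```python
# Illustrative trial and block structure for the computer anticipation task.
TRIAL_PHASES = [
    ("fixation", 500),        # fixation point, 500 ms
    ("stimulus_clip", 1067),  # stroke video (or audio-video), ~1067 ms
    ("response", None),       # response screen: keypad press 1-6, untimed
]

BLOCK = {
    "trials_per_condition": 30,  # 30 audio-visual and 30 visual-only trials
    "rest_every_n_trials": 15,   # break inserted after every 15 trials
    "rest_duration_ms": 30_000,  # 30-second rest
}
```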
High cognitive load computer tasks
To manipulate cognitive load, drawing on previous research27, a digit transformation task (more-odd shifting) was introduced (Fig. 2). In each trial, a fixation point is presented for 500 ms, followed by a black or green digit from 1 to 9 (excluding 5) displayed for 500 ms. This is followed by a video of a badminton shot (or an audio-video clip) lasting approximately 1067 ms. After the video ends, participants first judge the landing area of the shot and press the corresponding key as quickly and accurately as possible. They then assess the digit’s magnitude (if the digit is black) or its parity (if the digit is green) and press the appropriate key (‘F’ for large numbers or odd numbers, and ‘J’ for small numbers or even numbers; or the reverse, with key assignments balanced across participants). Both the audio-visual and visual conditions include 30 trials, with a 30-second rest after every 15 trials.
Schematic diagram of a single trial in the high cognitive load computer task. Each trial begins with a fixation cross, followed by a digit shown in black or green font. Stroke clips are then presented in video or audio-video format. Finally, participants predict the badminton landing area and then judge whether the digit is greater or less than 5 (if the digit is black) or whether it is odd or even (if the digit is green).
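The response rule of the digit sub-task can be stated compactly: black digits require a magnitude judgment relative to 5 and green digits a parity judgment, with the ‘F’/‘J’ assignment counterbalanced across participants. The sketch below illustrates this rule under the assumption of the ‘F = large/odd, J = small/even’ mapping; it is an illustrative restatement, not the experiment code.

```python
def correct_digit_key(digit, colour, mapping="F_large_or_odd"):
    """Return the expected key for the more-odd shifting sub-task.

    black digit -> magnitude judgment (greater or less than 5)
    green digit -> parity judgment (odd or even)
    Digits range over 1-9, excluding 5.
    """
    assert digit in {1, 2, 3, 4, 6, 7, 8, 9}
    if colour == "black":
        category = "large" if digit > 5 else "small"
    elif colour == "green":
        category = "odd" if digit % 2 else "even"
    else:
        raise ValueError("digit colour must be 'black' or 'green'")

    if mapping == "F_large_or_odd":
        return "F" if category in ("large", "odd") else "J"
    return "J" if category in ("large", "odd") else "F"  # counterbalanced mapping

print(correct_digit_key(7, "black"))  # 'F' (7 > 5)
print(correct_digit_key(4, "green"))  # 'J' (4 is even)
```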
Motor simulation task
This task is identical to the computer task, differing only in the mode of presentation and response. The experimental task was projected onto a screen (1.975 m × 1.480 m). Participants viewed the video from approximately 2 m away, holding a racket in a ready stance to simulate the real perspective during a match. After predicting the shuttlecock’s landing area, participants moved from the ready position to the predicted location and responded by using the racket to touch the designated marked bucket (Fig. S3). The six buckets were each positioned about 1.5 m from the ready position. A camera (resolution 1920 × 1080, frame rate 60 fps) was positioned behind the participants to record their response actions, and their reaction times and accuracy were later analyzed through video playback. Both the audio-visual and visual information conditions included 30 trials, with a 30-second rest period after every 15 trials.
Real competition scenario video tasks
Adobe Premiere Pro was used to extract footage from the 2018 World Championships match between Kento Momota and Shi Yuqi. The video and audio durations were matched to those used in the computer tasks. The trajectory of the shuttlecock was hidden after it made contact with the racket. The procedure for each trial mirrored that of the computer tasks. Both the audio-visual and visual-only conditions included 30 trials, with a 30-second rest after every 15 trials.
Training protocol
The training protocol consists of two phases per trial: anticipation and feedback (Fig. 3). In the anticipation phase, participants were asked to predict the landing area of a shuttlecock in a video and press a key to indicate their decision, mirroring the computer-based task. The clip included 1000 ms before shuttlecock-racket contact and 67 ms after contact. The key variations among the training protocols occur during the anticipation phase (Table 1). In this phase, the visual training group is shown only the shuttlecock-hitting video without sound (Fig. S4a); the audio-visual training group receives the same videos with the shuttlecock-racket contact sound; and the audio-visual blurred training group views a blurred video with sound reduced to 80% of normal volume (Fig. S4b). The feedback phase then provides a replay of the shuttlecock-hitting action, accompanied by audio-visual feedback, and is consistent across all groups. The feedback video starts 1000 ms before the shuttlecock-racket contact, continues for 500 ms after the contact, and then pauses for 1500 ms, resulting in a total duration of 3000 ms. During this period, the audio starts at the moment of contact and lasts for 500 ms. In the first 1500 ms, a red rectangular frame highlights the athlete’s trunk, head, upper limbs, and racket (Fig. S5a), directing participants’ attention to these areas to help them identify key cues for predicting the shuttlecock’s landing. During the 1500 ms pause, the landing position is displayed in the upper right corner of the screen (Fig. S5b). Each training session includes 96 video clips and lasts approximately 20–25 min. The intervention spans two weeks, with three sessions conducted per week.
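For reference, the feedback-phase timing described above can be summarized as the following timeline sketch (offsets in milliseconds relative to clip onset, with contact at 1000 ms); the field names are hypothetical and serve only to restate the durations given in the text.

```python
# Feedback-phase timeline (ms from clip onset); shuttlecock-racket contact at 1000 ms.
FEEDBACK_TIMELINE = {
    "video_pre_contact":  (0, 1000),     # lead-up to shuttlecock-racket contact
    "video_post_contact": (1000, 1500),  # 500 ms of follow-through
    "pause_with_outcome": (1500, 3000),  # frozen frame; landing area shown top-right
    "contact_sound":      (1000, 1500),  # audio starts at contact, lasts 500 ms
    "red_cue_rectangle":  (0, 1500),     # highlights trunk, head, upper limbs, racket
}
TOTAL_FEEDBACK_DURATION_MS = 3000
```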
For the control group, participants watched pre-recorded match videos of the same duration as those in the experimental group. The videos featured the top eight players from the 2018 World Championships, with each scoring rally segmented into individual trials using Adobe Premiere. After viewing each video, participants were asked to recall and identify the final stroke technique used by the athlete on the top side of the screen, choosing from the following options: clear (1), drop shot (2), smash (3), or others (4). This task was intended to ensure that the participants maintained a level of attention comparable to that of other training groups. Importantly, the videos shown to the control group included the entire movement of the athletes, including the follow-through after hitting the shuttlecock. Therefore, there was no need for the control group to anticipate the landing point, as they could directly observe the full trajectory of the shot. Each training session included 60 video clips, with each trial lasting approximately 4 to 22 s.
After each test session (pre-test, mid-test, post-test, and retention test), all participants were required to complete two questionnaires: one assessing their familiarity with the experimental materials, and the other evaluating their use of information in predicting the shuttlecock’s landing area. These questionnaires were used to assess whether potential differences in familiarity with the test materials could have influenced the results and to help explain why anticipation was facilitated. They measured participants’ familiarity with the athletes in the test videos (Likert scale, 1–7) and their self-reported use of visual, auditory, and audio-visual cues (recorded as percentages).
Experiment procedure
The entire intervention and testing schedule is depicted in Fig. 1.
1. Preparation Phase: Upon arrival at the laboratory, participants were briefed on the study’s procedures before signing an informed consent form. The consent process ensured that participants were fully aware of their rights, including the ability to withdraw from the study at any time without penalty. They then completed a demographic information survey.
2. Pre-test: Participants underwent a series of pre-tests, including computer tasks, high cognitive load computer tasks, motor simulation tasks in front of a large screen, and real competition scenario video tasks. The sequence of tasks was counterbalanced using a Latin square design to minimize order effects. After completing the tests, participants filled out a familiarity questionnaire and a scale assessing their use of information. Participants were then assigned to groups based on their pre-test performance.
3. Cognitive Training (Sessions 1–3): During the following week, each group underwent three cognitive training sessions, spaced 1 to 3 days apart. Each session lasted approximately 20 to 25 min.
4. Mid-test: Within three days following the third training session, participants completed the mid-test, which replicated the tasks and procedures of the pre-test.
5. Cognitive Training (Sessions 4–6): In the subsequent week, participants completed three additional cognitive training sessions (Sessions 4–6) following the same procedures and schedule as Sessions 1–3.
6. Post-test: Within three days after the completion of the sixth training session, participants underwent the post-test, which mirrored the content and procedures of the pre-test.
7. Retention Test: Following the post-test, participants entered a skill retention phase. Fourteen days later, they completed a retention test, identical in content and procedure to the pre-test.
Data collection and analysis
The accuracy and reaction time for landing-area anticipation were analyzed using a repeated-measures ANOVA with three factors: training group (visual, audio-visual, blurred training, and control groups) as a between-subjects factor, and test time (pre-test, mid-test, post-test, retention test) and information type (audio-visual, visual) as within-subjects factors. The statistical analysis was performed using SPSS 19.0. When the assumption of sphericity was violated, corrections to the degrees of freedom and p-values were applied using the Greenhouse-Geisser method. Post-hoc comparisons were conducted with the Bonferroni correction, and effect sizes were reported as partial eta squared (ηp2) or Cohen’s d28. Results were considered significant at p < 0.050.
Kingsley et al.29 highlighted that in interaction analysis, even when the interaction coefficients are statistically significant, the presence of non-significant simple effects (or marginal effects) under certain conditions can lead to an overinterpretation of the findings. Conversely, if the interaction coefficients are not significant while certain simple effects are significant, the results may be underestimated. This concern is especially pertinent in multi-factor designs, where researchers often overlook the significance of non-zero marginal effects. This observation suggests that independent variables may significantly influence the dependent variable, even when main effects or interactions are not statistically significant. Supporting this approach, Galli et al.30 and Wang et al.31 utilized similar methodologies and demonstrated that simple effects analysis can enhance the understanding of subtle variations in interactions. Consequently, this study employed simple effects analysis to offer a more nuanced examination of the data, regardless of the significance of interactions in the ANOVA. Furthermore, given the large number of variables and the substantial dataset involved, and considering that the main effects of the training groups and testing times are not critical for testing the experimental hypotheses, this paper does not report the multiple comparisons of main effects in detail, focusing instead on the multiple comparisons of interactions.
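The analyses reported below were run in SPSS 19.0. Purely for illustration, the simple simple effects decomposition described above (one-way repeated-measures ANOVAs on test time within each training group × information type cell, with Greenhouse-Geisser correction, Bonferroni-adjusted pairwise comparisons, and ηp2/Cohen’s d effect sizes) could be approximated in Python with the pingouin package; the column names below are hypothetical, and this sketch is not the pipeline actually used.

```python
import pingouin as pg

def simple_simple_effects(df):
    """One-way repeated-measures ANOVA on test time within each
    group x information-type cell, mirroring the simple simple effects
    decomposition reported in the Results.

    `df` is assumed to be a long-format pandas DataFrame with columns:
      id        participant identifier
      group     control / visual / audio_visual / blurred
      info      audio_visual / visual
      time      pre / mid / post / retention
      accuracy  proportion of correctly anticipated landing areas
    """
    results = []
    for (grp, info), cell in df.groupby(["group", "info"]):
        aov = pg.rm_anova(data=cell, dv="accuracy", within="time",
                          subject="id", correction=True, effsize="np2")
        posthoc = pg.pairwise_tests(data=cell, dv="accuracy", within="time",
                                    subject="id", padjust="bonf",
                                    effsize="cohen")
        results.append((grp, info, aov, posthoc))
    return results
```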
Results
Effect of cognitive training on computer anticipation task
Anticipation accuracy
A repeated measures ANOVA was conducted with the intervention training group as the between-subjects variable, and test time and information type as within-subjects variables. The dependent variable was the accuracy of predicting landing areas in the computer task. The results indicated a significant main effect of information type (F(1, 38) = 45.91, p < 0.001, ηp2 = 0.547), with accuracy being higher in the audio-visual condition compared to the visual condition. There was also a significant main effect of test time (F(3, 114) = 9.51, p < 0.001, ηp2 = 0.200) and a significant main effect of training group (F(3, 38) = 4.00, p = 0.015, ηp2 = 0.239). No significant two-way or three-way interactions were found (ps ≥ 0.091).
Although no significant interactions were found, we conducted a simple simple effects analysis of the three-way interaction to further explore potential differences between groups at specific levels of the other factors, given recent methodological discussions highlighting the limitations of relying solely on the significance of interaction terms29,30,31. This approach allows for a more nuanced understanding of the data and can reveal theoretically important patterns, even in the absence of significant interactions. While we acknowledge the significant main effects of information type, test time, and training group, our primary focus is on examining the effects of the training interventions under different conditions and across different test times. Therefore, we do not report detailed post-hoc comparisons for all main effects, to maintain clarity and focus on our core research questions. The results of the multiple comparisons are presented in Fig. 4.
In the control group, neither the audio-visual nor the visual conditions resulted in a significant main effect of test timing (ps ≥ 0.104). The finding suggests that neither the increased familiarity with the testing procedure due to repeated exposure nor the potential placebo effect of video observation was sufficient to enhance test performance.
For the visual training group, the main effect of test time under the audio-visual condition was not significant (F(3, 36) = 1.47, p = 0.239, ηp2 = 0.109), suggesting that visual training did not yield a notable improvement under audio-visual conditions. However, under visual-only conditions, a significant main effect of test time was observed (F(3, 36) = 3.57, p = 0.023, ηp2 = 0.229), with accuracy in the retention test (0.45 ± 0.10) significantly higher than in the pre-test (0.36 ± 0.11) (p = 0.026, d = 0.856). These findings suggest that visual training improves performance under visual conditions, though this improvement emerges later and is not observed under audio-visual conditions.
For the audio-visual training group, a significant main effect of test time was found under audio-visual conditions (F(3, 36) = 3.177, p = 0.036, ηp2 = 0.209). Post-test accuracy (0.51 ± 0.07) was significantly higher than pre-test accuracy (0.39 ± 0.08) (p = 0.036, d = 1.596), with a similar trend observed in the retention test (0.49 ± 0.09, p = 0.063, d = 1.174). Under visual conditions, the main effect of test time did not reach statistical significance (F(3, 36) = 2.680, p = 0.061, ηp2 = 0.183), and no significant differences were found across tests. These results indicate that audio-visual training significantly improves performance under audio-visual conditions, but not under visual conditions.
For the group receiving audio-visual blurred training, the main effect of test time under audio-visual conditions was not significant (p = 0.074); however, pairwise comparisons showed that accuracy in the retention test (0.49 ± 0.06) was significantly higher than in the pre-test (0.38 ± 0.10, p = 0.046, d = 1.334). Under visual conditions, the main effect of test time was not significant (p = 0.587). These findings suggest that audio-visual blurred training improves performance under audio-visual conditions, though the effect is delayed, and that it does not significantly enhance performance under visual conditions.
Response time
No significant effects were observed for the dependent variable of response time. Detailed results can be found in Supplementary Tables S1–S3.
Effect of cognitive training on high cognitive load computer task
Anticipation accuracy
A repeated measures ANOVA was conducted, with the intervention training group as the between-subjects variable and test time and information type as within-subjects variables, using prediction accuracy in the high cognitive load task as the dependent variable. The results showed a significant main effect of information type (F(1, 38) = 36.75, p < 0.001, ηp2 = 0.492), with higher accuracy under the audio-visual condition compared to the visual condition. There was also a significant main effect of test time (F(3, 114) = 21.03, p < 0.001, ηp2 = 0.356). However, no significant main effect of group was observed, nor were there any significant two-way or three-way interactions (ps ≥ 0.090).
A simple simple effects analysis of the three-way interaction was conducted, with the results of the multiple comparisons presented in Fig. 5.
For the control group, there was no significant main effect of testing time under either the audio-visual condition or the visual condition (ps ≥ 0.184).
For the visual training group, under the audio-visual condition, there was a significant main effect of testing time (F(3, 36) = 4.69, p = 0.007, ηp2 = 0.281). Accuracy was higher in both the post-test (0.50 ± 0.12, p = 0.005, d = 1.167) and retention test (0.49 ± 0.11, p = 0.012, d = 1.129) compared to the pre-test (0.36 ± 0.12), indicating that the effect of visual training under the audio-visual condition became evident in the post-test and persisted in the retention test conducted two weeks later. Under the visual condition, a significant main effect of testing time was observed (F(3, 36) = 7.57, p < 0.001, ηp2 = 0.387), with higher accuracy in the retention test (0.48 ± 0.10) compared to the pre-test (0.34 ± 0.10) (p < 0.001, d = 1.400), suggesting that the effect of visual training under the visual condition became evident in the retention test.
For the audio-visual training group, a significant main effect of testing time was found under the audio-visual condition (F(3, 36) = 5.89, p = 0.002, ηp2 = 0.329). Accuracy was higher in both the post-test (0.48 ± 0.09) and retention test (0.50 ± 0.08) compared to the pre-test (0.34 ± 0.07) (post-test vs. pre-test p = 0.007, d = 1.736; retention test vs. pre-test p = 0.001, d = 2.219), indicating that the effect of audio-visual training under the audio-visual condition was observed in the post-test and persisted in the retention test conducted two weeks later. Under the visual condition, a significant main effect of testing time was also observed (F(3, 36) = 4.65, p = 0.008, ηp2 = 0.279), with accuracy in the retention test (0.41 ± 0.07) being higher than in the pre-test (0.30 ± 0.10) (p = 0.006, d = 1.274), suggesting that the effect of audio-visual training under the visual condition became evident in the retention test.
For the blurred training group, there was no significant main effect of testing time under the audio-visual condition (p = 0.236). Under the visual condition, the main effect of testing time did not reach significance (F(3, 36) = 2.59, p = 0.068, ηp2 = 0.178), although accuracy in the retention test (0.41 ± 0.06) was marginally higher than in the pre-test (0.33 ± 0.09, p = 0.058, d = 1.046), suggesting that any effect of blurred training under the visual condition emerged only weakly in the retention test.
Response time
No significant effects were observed for the dependent variable of response time. Detailed results can be found in Supplementary Tables S4–S6.
Effect of cognitive training on motor simulation task
Anticipation accuracy
A repeated measures ANOVA was conducted with the training group, test time, and information type as independent variables, and the accuracy of anticipation in the motor simulation task as the dependent variable. The results indicated a significant main effect of information type (F(1, 38) = 33.48, p < 0.001, ηp2 = 0.468), with accuracy being higher under audio-visual conditions than under visual conditions. There was also a significant main effect of test time (F(2.41, 91.42) = 7.46, p < 0.001, ηp2 = 0.166) and a significant main effect of group (F(3, 38) = 2.96, p = 0.044, ηp2 = 0.190). No significant two-way or three-way interactions were observed (ps ≥ 0.114).
A simple simple effects analysis of the three-way interaction was conducted, with the results of the multiple comparisons presented in Fig. 6.
For the control group, there were no significant main effects of test time in either the audio-visual condition (p = 0.401) or the visual condition (p = 0.828), with no significant differences in accuracy observed between the pre-test, post-tests, and retention tests. These findings indicate that the increased familiarity with the testing procedure resulting from repeated exposure, as well as the potential placebo effect induced by video observation, were both insufficient to improve test performance.
In the visual training group, the training had a more pronounced effect under the audio-visual condition. Specifically, under the audio-visual condition, there was a significant main effect of test time (F(3, 36) = 3.49, p = 0.025, ηp2 = 0.225); accuracy in the post-test (0.50 ± 0.07) and retention test (0.48 ± 0.09) was significantly higher than in the pre-test (0.38 ± 0.11) (post-test vs. pre-test p = 0.022, d = 1.301; retention test vs. pre-test p = 0.031, d = 0.955). Under the visual condition, there was also a significant main effect of test time (F(3, 36) = 4.59, p = 0.008, ηp2 = 0.277), with a significant improvement in retention test performance (0.45 ± 0.13) compared to the pre-test (0.34 ± 0.08) (p = 0.007, d = 1.019). These findings indicate that the effects of visual training under the audio-visual condition emerged in the post-test and persisted in the retention test conducted two weeks later, whereas the effects of visual training under the visual condition only became apparent in the retention test.
For the audio-visual training group and the blurred training group, no significant main effects of test time were observed in either the audio-visual condition (ps ≥ 0.151) or the visual condition (ps ≥ 0.284).
Response time
No significant effects were observed for the dependent variable of response time. Detailed results can be found in Supplementary Tables S7–S9.
Effect of cognitive training on real competition scenario video tasks
Anticipation accuracy
A repeated measures ANOVA was conducted with intervention training group, test time, and information type as independent variables, and the accuracy of anticipation in real competition scenarios as the dependent variable. The results indicated a significant main effect of information type (F(1, 38) = 33.46, p < 0.001, ηp2 = 0.468), with higher accuracy observed under audio-visual conditions compared to visual-only conditions. The main effects of test time and training group, as well as all two-way and three-way interactions, were non-significant (ps ≥ 0.120).
A simple simple effects analysis of the three-way interaction was conducted, with the results of the multiple comparisons presented in Fig. 7.
For the control group, the main effect of test time under the audio-visual condition approached significance (F(3, 36) = 2.85, p = 0.051, ηp2 = 0.192); however, multiple comparisons indicated no significant differences in accuracy across the tests. Under the visual condition, the main effect of test time was not significant (p = 0.701), with no notable differences in accuracy observed across the test sessions.
In the visual training group, the main effect of test time under the audio-visual condition approached significance (F(3, 36) = 2.70, p = 0.060, ηp2 = 0.184), but multiple comparisons showed no significant differences in accuracy across the tests. Under the visual condition, the main effect of test time was not significant (p = 0.707).
For the audio-visual training group, neither the main effect of test time under the audio-visual condition (p = 0.246) nor the visual condition (p = 0.319) was significant.
In the blurred training group, the main effect of test time under the audio-visual condition was significant (F(3, 36) = 5.42, p = 0.003, ηp2 = 0.311), with retention test (0.42 ± 0.11) accuracy significantly lower than that of the post-test (0.48 ± 0.13, p = 0.002). Under the visual condition, the main effect of test time was not significant (p = 0.659).
Response time
No significant effects were observed for the dependent variable of response time. Detailed results can be found in Supplementary Tables S10–S12.
Discussion
This study designed three types of badminton landing-point anticipation training programs by manipulating the presence and clarity of audio and video: audio-visual (audio-video) training, visual (video) training, and blurred (blurred video with 80% volume) training, with a control group that only watched match videos. The aim was to investigate the effects of two weeks of audio-visual cognitive training on anticipation accuracy and to assess the transfer and retention of these effects in high cognitive load tasks, motor simulation tasks, and real match scenarios. The results indicated that none of the training programs significantly affected anticipation response time; however, the following patterns were observed in terms of accuracy improvement:
For the control group, no significant improvement was observed in any of the test tasks. The visual training group showed significant improvement under the visual information condition (retention test) of the computer tasks and also demonstrated improvement under the audio-visual information condition (post-test and retention test) and visual information condition (retention test) in high cognitive load tasks and motor simulation tasks. The audio-visual training group showed improvement under the audio-visual information condition (post-test and retention test) of the computer tasks, and also demonstrated improvement under both the audio-visual information condition (post-test and retention test) and the visual information condition (retention test) in high cognitive load tasks, but no significant improvement was observed in the motor simulation tasks. The blurred training group also showed improvement under the audio-visual information condition (retention test) of the computer tasks and the visual information condition (retention test) of high cognitive load tasks, but no significant effect was observed in the motor simulation tasks. None of the training groups showed significant improvement in the match scenario tasks. In summary, the audio-visual training group showed earlier effects in computer test tasks that were similar to the training tasks, while the visual training group demonstrated broader improvement effects in transfer tasks such as the high cognitive load tasks and motor simulation tasks.
Placebo effect and familiarity effect
The enhancement in prediction performance observed across the training groups can be attributed to the genuine efficacy of the training protocols, rather than being a result of practice effects or placebo effects. First, the control group exhibited no significant improvement in anticipation performance during the mid-test, post-test, and retention test compared to the pre-test, effectively ruling out the influence of placebo effects. This lack of improvement in the control group, which did not receive any anticipation training, suggests that the observed gains in the training groups are likely due to the specific training interventions rather than a general placebo effect—an improvement in performance simply due to the expectation of improvement or the experience of participating in a training program. It is important to note that the control group’s task, which involved identifying the final stroke technique used by athletes in pre-recorded match videos, did not include an anticipatory component. Second, although participants in the training groups became more familiar with the experimental materials throughout the testing process, the level of familiarity (Supplementary Materials) did not significantly differ between training groups, indicating that the observed differences in training effects were not due to varying levels of task familiarity. Additionally, in the high cognitive load computer test task, the accuracy and response time for the sub-task either remained stable or slightly improved during the mid-test, post-test, and retention test compared to the pre-test (refer to supplementary materials Tables S13, S14 for further details). This suggested that the improvements in the primary task (i.e., the prediction task) did not come at the cost of secondary task performance.
Effects of different training protocols on computer test task
The study found that all training groups exhibited improvements in anticipatory performance on the computer test tasks. Notably, the audio-visual training group showed significant improvements in both the post-test and retention test under audio-visual conditions. The visual training group demonstrated improvement in the retention test under visual conditions, while the blurred training group showed improvement in the retention test under audio-visual conditions. These findings suggest that the audio-visual training was the most effective, yielding the earliest improvements in performance on the computer test tasks.
During the intervention training, participants enhanced their ability to utilize visual and auditory information by watching and listening to video and audio recordings of technical movements during the anticipation and feedback phases. According to the event coding theory16,32, perception and action share a common representation system, wherein the action perception process triggered by audio and video stimuli activates coding related to action features. Neuroimaging studies have also demonstrated that performing technical movements, as well as observing or hearing the sounds associated with these movements, can activate brain regions responsible for action processing, such as the supplementary motor area33,34,35. In the intervention training phases, participants repeatedly engaged in cognitive training by watching and listening to technical movements, which likely reinforced the association and memory between the action of hitting the shuttlecock, the sound produced, and the corresponding landing area. This reinforcement enabled participants to more accurately and quickly retrieve the relevant action representations, thereby improving their ability to judge the shuttlecock’s landing area.
Furthermore, this study employed guided discovery feedback training to enhance the efficiency of athletes’ information search during the anticipatory process. Previous research indicates that professional athletes primarily rely on the trunk, the racket-holding arm, and the racket as key sources of information when making anticipatory judgments36,37. These athletes typically focus more attention on these critical areas during anticipation. In the feedback videos used in this study, the model athlete’s trunk, racket-holding arm, and racket were emphasized to help participants focus on key anticipatory cues related to the opponent’s stroke. Participants’ subjective reports further confirmed that their ability to utilize visual and auditory information significantly improved after the training intervention.
However, it is important to note that while improvements in accuracy were observed across the training groups, these gains did not translate into faster response times. This observation aligns with the well-established concept of the speed-accuracy trade-off in cognitive psychology38. This principle suggests an inverse relationship between response speed and accuracy in many tasks. In our study, the experimental design prioritized accuracy, as participants were instructed to focus on making correct predictions of the shuttlecock’s landing point. This focus on accuracy, rather than speed, may explain the lack of improvement in response times. Moreover, the duration and intensity of the training intervention may have played a role. While the two-week training program was sufficient to enhance anticipation accuracy, it might not have been long enough to induce changes in motor response speed. The neural mechanisms underlying response time improvements, such as increased myelination of motor pathways, may necessitate more extended or intense training periods to manifest39,40. Additionally, the novice skill level of the participants should be considered. Beginners in badminton might prioritize accuracy over speed as they are still developing fundamental skills. It is plausible that with further training and skill development, improvements in response time would eventually emerge, as observed in studies comparing novice and expert athletes41.
Effects of different training protocols on transfer and retention test tasks
For various transfer test tasks, the visual training group demonstrated a broader range of improvement. In high cognitive load computer test tasks, both the visual training and audio-visual training groups showed improvement under conditions involving audio-visual (post-test and retention test) and visual (retention test) information, while the blurred training group only showed improvement under visual (retention test) conditions. In simulated motor tasks, the visual training group improved under both audio-visual (post-test and retention test) and visual (retention test) conditions, whereas neither the audio-visual nor the blurred training groups showed improvement. For the real competition scenario tasks, none of the groups improved their anticipation performance. The improved anticipation performance observed in the post-test across all training groups persisted into the retention test.
The visual training group exhibited better transfer effects compared to the audio-visual and blurred training groups. During the anticipation phase of the visual training, participants relied solely on visual information (such as stroke actions) to make judgments, enhancing their processing of stroke-related information, as has been demonstrated in previous video-based perceptual-cognitive training studies3,12. The visual channel, being one of the primary means of acquiring information in sports contexts42, becomes more efficient when auditory information is unavailable. Notably, the visual training consistently showed earlier intervention effects in audio-visual tasks compared to visual tasks, suggesting that the auditory and audio-visual information processing capabilities of the visual training group were also enhanced. During the feedback phase of the visual training, the replay of complete audio and video (including stroke actions, sounds, and shuttlecock flight paths) with guided discovery-based feedback improved the participants’ efficiency in processing visual information related to stroke actions. Additionally, the inclusion of stroke sounds not only enhanced their processing of visual information43,44 but also developed their ability to utilize auditory information and integrate audio-visual cues. Short-term auditory training has been shown to improve selective auditory attention45. Furthermore, participants’ subjective reports (Supplementary Materials) indicated that the visual training group experienced significant improvements in their perceived ability to utilize visual, auditory, and audio-visual information. These subjective findings provide converging evidence for the effectiveness of the visual training and suggest that the participants were not only improving in their objective performance but also felt more confident in their ability to use different sources of information.
Contrary to the hypothesis, the blurred training group did not outperform the visual and audio-visual training groups in the transfer test tasks. Some studies suggest that experienced athletes perform similarly or even better in anticipation performance under blurry video conditions compared to normal conditions46,47, attributing this to the enhanced motion perception brought about by visual blur. Moreover, video training focused on low spatial frequency (such as blurred action outlines) has been found to better enhance the retention of deceptive movement recognition in badminton players compared to normal video training11,48. This may be because participants spend less time focusing on high spatial frequency information (such as facial details of the opponent) and more time on other key areas (like the contact area between the racket and shuttlecock). Training that focused on utilizing low spatial frequency information might improve the processing of specific visual-motor cues by guiding attention towards the actual outcome of the action. However, in this study, as the participants were beginners in badminton, more detailed processing of stroke-related information might have been more beneficial for their anticipation performance. The blurred visual and auditory information during the anticipation phase of the intervention training failed to help participants establish accurate action schemas or correct associations between stroke actions, sounds, and landing areas. Even with complete information provided during the feedback phase, the participants’ ability to utilize key information was not further strengthened. The direct comparability of these findings to our blurred training group is limited, as previous studies have mainly focused on the detection of deceptive movements in experienced athletes. Our study, on the other hand, employed a blurred training protocol with beginners, aiming to enhance low spatial frequency processing and potentially cross-modal integration. These different objectives and participant populations may contribute to the observed differences in outcomes. Further research is needed to specifically investigate the effects of blurred training on beginners and to elucidate the underlying mechanisms involved.
It is also important to consider the potential differences in cognitive load imposed by the different training protocols. While the randomization procedure and pre-test performance matching were designed to minimize such differences at baseline, the nature of the training protocols themselves may have led to variations in cognitive load during the intervention. For instance, the visual training group, which showed broader improvements across transfer tasks, may have experienced a higher cognitive load during training due to the need to integrate auditory and visual information from the outset, even though the task itself was presented visually. This could have led to more robust cognitive adaptations, as evidenced by their enhanced performance on the transfer tasks, but may not have directly translated into faster response times. The audio-visual group, by focusing on multi-sensory integration, and the blurred group, by dealing with degraded sensory input, could also have faced distinct cognitive challenges during their respective training sessions. These differences in cognitive load during training could have differentially affected performance across the various transfer tasks. Future research should further investigate the cognitive load associated with different types of perceptual training, particularly those involving visual, auditory, or multisensory information49,50, to better understand how these variations impact performance outcomes.
Different intervention protocols produced varying effects across the transfer test tasks. First, in the high-cognitive-load computer tasks, both the visual and audio-visual training groups exhibited transfer effects, with improvements more pronounced than those observed in the standard computer-based tasks. The high-load tasks required participants to process both landing-area information (primary task) and digit-discrimination information (secondary task), imposing greater demands on cognitive processing. Compared with the basic anticipation tasks, these tasks were more challenging and demanded a greater allocation of attentional resources. According to cognitive load theory51, when attentional resources are dispersed, individuals’ cognitive capacity may be insufficient for such complex tasks, so the anticipatory skills enhanced through specialized cognitive training become more impactful. The benefits of the interventions are thus particularly evident when attentional resources are scarce, with both visual and audio-visual training enhancing participants’ performance under these conditions. Second, only the visual training group demonstrated transfer to the simulated motor task, which required participants to move to the predicted landing area and strike a marker, closely mirroring real-world play. Compared with the other protocols, the visual training, which integrated auditory cues at critical moments, markedly improved participants’ ability to utilize visual, auditory, and combined sensory information. This form of training reinforced the encoding of visual and auditory information16, which is crucial within the framework of perception-action coupling theory52. Previous research has shown that when responses are coupled with perceptual information through action rather than mere key presses, athletes with professional training tend to perform better47; the present finding that the visual training group outperformed the others in the simulated motor task further underscores the importance of perception-action coupling in athletic contexts. Finally, none of the training groups exhibited significant training effects in the real-competition video tasks. The materials for these tasks were drawn from World Championship matches featuring athletes of a higher competitive level, whose consistent technical execution made anticipation particularly difficult. This suggests that the current intervention programs may be of limited benefit when anticipating the strokes of higher-level athletes.
The improvement observed in each training group was sustained at the retention test. This study employed random practice in the cognitive training: within each training block, participants viewed shots landing at various locations rather than shots landing at the same location consecutively. Previous research has demonstrated that random practice is more effective than blocked practice both in enhancing participants’ ability to judge landing areas and in retaining these skills over time13. In motor skill learning, the reconstruction hypothesis suggests that the interference caused by the random sequence of tasks leads to short-term forgetting, requiring participants to reconstruct movement plans for new tasks; this process promotes the formation of more enduring task representations. Blocked practice, by contrast, involves repeatedly performing the same task across consecutive trials with the same movement plan, without the need for reconstruction53. The elaboration hypothesis further posits that, compared with the repetitive nature of blocked practice, random practice enhances the durability of representations through increased task comparison and analysis18. Although the two theories offer different mechanisms for the contextual interference effect in skill learning, both attribute the stability of training outcomes to the greater cognitive effort and heightened neural activity elicited by random practice12,54. This likely explains why the training effects observed in this study persisted for at least two weeks.
Limitations and future directions
The present study provides valuable insights into the effects of training interventions on anticipatory skills in badminton beginners; however, several limitations must be acknowledged. The small sample size, particularly within each training group, limits the generalizability of the findings; larger samples would yield more robust and reliable evidence. This issue is compounded by the homogeneity of the study population, which consisted solely of university students with similar levels of badminton experience, potentially limiting the applicability of the results to more diverse or elite athletic populations. Additionally, the reliance on computer-based tasks and simulated environments may not fully capture the complexity of real-world sports contexts. Although high-cognitive-load tasks and motor simulations were employed to mitigate this limitation, future research should incorporate more naturalistic settings, such as on-court training with live opponents, to enhance ecological validity and the applicability of the findings to actual sports performance. Furthermore, the study did not examine the long-term effects of the training beyond the two-week retention test, which, although informative, does not provide a complete picture of the durability of the training effects; future studies should include extended follow-up periods to better assess the sustained impact of these interventions on athletic performance. Finally, while the study effectively manipulated and measured participants’ ability to utilize information to improve anticipatory performance, it did not examine physiological changes. Subsequent research could employ techniques such as eye-tracking and functional near-infrared spectroscopy (fNIRS) to investigate whether the training enhances visual search efficiency and motor cortex activation, thereby providing a more comprehensive understanding of the underlying mechanisms.
Conclusion
For university students majoring in physical education with 1–2 years of specialized badminton training, a two-week cognitive training program based on audio-visual stimuli, conducted three times per week for 25–30 min per session, can significantly enhance anticipatory performance. The effects of this training extend to tasks involving high cognitive load and simulated real-world sports scenarios, and the benefits last for at least two weeks post-training. The study revealed that the audio-visual training program produced the quickest improvements, whereas the visual training program not only yielded more extensive benefits across tasks but also accelerated the onset of these benefits when both visual and auditory cues were present. These findings suggest that cognitive training programs incorporating a mix of visual and auditory stimuli can significantly enhance the anticipatory skills of unskilled players. In particular, the strategy of presenting visual cues during the anticipation phase, combined with guided discovery using audio-visual feedback during the feedback phase, appears to optimize learning and retention. This approach could be beneficial in other sports or training contexts where quick decision-making and predictive accuracy are critical.
Data availability
Data is provided within the manuscript or supplementary information files.
References
Morris-Binelli, K. & Müller, S. Advancements to the understanding of expert visual anticipation skill in striking sports. Can. J. Behav. Sci. 49, 262–268 (2017).
Williams, A. M. & Jackson, R. C. Anticipation in sport: Fifty years on, what have we learned and what research still needs to be undertaken? Psychol. Sport Exerc. 42, 16–24 (2019).
Abernethy, B., Schorer, J., Jackson, R. C. & Hagemann, N. Perceptual training methods compared: the relative efficacy of different approaches to enhancing sport-specific anticipation. J. Exp. Psychol. Appl. 18, 143–153 (2012).
Murphy, C. P. et al. Contextual information and perceptual-cognitive expertise in a dynamic, temporally-constrained task. J. Exp. Psychol. Appl. 22, 455–470 (2016).
Cañal-Bruland, R., Meyerhoff, H. S. & Müller, F. Context modulates the impact of auditory information on visual anticipation. Cogn. Res. Princ. Implic. 7, 76 (2022).
Sors, F. et al. The contribution of early auditory and visual information to the discrimination of shot power in ball sports. Psychol. Sport Exerc. 31, 44–51 (2017).
Sors, F. et al. Predicting the length of volleyball serves: the role of early auditory and visual information. PLoS One 13, e0208174 (2018).
Park, S. H., Kim, S., Kwon, M. & Christou, E. A. Differential contribution of visual and auditory information to accurately predict the direction and rotational motion of a visual stimulus. Appl. Physiol. Nutr. Metab. 41, 244–248 (2016).
Klatt, S. & Smeeton, N. J. Visual and auditory information during decision making in sport. J. Sport Exerc. Psychol. 42, 15–25 (2020).
Müller, S., Morris-Binelli, K., Hambrick, D. Z. & Macnamara, B. N. Accelerating visual anticipation in sport through temporal occlusion training: a meta-analysis. Sports Med. (2024).
Ryu, D., Abernethy, B., Park, S. H. & Mann, D. L. The perception of deceptive information can be enhanced by training that removes superficial visual information. Front. Psychol. 9, 1132 (2018).
Broadbent, D. P., Causer, J., Williams, A. M. & Ford, P. R. The role of error processing in the contextual interference effect during the training of perceptual-cognitive skills. J. Exp. Psychol. Hum. Percept. Perform. 43, 1329–1342 (2017).
Broadbent, D. P., Causer, J., Ford, P. R. & Williams, A. M. Contextual interference effect on perceptual–cognitive skills training. Med. Sci. Sports Exerc. 47, 1243–1250 (2015).
Fortes, L. S. et al. Virtual reality promotes greater improvements than video-stimulation screen on perceptual-cognitive skills in young soccer athletes. Hum. Mov. Sci. 79, e102856 (2021).
Smeeton, N. J., Williams, A. M., Hodges, N. J. & Ward, P. The relative effectiveness of various instructional approaches in developing anticipation skill. J. Exp. Psychol. Appl. 11, 98 (2005).
Hommel, B., Müsseler, J., Aschersleben, G. & Prinz, W. The theory of event coding (TEC): a framework for perception and action planning. Behav. Brain Sci. 24, 849–878 (2001).
Allerdissen, M., Güldenpenning, I., Schack, T. & Blasing, B. Recognizing fencing attacks from auditory and visual information: a comparison between expert fencers and novices. Psychol. Sport Exerc. 31, 123–130 (2017).
Shea, J. B. & Zimny, S. T. Knowledge incorporation in motor representation. In Complex movement behaviour: The motor-action controversy (eds Meijer, O. N. & Roth, K.) 289–314 (North Holland, 1988).
Williams, A. M., Davids, K. & Williams, J. G. P. Visual Perception and Action in Sport (Routledge, 1999).
Alder, D. B., Ford, P. R., Causer, J. & Williams, A. M. The effect of anxiety on anticipation, allocation of attentional resources, and visual search behaviours. Hum. Mov. Sci. 61, 81–89 (2018).
Williams, A. M., Ward, P., Knowles, J. M. & Smeeton, N. J. Anticipation skill in a real-world task: measurement, training, and transfer in tennis. J. Exp. Psychol. Appl. 8, 259–270 (2002).
Bischoff, M. et al. Anticipating action effects recruits audiovisual movement representations in the ventral premotor cortex. Brain Cogn. 92, 39–47 (2014).
Müller, S. & Abernethy, B. An expertise approach to training anticipation using temporal occlusion in a natural skill setting. Technol. Instruct. Cogn. Learn. 9, 295–312 (2014).
Put, K., Wagemans, J., Spitz, J., Williams, A. M. & Helsen, W. F. Using web-based training to enhance perceptual-cognitive skills in complex dynamic offside events. J. Sports Sci. 34, 181–189 (2016).
Fazel, F., Morris, T., Watt, A. & Maher, R. The effects of different types of imagery delivery on basketball free-throw shooting performance and self-efficacy. Psychol. Sport Exerc. 39, 29–37 (2018).
Pagé, C., Bernier, P. M. & Trempe, M. Using video simulations and virtual reality to improve decision-making skills in basketball. J. Sports Sci. 37, 2403–2410 (2019).
Wang, X. T., Ren, P. F., Miao, X. Y., Zhang, X., Qian, Y. M. & Chi, L. Z. Attention load regulates the facilitation of audio-visual information on landing perception in badminton. Percept. Mot. Skills 130, 1687–1713 (2023).
Cohen, J. Statistical Power Analysis for the Behavioral Sciences, 2nd edn (Academic Press, 1988).
Kingsley, A. F., Noordewier, T. G. & Vanden Bergh, R. G. Overstating and understating interaction results in international business research. J. World Bus. 52, 286–295 (2017).
Galli, M. et al. Guided versus standard antiplatelet therapy in patients undergoing percutaneous coronary intervention: a systematic review and meta-analysis. Lancet 397, 1470–1483 (2021).
Wang, X., Piantadosi, S., Le-Rademacher, J. & Mandrekar, S. J. Statistical considerations for subgroup analyses. J. Thorac. Oncol. 16, 375–380 (2021).
Hommel, B. Action control according to TEC (theory of event coding). Psychol. Res. 73, 512–526 (2009).
Keysers, C. & Gazzola, V. Expanding the mirror: vicarious activity for actions, emotions, and sensations. Curr. Opin. Neurobiol. 19, 666–671 (2009).
Rizzolatti, G. & Sinigaglia, C. The functional role of the parieto-frontal mirror circuit: interpretations and misinterpretations. Nat. Rev. Neurosci. 11, 264 (2010).
Turella, L., Wurm, M. F., Tucciarelli, R. & Lingnau, A. Expertise in action observation: recent neuroimaging findings and future perspectives. Front. Hum. Neurosci. 7, 637 (2013).
Abernethy, B. & Zawi, K. Pick-up of essential kinematics underpins expert perception of movement patterns. J. Mot. Behav. 39, 353–367 (2007).
Loiseau-Taupin, M., Ruffault, A., Slawinski, J., Delabarre, L. & Bayle, D. Effects of acute physical fatigue on gaze behavior and performance during a badminton game. Front. Sports Act. Living 3, 725625 (2021).
Heitz, R. P. The speed-accuracy tradeoff: history, physiology, methodology, and behavior. Front. Neurosci. 8, 150 (2014).
Bherer, L. et al. Training effects on dual-task performance: are there age-related differences in plasticity of attentional control? Psychol. Aging 20, 695 (2005).
Lövdén, M., Bäckman, L., Lindenberger, U., Schaefer, S. & Schmiedek, F. A theoretical framework for the study of adult cognitive plasticity. Psychol. Bull. 136, 659 (2010).
Williams, A. M. & Davids, K. Visual search strategy, selective attention, and expertise in soccer. Res. Q. Exerc. Sport 69, 111–128 (1998).
Cañal-Bruland, R., Müller, F., Lach, B. & Spence, C. Auditory contributions to visual anticipation in tennis. Psychol. Sport Exerc. 36, 100–103 (2018).
Van der Burg, E., Olivers, C. N., Bronkhorst, A. W. & Theeuwes, J. Pip and pop: nonspatial auditory signals improve spatial visual search. J. Exp. Psychol. Hum. Percept. Perform. 34, 1053–1065 (2008).
Vroomen, J. & de Gelder, B. Sound enhances visual perception: Cross-modal effects of auditory organization on vision. J. Exp. Psychol. Hum. Percept. Perform. 26, 1583–1590 (2000).
Laffere, A., Dick, F. & Tierney, A. Effects of auditory selective attention on neural phase: individual differences and short-term training. NeuroImage 213, 116717 (2020).
Jackson, R., Abernethy, B. & Wernhart, S. Sensitivity to fine-grained and coarse visual information: the effect of blurring on anticipation skill. Int. J. Sport Psychol. 40, 461–475 (2009).
Mann, D. L., Abernethy, B. & Farrow, D. Visual information underpinning skilled anticipation: the effect of blur on a coupled and uncoupled in situ anticipatory response. Atten. Percept. Psychophys. 72, 1317–1326 (2010).
Park, S. H. et al. Falling for a fake: the role of kinematic and non-kinematic information in deception detection. Perception 48, 330–337 (2019).
Shibata, K., Watanabe, T., Sasaki, Y. & Kawato, M. Perceptual learning incepted by decoded fMRI neurofeedback without stimulus presentation. Science 334, 1413–1415 (2011).
Seitz, A. R. & Watanabe, T. A unified model for perceptual learning. Trends Cogn. Sci. 9, 329–334 (2005).
Sweller, J., Ayres, P. & Kalyuga, S. Cognitive Load Theory (Springer, 2011).
Wolpert, D. M., Ghahramani, Z. & Jordan, M. I. An internal model for sensorimotor integration. Science 269, 1880–1882 (1995).
Schmidt, R. A. & Lee, T. D. Motor Control and Learning: A Behavioural Emphasis, 5th edn 488–489 (Human Kinetics, 2011).
Kantak, S. S., Mummidisetty, C. K. & Stinear, J. W. Primary motor and premotor cortex in implicit sequence learning–evidence for competition between implicit and explicit human motor memory systems. Eur. J. Neurosci. 36, 2710–2715 (2012).
Acknowledgements
This work was supported by the Scientific Research Program Funded by the Education Department of Shaanxi Provincial Government (Program No. 22JK0220) and the Qiongtai Normal University Key Laboratory of Child Cognition & Behavior Development of Hainan Province (Program No. 2024KF01).
Author information
Authors and Affiliations
Contributions
XTW and LZC conceptualized the research; PFR and XYM executed the experiment and analyzed the data; XTW wrote the original draft and prepared all figures; XTW, PFR and LZC reviewed and edited the paper. All authors reviewed the manuscript.
Corresponding author
Ethics declarations
Competing interests
The authors declare no competing interests.
Additional information
Publisher’s note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Electronic supplementary material
Below is the link to the electronic supplementary material.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.
About this article
Cite this article
Wang, X., Ren, P., Miao, X. et al. Multisensory training enhances anticipation skills in badminton novices. Sci Rep 15, 9862 (2025). https://doi.org/10.1038/s41598-025-93475-7