Introduction

Cerebral palsy (CP) results from non-progressive damage sustained during fetal or infant brain development and affects the development of movement and posture1,2. Globally, CP has an estimated incidence ranging from 0.20 to 0.35%, with approximately 5 million CP patients in China3,4,5,6. CP is one of the major causes of physical disability in childhood, leading to diverse types and degrees of secondary functional impairment during child development and thereby imposing significant economic and psychological burdens on families7. The current primary treatment approach for CP is rehabilitation training, comprising therapy in hospitals or specialized rehabilitation institutions and home-based rehabilitation training8. Hospital- or institution-based rehabilitation therapy not only incurs substantial economic costs but also demands considerable human and time resources from families9,10. Owing to disparities in regional development in China, most remote areas lack the medical infrastructure needed to meet the basic rehabilitation needs of CP patients11,12. Thus, identifying a reliable and convenient approach to promote home-based rehabilitation training for families affected by CP is crucial.

The rapid evolution of social media has transformed it into a crucial tool for disseminating knowledge, exchanging information, and sharing opinions among individuals13,14,15. Social media platforms, such as TikTok, Kwai, Weibo, Bilibili, and RED, have emerged as the dominant Chinese social media outlets16,17. The highly interactive nature of social media motivates users to turn to these platforms to obtain medical knowledge and make healthcare decisions18,19,20. However, this growing trend has a downside. Chinese users have demonstrated a strong interest in unconventional remedies, resulting in dishonest medical promoters spreading misleading and inaccurate information21,22.

Consequently, this surge of unreliable content poses potential health risks to consumers. Although several studies have examined online video content regarding CP on YouTube, that platform is largely inaccessible to average citizens in mainland China23,24,25. This disparity highlights a significant research gap in evaluating the prevalence and quality of Chinese-language CP videos on domestic social media platforms. This study aims to assess the reliability and quality of Chinese videos related to cerebral palsy on local social media platforms, helping to bridge the gap in this area.

Method

Ethical considerations

This study was conducted without the involvement of clinical data, human specimens, or laboratory animals. All information used in the research was sourced from publicly available videos on Chinese social media platforms, thereby ensuring the protection of personal privacy. As there was no interaction with users, the requirement to obtain informed consent was waived. The study received ethical approval from the ethics committee of Lanzhou University Second Hospital (approval 2023 A-419) and was registered with the Chinese Clinical Trial Registry (ChiCTR2300074750).

Data collection

Between August 21, 2023, and August 31, 2023, a search was conducted on various social media platforms, including TikTok (Chinese version: www.douyin.com), Kwai (Chinese version: www.kuaishou.com), Weibo (https://weibo.com), Bilibili (www.bilibili.com), and RED (www.xiaohongshu.com), using the search term “cerebral palsy.” All videos on each platform were ranked, and two independent searchers sequentially viewed them in descending order. The inclusion criteria for evaluation were videos in Chinese featuring primary content relevant to CP. Any discrepancies between the two searchers were resolved through a consensus discussion with a third party.

To address discrepancies in video selection between the two researchers, we developed an explicit consensus discussion protocol structured in four steps. First, two independent researchers searched the same social media platform using identical keywords, recording all results and noting any discrepancies. Second, each researcher independently screened the videos they found against the predefined inclusion and exclusion criteria. Third, for videos on which the screeners disagreed, a review panel comprising the original searchers and at least one external domain expert discussed whether the videos met the study’s requirements. Finally, decision-making was guided by principles of global consistency, integrity, unbiasedness, validity verification, and time-bound completion, with an emphasis on openness and transparency. The roles involved were two professionals responsible for the search and initial filtering, together with at least one external consultant with expertise in social media analytics or health information dissemination. The entire deliberation process and its conclusions were formally documented, ensuring that the final sample was selected in a scientifically rigorous and transparent manner, thereby enhancing the study’s reliability.

Furthermore, videos that met the inclusion criteria were subjected to further review. The exclusion criteria were duplicates, advertisements, irrelevant content (videos whose content neither mentions nor relates to cerebral palsy), and videos in languages other than Chinese.

Previous studies that used the DISCERN tool for online video research applied a Cohen’s effect size of 0.46 for sample size calculation in G*Power 3.126,27. With power set at 80%, an alpha of 0.05, and allowance for potential loss of video segments, a minimum of 300 videos was determined to be necessary, corresponding to the top 60 eligible videos from each of the five platforms (TikTok, Kwai, Weibo, Bilibili, and RED). Initially, 500 videos related to CP were retrieved from these platforms. After excluding 48 advertisements, 27 duplicate videos, 69 irrelevant videos, and 12 non-Chinese videos, 344 eligible videos were included in the final analysis. The research framework is depicted in Fig. 1.

Fig. 1
figure 1

Flow chart of video selection.
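The sample-size step above was performed in G*Power; as a rough cross-check, the same a priori calculation can be sketched in Python. This is an illustration only: it assumes the reported 0.46 is Cohen’s f for a five-group omnibus comparison and uses the noncentral chi-square approximation that G*Power applies to such tests, and `required_total_n` is our own name, not part of any cited tool.

```python
# A priori sample-size search via the noncentral chi-square approximation.
# Assumption (ours): the effect size 0.46 is Cohen's f for a 5-group test.
from scipy.stats import chi2, ncx2

def required_total_n(f=0.46, k_groups=5, alpha=0.05, target_power=0.80):
    """Smallest total N whose power under noncentrality N*f^2 reaches target_power."""
    df = k_groups - 1
    crit = chi2.ppf(1 - alpha, df)   # critical value of the omnibus test
    n = k_groups                     # start at one video per group
    while 1 - ncx2.cdf(crit, df, n * f**2) < target_power:
        n += 1
    return n

print(required_total_n())
```

Whatever minimum such a calculation yields, the study rounded up to 300 videos (60 per platform) to allow for loss of video segments.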

Video categorization

The data were collected separately from the five social media platforms, with each video being assessed for various details such as upload date, video content, video creator’s identity, creator’s attitude, video duration, and interactive behaviors. The video content analysis predominantly centered on four main aspects: providing knowledge on CP, offering guidance on CP rehabilitation training (such as training intensity, frequency, and precautions), showcasing news about CP, and vlogs documenting the daily lives of individuals living with CP. The identities of video creators were categorized as experts, relevant medical institutions, or amateur enthusiasts. Experts included medical professionals, specialists, researchers, and healthcare workers actively engaged in CP research and medical practices. Relevant institutions included hospitals, rehabilitation centers, medical media, and government healthcare organizations. Amateur enthusiasts were defined as individuals lacking relevant medical or research backgrounds. Moreover, interactive behaviors were scrutinized to gauge audience engagement and video popularity, encompassing likes, comments, bookmarks, and shares.

Content analysis of informational videos

The accuracy and reliability of the videos were assessed using the Journal of the American Medical Association (JAMA) benchmarks28 (Table 1). The JAMA criteria comprise four standards applied to online video content and resources: authorship, attribution, disclosure, and general information29. Authorship requires videos to include information about authors, contributors, and contact details. Attribution necessitates proper crediting of references and sources. Disclosure involves declaring relationships with interests, funding, sponsorship, advertising, support, and video ownership. General information calls for indication of the date of video publication and updates. Each criterion met was awarded 1 point, yielding a total score from 0 to 4, where 0 represents low video quality and accuracy and 4 indicates high video quality and accuracy.

Table 1 JAMA benchmarks.
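Scoring against the four benchmarks reduces to a binary checklist; a minimal sketch follows (the criterion keys and the function name `jama_score` are our own labels for the standards in Table 1):

```python
# One point per JAMA benchmark satisfied, for a total of 0-4.
JAMA_CRITERIA = ("authorship", "attribution", "disclosure", "general_information")

def jama_score(ratings):
    """ratings: dict mapping each criterion to True/False for one rated video."""
    return sum(int(bool(ratings.get(c, False))) for c in JAMA_CRITERIA)

# A video naming its creator and showing its upload date, but citing no
# sources and disclosing no sponsorship, scores 2 of 4.
example = {"authorship": True, "attribution": False,
           "disclosure": False, "general_information": True}
print(jama_score(example))  # -> 2
```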

The overall quality of the videos was scored based on the quality and utility of the provided information. The educational value of each video was assessed using the 5-point Global Quality Scale (GQS)30,31 (Table 2). GQS scores range from 1 to 5, with higher scores indicating better educational quality. Results were categorized as low quality (1–2 points), moderate quality (3 points), and high quality (4–5 points).

Table 2 Global quality scale.
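The GQS banding described above maps directly onto a small helper (a sketch; `gqs_band` is our own name, and the cut-offs follow the text):

```python
def gqs_band(score):
    """Map a 1-5 GQS score to the quality category used in this study."""
    if score not in (1, 2, 3, 4, 5):
        raise ValueError("GQS is scored on a 1-5 scale")
    if score <= 2:
        return "low"
    if score == 3:
        return "moderate"
    return "high"

print([gqs_band(s) for s in range(1, 6)])
# -> ['low', 'low', 'moderate', 'high', 'high']
```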

Moreover, to assess the quality and reliability of the health information presented in the included videos, an adapted five-question version of the DISCERN assessment tool was employed, as elaborated by Koçyiğit et al.32. The tool consists of a series of questions requiring a binary “yes” or “no” response, with one point awarded for each “yes” answer33,34,35. The maximum total score is 5 points, per the scoring system detailed in Table 3.

Table 3 Questions adapted from the DISCERN tool for evaluating the reliability of videos (1 point is given for every yes answer and 0 for no answer; Charnock et al.33).

Statistical analysis

Statistical analysis was conducted using SPSS (version 27.0, IBM Corp). Descriptive statistics were presented as means (standard deviation [SD]). Categorical data were expressed as frequencies and percentages (%). The Kruskal-Wallis rank sum test was employed to analyze non-parametric data among groups. Adjusted P-values were calculated using the Bonferroni method for post-hoc pairwise comparisons to assess the significance among multiple groups. P < .05 was considered statistically significant.
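The omnibus-then-post-hoc procedure described above can be reproduced outside SPSS. The sketch below, with synthetic scores standing in for the real ratings, pairs a Kruskal-Wallis test with Bonferroni-adjusted pairwise Mann-Whitney U tests; this is one common realization of the workflow, and SPSS’s internal post-hoc procedure may differ in detail.

```python
# Kruskal-Wallis omnibus test followed by Bonferroni-adjusted pairwise
# Mann-Whitney U tests, mirroring the SPSS workflow described in the text.
from itertools import combinations
import numpy as np
from scipy.stats import kruskal, mannwhitneyu

rng = np.random.default_rng(0)
# Hypothetical GQS-like scores for three source groups (illustration only).
groups = {
    "experts": rng.integers(1, 6, 50),
    "institutions": rng.integers(1, 5, 50),
    "amateurs": rng.integers(1, 4, 50),
}

h_stat, p_omnibus = kruskal(*groups.values())
print(f"H = {h_stat:.2f}, P = {p_omnibus:.4f}")

if p_omnibus < 0.05:
    pairs = list(combinations(groups, 2))
    for a, b in pairs:
        _, p = mannwhitneyu(groups[a], groups[b])
        p_adj = min(p * len(pairs), 1.0)  # Bonferroni correction
        print(f"{a} vs {b}: adjusted P = {p_adj:.4f}")
```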

Results

General information of videos

A total of 344 videos published between 2018 and 2023 were obtained from the five social media platforms. Most videos (n = 256, 74.42%) were released in 2022 and 2023 (Fig. 2). The average video duration across all platforms was 92.12 s (SD 105.69) (Fig. 3). Descriptive statistics for videos from various sources and content types are presented in Table 4. Specifically, the search on TikTok yielded 77 videos with 2,234,258 likes, 43,868 comments, 14,248 saves, 18,392 shares, and an average video duration of 90.86 s (SD 90.97). The Kwai search yielded 73 videos with 37,050 likes, 4,641 comments, 5,752 saves, and an average video duration of 70.81 s (SD 81.67). The Weibo search yielded 63 videos with 7,946 likes, 662 comments, 440 shares, and an average video duration of 108.71 s (SD 97.85). The Bilibili search produced 64 videos with 13,462 likes, 430 comments, 2,887 saves, 1,184 shares, and an average video duration of 124.66 s (SD 123.88). The RED search retrieved 67 videos with 5,403 likes, 1,032 comments, 2,700 saves, and an average video duration of 70.12 s (SD 123.78). Notably, analysis of the video sources showed that experts posted 40.70% (140/344) of the videos, while health-related institutions accounted for 32.85% (113/344) and amateurs for 26.45% (91/344) of the content (Figs. 4 and 5). Further analysis of the video content revealed that rehabilitation training was the most prevalent theme, accounting for 45.64% (157/344) of all videos, followed by disease knowledge (24.42%, 84/344), vlogs (20.93%, 72/344), and news (9.01%, 31/344) (Fig. 6).

Fig. 2
figure 2

A line chart displays 344 eligible videos on cerebral palsy released from 2018 to 2023.

Fig. 3
figure 3

The length of videos related to cerebral palsy on 5 social media platforms.

Table 4 Descriptive statistics for videos of different sources and content types.
Fig. 4
figure 4

Bar graph depicting video producers’ identity across 5 social media platforms.

Fig. 5
figure 5

Stacking diagram displays the content analysis of cerebral palsy-related videos by 3 kinds of video producers.

Fig. 6
figure 6

Stacking diagram shows the content analysis of cerebral palsy-related videos across 5 social media platforms.

Video quality assessments

The mean scores of the 344 videos were 1.62 (SD 0.60) for JAMA, 2.05 (SD 0.99) for GQS, and 1.26 (SD 1.26) for DISCERN. JAMA, GQS, and DISCERN scores were also compared across various platforms, sources, and content types (Fig. 7).

Fig. 7
figure 7

Ridge plot displaying the combined distribution of JAMA, GQS, and DISCERN scores.

JAMA, GQS, and DISCERN scores were analyzed using different platforms, including TikTok, Kwai, Weibo, Bilibili, and RED (Fig. 8). For JAMA scores, the platforms scored as follows: Weibo had the highest score of 1.79 (SD 0.68), followed by TikTok with a score of 1.68 (SD 0.64), Kwai with 1.64 (SD 0.61), Bilibili with 1.59 (SD 0.50), and RED with a score of 1.39 (SD 0.49). Significant differences were observed in JAMA scores among the platforms (H = 14.53, P < .05), with post hoc analyses indicating that RED scored significantly lower than TikTok, Kwai, Weibo, and Bilibili (P < .05). Regarding GQS scores, the platforms scored as follows: TikTok had the highest score of 2.39 (SD 1.07), followed by Weibo with a score of 2.25 (SD 1.02), Bilibili with 2.09 (SD 0.89), Kwai with 1.78 (SD 1.04), and RED with 1.73 (SD 0.73). There were significant differences in GQS scores among the platforms (H = 25.34, P < .001), with post hoc analyses revealing that TikTok, Weibo, and Bilibili had significantly higher scores than Kwai and RED (P < .05). In terms of DISCERN scores, the platforms scored as follows: TikTok had the highest score of 1.71 (SD 1.46), followed by Weibo with a score of 1.62 (SD 1.31), Bilibili with 1.22 (SD 1.09), Kwai with 0.92 (SD 1.18), and RED with 0.79 (SD 0.93). Significant differences existed in DISCERN scores among the platforms (H = 27.25, P < .001), with post hoc analyses indicating that TikTok and Weibo had significantly higher scores than both Kwai and RED (P < .05).

Fig. 8
figure 8

JAMA, GQS, and DISCERN scores for videos of different platforms.

Meanwhile, JAMA, GQS, and DISCERN scores were analyzed across different sources (Fig. 9). Experts, health-related institutions, and amateurs achieved JAMA scores of 1.53 (SD 0.58), 1.61 (SD 0.71), and 1.77 (SD 0.42), respectively. Significant differences in JAMA scores were observed among the three sources (H = 13.35, P < .05). Post hoc pairwise comparisons with Bonferroni correction revealed that amateurs received significantly higher JAMA scores than experts and health-related institutions (P < .05). Regarding GQS scores, experts, health-related institutions, and amateurs attained scores of 2.41 (SD 1.03), 2.03 (SD 0.93), and 1.54 (SD 0.75), respectively. Statistically significant differences in GQS scores were found across the three sources (H = 45.14, P < .001). Subsequent post hoc pairwise comparisons with Bonferroni correction indicated that experts had significantly higher GQS scores than health-related institutions (P < .05), and health-related institutions scored significantly higher than amateurs (P < .05). For DISCERN scores, experts, health-related institutions, and amateurs obtained scores of 1.71 (SD 1.38), 1.19 (SD 1.16), and 0.63 (SD 0.86), respectively. DISCERN scores differed significantly among the three sources (H = 39.67, P < .001). Post hoc pairwise comparisons with Bonferroni correction indicated that experts scored significantly higher in DISCERN than health-related institutions (P < .05), and health-related institutions scored significantly higher than amateurs (P < .05).

Fig. 9
figure 9

JAMA, GQS, and DISCERN scores for videos of different sources.

Subsequently, JAMA, GQS, and DISCERN scores were assessed within different content types, and distinct patterns emerged (Fig. 10). Regarding JAMA scores, disease knowledge, rehabilitation training, news, and vlogs received ratings of 1.77 (SD 0.59), 1.55 (SD 0.60), 1.74 (SD 0.63), and 1.54 (SD 0.55), respectively. Statistical analysis revealed significant variation in JAMA scores across content categories (H = 10.94, P < .05). Post hoc pairwise comparisons with Bonferroni correction showed that the JAMA score for disease knowledge significantly surpassed that for rehabilitation training (P < .05). For GQS scores, disease knowledge, rehabilitation training, news, and vlogs received scores of 2.31 (SD 1.09), 2.03 (SD 0.91), 2.00 (SD 1.06), and 1.83 (SD 0.96), respectively. Significant differences were likewise observed in GQS scores across content categories (H = 8.89, P < .05); notably, the GQS score for disease knowledge was superior to that for vlogs in post hoc pairwise comparisons with Bonferroni correction (P < .05). Concerning DISCERN scores, disease knowledge, rehabilitation training, news, and vlogs received ratings of 1.48 (SD 1.31), 1.19 (SD 1.19), 1.48 (SD 1.34), and 1.04 (SD 1.30), respectively. In contrast to JAMA and GQS scores, no statistically significant differences in DISCERN scores were observed among the content categories (H = 6.69, P = .082).

Fig. 10
figure 10

JAMA, GQS, and DISCERN scores for videos of different content types.

Correlation analysis

The Spearman correlation analysis demonstrated a strong positive correlation between likes and comments (r = .87, P < .001). The remaining correlations, although statistically significant, were weak. Specifically, likes exhibited a weak negative correlation with duration (r = −.20, P < .001) and a weak positive correlation with JAMA (r = .11, P = .035). Similarly, comments showed weak negative correlations with duration (r = −.19, P < .001), GQS (r = −.18, P < .001), and DISCERN (r = −.16, P = .003), and a weak positive correlation with JAMA (r = .15, P = .004). These findings are summarized in Table 5.

Table 5 Spearman correlation analysis between video variables and video quality scores.
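Correlations of this kind can be computed with `scipy.stats.spearmanr`; a minimal sketch on toy engagement figures follows (illustrative values, not the study’s data):

```python
import numpy as np
from scipy.stats import spearmanr

# Toy engagement metrics for six hypothetical videos.
likes    = np.array([120, 4500, 300, 89, 15000, 720])
comments = np.array([10, 610, 35, 4, 2100, 30])

r, p = spearmanr(likes, comments)
print(f"r = {r:.2f}, P = {p:.4f}")
```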

Discussion

Nowadays, the utilization of social media for obtaining health-related information has become increasingly common. Therefore, the current study sought to address the paucity of data on the characteristics of Chinese social media content related to children with CP, mainly focusing on social media within mainland China. Furthermore, the study assessed video quality using established tools such as JAMA, GQS, and DISCERN. The potential correlations between video quality and various video features, including duration, likes, and comments, were also explored.

The analysis included 344 social media videos on CP in mainland China, with over 70% published in the two most recent years. This suggests that individuals affected by CP and their families tend to encounter recent content when seeking information about the condition on social media, a pattern that may be reinforced by platform algorithms that promote newer content36. User engagement metrics such as likes, comments, saves, and shares serve as indicators of video popularity37,38. Notably, a positive correlation was found between likes and comments, implying that more popular videos are more likely to receive promotion. Among the platforms studied, videos posted on TikTok were the most popular. Therefore, when disseminating educational videos concerning CP, emphasis should be placed on leveraging TikTok for content sharing while encouraging audience interaction and periodically refreshing or reposting content (for example, every two years).

The consistently low average scores indicate that videos about CP on Chinese social media platforms generally perform poorly in information disclosure, overall quality, and reliability; their overall performance can therefore be deemed unsatisfactory. It is vital to inform patients, their families, and healthcare professionals that such misleading videos may hinder the effectiveness of rehabilitation programs.

Across the five platforms, RED performed the worst in information disclosure, while Kwai and RED received the lowest scores for both overall quality and reliability. RED’s poor information disclosure can be attributed to its primary focus as a social platform for sharing personal experiences, where content producers prioritize individual experiences over professional medical knowledge39,40. Consequently, detailed information regarding CP, professional knowledge of rehabilitation, and the latest research developments on the disease are susceptible to neglect or misinformation within this environment. The low overall quality of Kwai and RED may be associated with the user demographics and content production patterns on these platforms: both boast a significant base of young users, many of whom lack medical backgrounds or professional training, leading to videos that lack accuracy, professionalism, and depth30,41,42. Furthermore, content producers on these platforms often prioritize attractiveness and entertainment value over accuracy and scientific rigor, which further impacts the overall quality of the content21,25. The poor reliability of Kwai and RED could be attributed to their content review mechanisms and user interaction patterns, suggesting that these platforms lack stringent medical content review mechanisms and thereby allow the circulation of a substantial amount of inaccurate or misleading information43,44. Moreover, user interactions and comments may exacerbate this issue, as non-professional discussions and advice can further misinform the audience45,46.

Further analysis of sources and content types revealed that information on CP shared by experts was rated highest in quality and reliability, underscoring the value of expert opinion in delivering precise and dependable information. Experts possess extensive medical knowledge and practical experience, which enables them to provide evidence-based advice and guidance47. Compared with laypersons or non-professionals, experts are more inclined to furnish accurate and comprehensive information that helps families managing CP make well-informed decisions48. Despite the superior quality of expert-provided information, its quantity and accessibility may be limited. On social media platforms in particular, content posted by experts may lack the richness and diversity found in posts by ordinary users or non-professionals. This limitation could present challenges for families of individuals with CP seeking relevant and varied information. Hence, along with promoting information sharing by experts, it is imperative to explore alternative strategies to improve the accessibility and diversity of available information.

We found that videos about cerebral palsy on Chinese social media platforms are generally of low quality, which significantly impacts family decision-making. Families often rely on such video content when seeking information and support related to cerebral palsy. Given the low quality of these videos, however, families may receive inaccurate or misleading information and may consequently base treatment plans and rehabilitation strategies on misinformation. For example, certain videos may exaggerate the effectiveness of specific treatments while ignoring scientific evidence and professional advice, thereby affecting families’ trust in and choice of treatment options. Low-quality videos may also hinder the effectiveness of rehabilitation programs: incomplete or incorrect rehabilitation guidance can lead patients to take inappropriate measures that not only delay the rehabilitation process but also carry negative health consequences. If patients and families rely excessively on such information, they may even disregard the advice of healthcare professionals, further hindering the implementation of a professional rehabilitation program. These consequences highlight the importance of improving the quality of cerebral palsy-related information on social media. To ensure that patients and families have access to accurate health information, platform regulation should be strengthened, the professionalism of content creators improved, and the involvement of medical professionals in content creation and review encouraged. Future research should examine the specific effects of low-quality videos and explore ways to mitigate these problems through education and intervention, while harnessing social media to improve the public’s ability to recognize high-quality health information.

Several key measures should be implemented to enhance the reliability and accuracy of CP rehabilitation information. Firstly, it is recommended that platforms should collaborate with professional organizations to strengthen the moderation and management of CP rehabilitation information. By establishing user feedback mechanisms, platforms can elevate standards for information disclosure, overall quality, and reliability. These efforts are essential in reducing the dissemination of false or misleading information. Secondly, healthcare professionals and relevant organizations should proactively disseminate accurate and reliable CP rehabilitation information on social media. This approach aims to enhance public awareness and understanding of CP rehabilitation. Finally, governments and relevant organizations should intensify promotional and educational efforts surrounding CP rehabilitation information. Increasing societal attention and support for families with CP patients ensures they have access to accurate and valuable information from reliable sources.

This is the first study to evaluate the quality and reliability of CP-related videos on social media in mainland China using a combination of tools such as JAMA, GQS, and DISCERN. Analysis of the relationship between likes, comments, video duration, and video quality revealed a positive correlation between likes and comments. However, this study has some limitations. The tools employed, namely JAMA, GQS, and DISCERN, are primarily used in the assessment of the quality of medical information and may have constraints when applied to the context of family-based CP rehabilitation49,50. Furthermore, our analysis focused solely on Chinese videos within mainland China, potentially neglecting pertinent videos in other languages, thereby introducing regional and cultural biases to the study outcomes.

Conclusion

Chinese online short videos offer a convenient way for families with CP patients to access information. However, the reliability of these videos poses serious concerns. Therefore, social media platforms should strengthen the review and management of medical content, improve content professionalism and accuracy, and ensure that users can access valuable information while browsing.