Abstract
Stroke remains a significant global health concern. Despite numerous stroke-related videos on social media, research evaluating their information quality across platforms remains limited. This study compared the information quality and content of stroke-related videos on BiliBili, Douyin and Xiaohongshu, analyzing 227 videos across the three platforms (58 from BiliBili, 88 from Douyin and 81 from Xiaohongshu). Information quality was assessed using adapted HONCode and PEMAT-A/V standards. Content analysis examined nine stroke aspects (definition, symptoms, etiology, assessment, treatment, outcome, complications, risk factors and prevention). Statistical analyses included the Mann–Whitney U test, the Kruskal–Wallis test and Spearman correlation analysis. Overall, videos showed moderate information quality (72.7% achieving medium HONCode levels). Compliance rates were 4.0% for Principle 4 (source reference) and 2.2% for Principle 5 (evidence for claims). Videos showed relatively high understandability (median 0.73; IQR 0.2) but suboptimal actionability (median 0.67; IQR 1.0). Content completeness was low (median 2.00, IQR 3.0), with treatment (63.0%) and symptoms (55.1%) mentioned most frequently and assessment (15.4%) and complications (15.0%) least frequently. Spearman correlation analysis indicated mostly no correlation between video information quality and user engagement (likes, collections, comments, shares) on the three platforms. Among the three platforms, Douyin had significantly higher information quality (P < 0.001), while Xiaohongshu showed lower understandability (P = 0.032) and content completeness (P < 0.001). In summary, the information quality of stroke videos on BiliBili, Douyin and Xiaohongshu was generally moderate, but the videos commonly lacked evidence support, actionability and content completeness. Among these platforms, Douyin demonstrated relatively better performance, while Xiaohongshu showed poorer understandability and completeness.
This study recommends that video publishers focus on enhancing evidence support and actionability, particularly regarding stroke assessment and complications, to help the public access more accurate and complete stroke information. For BiliBili and Xiaohongshu, increasing medical professional participation is recommended to improve information quality. Xiaohongshu needs to improve content understandability by using simpler and clearer language to explain stroke-related knowledge.
Introduction
Stroke remains the second leading cause of death and the third leading cause of disability worldwide, significantly impacting global public health1,2. According to the World Stroke Organization-Lancet Neurology Commission, global stroke deaths are projected to increase by 50% from 2020 to 2050, while stroke-related disabilities will rise by 31% during the same period3. In the face of this increasingly concerning trend, access to reliable stroke-related health information is becoming especially important for both healthcare providers and patients.
The rapid development of social media platforms has enabled more people to access health information. Users can now share their experiences and express health needs on these platforms4,5. Social media has become an important channel for health information due to its convenience and interactivity6,7. However, this method of access also brings potential risks, including varying information quality and the spread of misinformation8,9. These risks make it particularly important to evaluate the reliability of various health information on social media platforms. China’s mainstream social media platforms include BiliBili, Douyin, Xiaohongshu, Kuaishou, WeChat, and Weibo, each serving different purposes in the digital landscape. For this study, we selected BiliBili, Douyin, and Xiaohongshu as our research platforms. BiliBili is a video-sharing platform popular among younger users, with 348 million active users monthly. It focuses on long-form professional content10. Douyin is a leading short-form platform with 1.5 billion monthly active users. It provides concise and easy-to-understand information snippets11. Xiaohongshu integrates socialization and content sharing with over 300 million users12. These platforms were selected because they are primarily designed for public video content sharing and discovery, providing systematic search capabilities essential for comprehensive health information analysis. WeChat was excluded as it primarily functions as a private messaging platform13, while Weibo focuses on text-based microblogging14 and Kuaishou has significant user overlap with Douyin15, making these three selected platforms most suitable for video-based health information research. These selected platforms, through their diverse content formats and audience reach, constitute the main channels for health information dissemination in China.
With the proliferation of health content on social media, researchers have been systematically evaluating the information quality of health-related videos on mainstream social media platforms, including BiliBili, Douyin and Xiaohongshu. Previous research results indicate varying information quality of videos across different diseases. Studies on breast cancer, lung cancer and thyroid nodules found that the information quality of video information on BiliBili and Douyin platforms was significantly inadequate16,17,18. Meanwhile, studies on diabetes, thyroid cancer and gastroesophageal reflux considered the information quality on relevant platforms to be satisfactory19,20,21. Additionally, a study on colorectal polyp-related videos on Douyin, WeChat and Xiaohongshu platforms similarly found that the information quality and reliability of these videos were insufficient22. While existing research has evaluated the information quality of health videos related to various diseases, studies assessing local social media videos for stroke, a common and serious disease, remain limited. Current stroke-related video research primarily focuses on the international YouTube platform, including analyses of symptom recognition, acute stroke video information quality and social media activity23,24,25. However, comprehensive multi-platform analysis of stroke-related videos on mainstream Chinese platforms such as BiliBili, Douyin and Xiaohongshu remains insufficient. Therefore, it is necessary to evaluate and analyze the information quality and content characteristics of stroke-related videos on BiliBili, Douyin and Xiaohongshu.
Materials and methods
Search strategy
Basic searches were conducted on December 11, 2024, on the BiliBili, Douyin and Xiaohongshu platforms using the standardized medical term “stroke” (“脑卒中”) as the search keyword26. Although colloquial synonyms for stroke are commonly used, this study selected the standardized terminology because it has been widely adopted in previous social media health studies27,28,29, provides greater precision, and aligns with clinical guidelines30, ensuring methodological consistency and research rigor.
To ensure standardized search conditions and minimize algorithm bias, we implemented several control measures. Researchers created new accounts without viewing history to avoid personalized content recommendations. All three platforms were searched on the same day using identical procedures, with searches conducted at BiliBili (9:00–9:30 a.m.), Douyin (10:00–10:30 a.m.), and Xiaohongshu (11:00–11:30 a.m.). The 30-min intervals between searches allowed for systematic data recording while minimizing cross-platform algorithm learning effects. To further control the algorithm’s influence, we avoided clicking on videos during the search phase, used incognito browsing mode, cleared the browser cache between platforms, and maintained consistent search query formatting. All search results were immediately saved in Excel to preserve the exact ranking order. For reproducibility, we recorded search timestamps, captured screenshots of search results, documented complete video URLs with ranking positions, and noted any platform-specific interface variations.
This study selected the top 100 videos from each platform (300 videos total across three platforms) using each platform’s default search ranking function, which automatically arranges content based on factors such as relevance, popularity and recency. While each platform employs different ranking algorithms, using the default search settings reflects how ordinary users access health information on these platforms, ensuring the ecological validity of the research findings. The top 100 ranked videos on each platform were chosen as samples for two reasons. First, the method established in previous YouTube health video ranking research indicates that the top 100 ranked videos have significantly higher stability than other videos27. Second, recent studies on gallstones on TikTok and atopic dermatitis on YouTube also used the top 100 videos as samples, confirming that this sample size is suitable for health content analysis on social media28,29. This approach ensures sufficient statistical power while maintaining focus on the content most likely to be accessed by general users seeking stroke information.
Video screening and data collection
Data extraction from video pages and publisher homepages was completed within 48 h (December 12–13, 2024) to minimize temporal variations in engagement metrics. Each video’s characteristics and engagement data were recorded at the time of access to maintain consistency across all measurements. Exclusion criteria included duplicate videos (both within-platform and cross-platform duplicates), non-stroke content, purely commercial advertisements and non-Chinese content.
Two independent researchers (A and B) conducted the screening. Researcher A performed initial identification, while Researcher B verified selections. During initial screening, both independently excluded videos that failed to meet the criteria based on titles and descriptions. Subsequently, both researchers manually reviewed the initially screened videos to confirm stroke-related content. Duplicates were identified by comparing video IDs and content within platforms, and by applying cross-platform duplicate criteria based on content characteristics, including substantially similar titles, comparable duration, visually identical key content, and the same core medical information. For cross-platform duplicates, we distinguished original content from reposts by examining account creation dates, original watermarks, and content alignment with publisher expertise, then prioritized retention based on account verification status, upload chronology, video quality, and credential completeness. Non-Chinese videos were identified and excluded directly by observing the language content. Commercial advertisements were identified by product promotions or purchase links. Inter-rater reliability was assessed using Cohen’s Kappa coefficient. Disagreements were resolved through discussion or by consulting a third researcher (C).
Researchers A and B manually extracted data for all 227 videos by visiting each video page and the publisher’s homepage. They collected video duration and engagement metrics, including likes, collections, comments, shares and video sources. Sources were categorized as health professionals (clinicians, nurses, pharmacists, healthcare workers with health credentials who display qualifications or affiliated health institutions on their profile), general users (individuals without healthcare backgrounds), science communicators (educators, writers, professionals in related disciplines), news organizations (accredited media outlets), nonprofit organizations (health interest groups, patient support organizations, health education institutions) and for-profit organizations (commercial entities like pharmaceutical companies, health device manufacturers, health product sellers). Video characteristics came directly from video pages, while publisher information was based on self-reported data from official account homepages.
Quality assessment tools
This study analyzed stroke-related educational videos from two dimensions: information quality and content completeness.
For information quality assessment, this study selected the Health on the Net Code (HONCode) and the Patient Education Materials Assessment Tool (PEMAT-A/V) to assess video information quality. HONCode was chosen because its eight specific indicators enable multidimensional quality analysis of health information, allowing separate assessment of stroke videos’ authoritativeness, transparency, and reference support31,32. The HONCode framework evaluates eight principles: authority, complementarity, privacy protection, attribution, evidence support, transparency, financial disclosure and advertising policy. The specifics of the HONCode principles are presented in Table 2. Videos were classified by scoring 0–2 points as low compliance, 3–5 points as medium compliance and 6–8 points as high compliance, following similar approaches used in studies on idiopathic pulmonary fibrosis and chronic obstructive pulmonary disease33,34.
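The three-level classification rule described above can be sketched as a small helper function. This is only an illustration of the scoring thresholds; the function name and data layout are hypothetical and not part of the study's actual analysis pipeline.

```python
def honcode_level(principle_scores):
    """Classify a video's HONCode compliance from eight binary
    principle scores (1 = compliant, 0 = non-compliant)."""
    if len(principle_scores) != 8:
        raise ValueError("HONCode defines exactly eight principles")
    total = sum(principle_scores)
    if total <= 2:
        return "low"     # 0-2 principles met
    if total <= 5:
        return "medium"  # 3-5 principles met
    return "high"        # 6-8 principles met

# A video meeting only principles 1-3 falls into the medium band:
print(honcode_level([1, 1, 1, 0, 0, 0, 0, 0]))  # medium
```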
PEMAT-A/V was chosen as it is specifically designed for audiovisual materials, making it ideal for evaluating social media video content35,36. It assesses both understandability (questions 1–13) and actionability (questions 14–17), measuring the educational effectiveness of stroke health education videos36,37. PEMAT-A/V has been validated with good reliability and validity, demonstrating strong internal consistency (α = 0.71) and inter-rater reliability38. The tool uses a binary scoring system where 0 represents the criterion not met, 1 represents the criterion met and NA represents not applicable. The final PEMAT score is calculated by dividing the total points awarded by the number of applicable items (excluding NA responses), then multiplying by 100 to obtain a percentage score. For example, if a video scores 8 points out of 10 applicable items, the PEMAT score would be (8/10) × 100 = 80%. Scores above 70% on the PEMAT indicate effective educational content, while scores below 70% suggest unsatisfactory educational effectiveness35. Detailed scoring criteria are presented in Supplementary Table S1.
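The PEMAT percentage calculation above, including the exclusion of NA items from both numerator and denominator, can be sketched as follows. The function name and the `None`-for-NA encoding are illustrative assumptions, not part of the published tool.

```python
def pemat_score(ratings):
    """Compute a PEMAT-A/V percentage score from per-item ratings:
    1 = criterion met, 0 = criterion not met, None = not applicable (NA).
    NA items are excluded from both numerator and denominator."""
    applicable = [r for r in ratings if r is not None]
    if not applicable:
        raise ValueError("no applicable items to score")
    return 100.0 * sum(applicable) / len(applicable)

# Worked example from the text: 8 points over 10 applicable items -> 80%
ratings = [1] * 8 + [0] * 2 + [None] * 3  # 13 items, 3 marked NA
print(pemat_score(ratings))  # 80.0
```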
To evaluate content completeness, this study divided the content into nine key sections including definition, symptoms, etiology, assessment (medical history, physical examination, laboratory tests), treatment, prognosis, complications, risk factors and prevention. Five sections (definition, symptoms, assessment, treatment and risk factors) were directly derived from the National Clinical Guidelines on Stroke published by the Royal College of Physicians, while the other four sections (etiology, prognosis, complications and prevention) were determined by the research team based on video content characteristics39. Each section used binary coding (0 = not mentioned, 1 = mentioned). The specific stroke code book is available in Supplementary Table S2.
Assessment procedure
Before the formal assessment, both evaluators (two neurologists with rich clinical experience) received standardized training on using these tools and conducted pilot testing on 10% of the sample (23 videos) to ensure a consistent understanding of the assessment criteria. Inter-rater reliability during pilot testing demonstrated excellent agreement, with all Kappa coefficients exceeding 0.8, the predetermined reliability threshold40.
Two evaluators independently assessed all videos and recorded their results in separate standardized spreadsheets. For HONCode, evaluators independently assigned scores (0 = non-compliance, 1 = compliance) based on predefined criteria for each principle. For PEMAT-A/V, evaluators applied the binary scoring system according to the tool’s standard criteria. For content analysis, researchers constructed a standardized analysis framework, with two evaluators independently analyzing all videos.
To ensure assessment quality, evaluators compared results to identify discrepancies after independent assessment. When disagreements occurred and Kappa values fell below the preset reliability threshold (Kappa ≥ 0.8)40, a psychiatrist specializing in stroke reviewed the videos in question and facilitated consensus discussions. All assessments demonstrated adequate inter-rater consistency, with Kappa coefficients ranging from 0.84 to 0.87 for PEMAT-A/V, 0.82 to 0.85 for HONCode and 0.88 to 0.92 for content analysis.
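For readers unfamiliar with the reliability statistic used here, Cohen's Kappa corrects raw agreement for chance agreement derived from each rater's marginal frequencies. A minimal stdlib sketch, assuming two lists of categorical codes for the same set of videos (the study itself used standard statistical software, not this code):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's Kappa for two raters' categorical judgements,
    e.g. binary compliance codes assigned to the same videos."""
    if len(rater_a) != len(rater_b) or not rater_a:
        raise ValueError("raters must score the same non-empty item set")
    n = len(rater_a)
    # Observed agreement: proportion of items coded identically
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement expected from each rater's marginal frequencies
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    if p_e == 1:
        return 1.0
    return (p_o - p_e) / (1 - p_e)
```

Perfect agreement yields Kappa = 1.0, while agreement no better than chance yields 0; the study's threshold of 0.8 corresponds to near-perfect agreement on conventional benchmarks.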
Statistical analysis
Statistical analyses were performed using SPSS 29.0, with normality assessed by the Shapiro–Wilk test. For non-normally distributed metrics, pairwise comparisons utilized the Mann–Whitney U test, while overall comparisons employed the Kruskal–Wallis test. The research team compared HONCode compliance and PEMAT-A/V scores across platforms and conducted post hoc tests for significant differences. To evaluate relationships between user engagement metrics (likes, collections, comments, shares) and information quality indicators (PEMAT-A/V understandability and actionability, HONCode scores, and content completeness), Spearman’s rank correlation analysis was applied. A significance level of P < 0.05 was used for all statistical tests.
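Spearman's rank correlation, used above for the engagement-quality analysis, is simply the Pearson correlation computed on ranks, with ties assigned average ranks. A minimal stdlib sketch of the computation, for illustration only (the study used SPSS, and function names here are hypothetical):

```python
def _avg_ranks(values):
    """Assign 1-based ranks, averaging the ranks of tied values."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of tied positions i..j, 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman's rho: Pearson correlation of the rank vectors."""
    rx, ry = _avg_ranks(x), _avg_ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5
```

A perfectly monotone increasing relationship gives rho = 1, a monotone decreasing one gives rho = -1, and values near 0 (like most of the coefficients reported in this study) indicate little rank-order association.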
Ethical considerations
This study analyzed public content on the BiliBili, Douyin and Xiaohongshu platforms, adhering to each platform’s service agreements and data usage regulations. The study collected and analyzed voluntarily published content without direct participant recruitment. Given the study’s nature, it received an ethics committee exemption from the Biomedical Ethics Committee of Southwest Medical University, confirming that analysis of public social media data does not require a comprehensive ethical review. We followed ethical guidelines for social media research by removing all identifying information and presenting results in aggregate and anonymized form to protect user privacy.
Results
Video selection and screening results
After screening, 227 videos were included in the analysis, with 58 from BiliBili, 88 from Douyin and 81 from Xiaohongshu. The screening process is shown in Fig. 1.
Flowchart of filtering stroke-related videos for further analysis. Flowchart depicting the screening process of videos from three social media platforms (BiliBili, Douyin, and Xiaohongshu) using the keyword “脑卒中” (stroke). The diagram shows the exclusion criteria and the final inclusion of 227 videos for data analysis.
Video characteristics and source distribution
Video characteristics varied significantly across platforms (Table 1). BiliBili videos were considerably longer and received higher engagement in collections and shares compared to Douyin and Xiaohongshu (P < 0.001).
Health professionals were the predominant content creators overall (53.3%), with the highest representation on Douyin (77.3%) compared to other platforms. Conversely, general users were more prevalent on BiliBili and Xiaohongshu than on Douyin (P < 0.001). Detailed statistical comparisons are provided in Supplementary Table S3.
Information quality
The data analysis shows that most videos (72.7%) met the medium compliance level of the HONCode criteria (3–5 principles met), with significant differences observed among platforms (P < 0.001) (Table S4). Douyin performed best in this respect (85.2%), significantly outperforming BiliBili (63.8%) and Xiaohongshu (65.4%). Regarding the PEMAT-A/V assessment, statistically significant differences were found among the three platforms for understandability (P = 0.032), with Xiaohongshu scoring lowest (median 0.73), worse than BiliBili (median 0.81) and Douyin (median 0.75). In terms of content completeness, significant differences also existed among platforms (P < 0.001), with Xiaohongshu scoring lowest (median 2.00), inferior to BiliBili (median 3.00) and Douyin (median 3.00). Detailed pairwise statistical comparisons between platforms are provided in Table S4. Visual comparisons of these quality indicators across platforms are presented in Fig. 2.
Information quality assessment across social media platforms. Comparison of information quality assessment indicators across BiliBili, Douyin, and Xiaohongshu platforms. (a) HONCode compliance levels showing the percentage of videos meeting low (0–2 principles), medium (3–5 principles), and high (6–8 principles) compliance standards. (b) PEMAT-A/V median scores for understandability and actionability dimensions across the three platforms.
Table 2 reveals significant differences among the three platforms in adherence to five key HONCode principles. These include principle 1 (health advice must come from professionals, P < 0.001), principle 2 (support patient self-management, P < 0.001), principle 3 (protect patient privacy, P = 0.003), principle 6 (provide contact information, P < 0.001) and principle 7 (identifying contributors, P < 0.001). Detailed pairwise statistical comparisons between platforms for all HONCode principles are provided in Supplementary Table S5.
Content analysis
Table 3 compares stroke-related video content metrics across the three platforms. Overall, treatment was mentioned most frequently (63.0%), followed by symptoms (55.1%), prevention (30.8%) and risk factors (26.9%). Assessment (15.4%) and complications (15.0%) were mentioned least frequently. For the assessment category, BiliBili (22.7%) performed significantly better than Xiaohongshu (4.9%), and Douyin also performed substantially better than Xiaohongshu (P = 0.004). Comprehensive statistical comparisons between platforms for all content categories are provided in Supplementary Table S6.
Correlation analysis
Table 4 shows no or minimal relationships between most information quality metrics and audience engagement measures. Overall, actionability demonstrated weak positive relationships with likes (r = 0.141, P = 0.034), collections (r = 0.207, P = 0.002) and shares (r = 0.204, P = 0.002). On the Xiaohongshu platform, HONCode scores showed a weak positive correlation with collections (r = 0.228, P = 0.041) and a weak negative correlation with comments (r = − 0.244, P = 0.028). Actionability on Xiaohongshu displayed weak positive associations with likes (r = 0.284, P = 0.010), collections (r = 0.345, P = 0.002) and shares (r = 0.334, P = 0.002). It should be noted that while these correlations reached statistical significance, the effect sizes were generally small (r = 0.141–0.345), representing weak correlations that explain only 2–12% of the variance, indicating limited practical significance.
Discussion
Principal findings
This study evaluated stroke-related videos across three social media platforms: BiliBili, Douyin and Xiaohongshu. Videos demonstrated moderate information quality overall, with Douyin’s information quality relatively higher. Most videos lacked proper references and evidence-based support. All platforms exhibited relatively high understandability but limited actionability, with Xiaohongshu scoring lowest in understandability. Regarding content, completeness was insufficient across platforms, particularly on Xiaohongshu. Videos mainly focused on stroke treatment and symptoms, while stroke assessment and complications received less attention. Finally, findings revealed minimal correlation between information quality and user engagement for health videos on social media platforms.
Information quality of health videos across platforms
This study found that stroke-related videos on BiliBili, Douyin, and Xiaohongshu demonstrated moderate overall information quality, similar to previous findings on lymphedema and gastroesophageal reflux41,42. However, our results differ from the unsatisfactory information quality reported in the Helicobacter pylori study on Chinese social media43. This difference may stem from varying assessment methods. The Helicobacter pylori study employed the GQS assessment tool, which provides only a single overall score through five grades, potentially overlooking the multifaceted nature of health information44. In contrast, our study utilized the HONCode tool, which evaluates eight specific dimensions, allowing for a more comprehensive assessment and potentially identifying quality aspects that a single-score tool might miss31. Furthermore, comparative analysis across platforms found that stroke videos on Douyin had significantly higher information quality than those on BiliBili and Xiaohongshu. This phenomenon may stem from the combined effects of multiple factors. First, the study found that healthcare professionals created 77.3% of videos on Douyin, while this proportion was only 20.7% on BiliBili and 50.6% on Xiaohongshu. Previous research on sinusitis and gastric cancer content also indicates that health education videos produced by healthcare professionals typically demonstrate higher information quality45,46. Second, differences in platform content moderation and creator incentive policies may also affect content quality47, and Douyin’s relatively comprehensive medical content verification mechanisms may promote the production of high-quality content48. Third, algorithm design and user demographic characteristics may also contribute to these platform differences. Platform recommendation algorithms may influence the visibility of different content types, potentially affecting how health information from various sources reaches users49,50. 
Additionally, platforms may attract different user populations with varying demographic characteristics and health information-seeking behaviors51,52, which could influence the types of content creators who choose to engage and the overall content landscape.
Regarding HONCode principles, the three platforms exhibited common characteristics. They performed best on HONCode Principle 1 (requiring health advice in videos to come from qualified professionals unless otherwise stated) but performed relatively unsatisfactorily on Principle 4 (requiring clear references) and Principle 5 (requiring scientific evidence to support claims). This aligns with previous research findings on fall prevention, lymphangioleiomyomatosis, and vitreoretinal surgery53,54,55. The limited citation practices and scientific foundation in these videos may affect content credibility and increase the possibility of sharing less accurate health information.
Videos across platforms exhibited favorable understandability with constraints in actionability, with findings similar to laryngeal carcinoma and diabetes medication studies56,57, but showing variations compared to bladder pain syndrome research58. This variance may stem from disease characteristics. Stroke, as an acute condition with complex pathophysiology, is more challenging to address in short videos compared to chronic conditions like bladder pain syndrome that have established self-management protocols. Stroke content on Xiaohongshu demonstrated lower understandability, showing differences from findings on non-small cell lung cancer information, possibly due to stroke’s interdisciplinary nature and terminology complexity59. Given stroke’s time-sensitive nature, this gap between understanding and actionability may particularly hinder viewers’ ability to respond effectively in emergencies where rapid intervention is critical. The PEMAT actionability scores in this study (median 0.67) reinforce this concern, indicating that while stroke videos effectively convey information, they fall below the 70% threshold for guiding viewers toward concrete actions. This actionability limitation is particularly significant for stroke care, where the transition from symptom recognition to appropriate response, such as using assessment tools like FAST, seeking emergency care, or implementing preventive behaviors, directly impacts patient outcomes and treatment timing.
This study found that stroke-related content generally lacks evidence transparency and source referencing. Previous research indicates that credible health information should maintain transparency and include reliable source references60,61; accordingly, this study recommends that video creators improve evidence transparency and provide source references for stroke-related content. Additionally, this study found that stroke videos across platforms have limitations in actionability. According to PEMAT standards, actionability refers to patients’ ability to identify specific actions from materials through clear, manageable steps35. This study suggests incorporating specific action guidelines with step-by-step instructions in videos to enhance actionability. Furthermore, this study found lower healthcare professional participation on BiliBili and Xiaohongshu compared to Douyin. Considering that previous research shows professionally created content demonstrates higher quality62, this study encourages greater healthcare professional participation, particularly on BiliBili and Xiaohongshu, to improve the overall content quality of stroke-related videos.
Content platform analysis
From a content completeness perspective, this study found that the content completeness of all three platforms was unsatisfactory. Xiaohongshu’s content completeness was lower than that of BiliBili and Douyin, possibly because the platform focuses more on personal experiences and lifestyle sharing. This positioning makes creators more inclined to present certain health information selectively rather than provide the comprehensive knowledge needed for disease management. Regarding content distribution, treatment was most frequently mentioned across all three platforms, followed by symptoms, similar to previous research on anal fissures and cryptorchidism63,64. Assessment and complications were mentioned least often. Proper assessment helps patients obtain accurate diagnoses, while understanding complications is essential for informed decision-making65. Patients may underestimate disease severity or ignore symptoms requiring immediate attention66. Therefore, this study recommends that video publishers improve content completeness, especially on Xiaohongshu, and that all platforms increase content about assessment and complications.
Information quality and engagement correlation
This study found that for stroke-related videos on the BiliBili, Douyin, and Xiaohongshu platforms, most information quality indicators (HONCode, PEMAT-A/V understandability, and content completeness) showed no correlation with user engagement, while only one indicator (PEMAT-A/V actionability) showed positive correlations with user engagement. However, these significant correlations were weak in strength (r = 0.141–0.345), suggesting limited practical importance. This finding indicates that high-quality health information content does not necessarily receive more user attention and interaction, with information quality having minimal overall influence on user engagement, a finding similar to previous research on laryngeal cancer, liver cancer, and gallstone disease28,52,67. Notably, the actionability of videos on Xiaohongshu showed positive correlations with user engagement indicators. This platform difference may stem from Xiaohongshu’s unique positioning, which emphasizes content practicality and lifestyle sharing, making its users more attentive to video actionability during interaction68.
This study found that high-quality health information content does not necessarily receive more user attention and interaction across platforms, with only Xiaohongshu showing positive correlations between actionability and user engagement due to its platform positioning that emphasizes practical content. Furthermore, previous research demonstrates that visually appealing elements, storytelling techniques, and interactive elements can significantly improve health information acceptance and user engagement69,70,71. Therefore, this study recommends that video publishers adopt visual design improvements, storytelling techniques, and interactive elements to enhance user engagement across all platforms, rather than merely adjusting content according to platform characteristics.
Limitations and future directions
First, our search strategy was limited to a single keyword and did not consider synonyms (e.g., colloquial terms), so relevant educational content using alternative terminology may have been missed. Future research should adopt multi-keyword search strategies that include synonyms and alternative expressions for more comprehensive coverage. Second, this study evaluated only Chinese platforms, which may limit the generalizability of the results; future research could conduct cross-linguistic and cross-cultural comparisons. Third, our video search was conducted on a single date, overlooking the dynamic nature of platform content updates and temporal trends in stroke-related content; multi-timepoint sampling strategies would capture temporal distribution characteristics and content evolution patterns more comprehensively. Fourth, this study did not analyze user-generated content such as comments, which could provide additional insight into how content is received; future research could incorporate such engagement data. Fifth, although we used new accounts to minimize personalization effects, platform algorithms may still have influenced content ranking and visibility; future research could employ more effective methods to mitigate algorithmic influences.
Conclusion
This study evaluated stroke-related videos on BiliBili, Douyin, and Xiaohongshu. The findings reveal that these videos provide information of moderate quality but generally lack scientific evidence support, and that their actionability and completeness require improvement. Content focuses primarily on treatment while neglecting assessment and complications. Douyin demonstrates higher information quality, while Xiaohongshu performs inadequately in understandability and completeness. This study recommends that video publishers enhance scientific evidence transparency by providing reliable references, improve actionability by incorporating specific, step-by-step action guidance, and strengthen completeness by covering assessment, complications, and other under-represented aspects. Healthcare professionals should actively create health education videos across platforms, especially on BiliBili and Xiaohongshu. Video publishers can also increase information acceptance by improving visual design, storytelling, and interactive elements rather than relying solely on platform-specific adjustments.
Data availability
Due to platform data policies and privacy concerns, raw short video data are not publicly available, but anonymized content analysis results supporting this study’s conclusions can be obtained from the corresponding author upon reasonable request.
References
GBD 2019 Stroke Collaborators. Global, regional, and national burden of stroke and its risk factors, 1990–2019: a systematic analysis for the Global Burden of Disease Study 2019. Lancet Neurol. 20, 795–820. https://doi.org/10.1016/S1474-4422(21)00252-0 (2021).
Feigin, V. L., Norrving, B. & Mensah, G. A. Global burden of stroke. Circ. Res. 120, 439–448. https://doi.org/10.1161/circresaha.116.308413 (2017).
Owolabi, M. O. et al. Pragmatic solutions to reduce the global burden of stroke: a world stroke Organization-Lancet neurology commission. Lancet Neurol. 22, 1160–1206. https://doi.org/10.1016/s1474-4422(23)00277-6 (2023).
Zhao, Y. & Zhang, J. Consumer health information seeking in social media: a literature review. Health Inf. Libr. J. 34, 268–283. https://doi.org/10.1111/hir.12192 (2017).
Zhong, Y., Liu, W., Lee, T. Y., Zhao, H. & Ji, J. Risk perception, knowledge, information sources and emotional States among COVID-19 patients in Wuhan, China. Nurs. Outlook. 69, 13–21. https://doi.org/10.1016/j.outlook.2020.08.005 (2021).
Moorhead, S. A. et al. A new dimension of health care: systematic review of the uses, benefits, and limitations of social media for health communication. J. Med. Internet Res. 15, e85. https://doi.org/10.2196/jmir.1933 (2013).
Chou, W. Y. S., Oh, A. & Klein, W. M. P. Addressing health-related misinformation on social media. JAMA. 320, 2417–2418. https://doi.org/10.1001/jama.2018.16865 (2018).
Suarez-Lledo, V. & Alvarez-Galvez, J. Prevalence of health misinformation on social media: systematic review. J. Med. Internet Res. 23, e17187. https://doi.org/10.2196/17187 (2021).
Chen, X. et al. Health literacy and use and trust in health information. J. Health Commun. 23, 724–734. https://doi.org/10.1080/10810730.2018.1511658 (2018).
Bilibili Inc. Announces third quarter 2024 financial results. http://ir.bilibili.com (2024).
Business of Apps. TikTok revenue and usage statistics. https://www.businessofapps.com/data/tik-tok-statistics/ (2024).
Chen, Z. & Srijinda, P. The impact of AI-generated content on content consumption habits of Chinese social media users through Xiaohongshu application. Edelweiss Appl. Sci. Technol. 8, 1504–1516. https://doi.org/10.55214/25768484 (2024).
Gao, Q., Abel, F., Houben, G. J. & Yu, Y. A comparative study of users' microblogging behavior on Sina Weibo and Twitter. User Model. User-Adapt Interact. 22, 169–189. https://doi.org/10.1007/978-3-642-31454-4_8 (2012).
TechNode. Douyin and Kuaishou audiences are increasingly overlapping: report. https://technode.com/2019/08/06/douyin-and-kuaishou-audiences-are-increasingly-overlapping-report/ (6 Aug 2019).
Men, L. R. & Tsai, W. H. S. Social messengers as the new frontier of organization-public engagement: A WeChat study. Public. Relat. Rev. 44, 419–429. https://doi.org/10.1016/j.pubrev.2017.10.004 (2018).
Liu, H. et al. Assessment of the reliability and quality of breast cancer related videos on TikTok and bilibili: cross-sectional study in China. Front. Public. Health. 11, 1296386. https://doi.org/10.3389/fpubh.2023.1296386 (2024).
Zeng, F. et al. Douyin and bilibili as sources of information on lung cancer in China through assessment and analysis of the content and quality. Sci. Rep. 14, 20604. https://doi.org/10.1038/s41598-024-70640-y (2024).
Chen, Y. et al. The quality and reliability of short videos about thyroid nodules on bilibili and tiktok: cross-sectional study. Digit. Health. 10, 20552076241288831. https://doi.org/10.1177/20552076241288831 (2024).
Kong, W., Song, S., Zhao, Y. C., Zhu, Q. & Sha, L. TikTok as a health information source: assessment of the quality of information in diabetes-related videos. J. Med. Internet Res. 23, e30409. https://doi.org/10.2196/30409 (2021).
Wang, L., Li, Y., Gu, J. & Xiao, L. A quality analysis of thyroid cancer videos available on TikTok. Front. Public. Health. 11, 1049728. https://doi.org/10.3389/fpubh.2023.1049728 (2023).
Liu, X. et al. TikTok and bilibili as health information sources on gastroesophageal reflux disease: an assessment of content and its quality. Dis. Esophagus. 37, doae081. https://doi.org/10.1093/dote/doae081 (2024).
Guan, J. L. et al. Videos in short-video sharing platforms as sources of information on colorectal polyps: cross-sectional content analysis study. J. Med. Internet Res. 26, e51655. https://doi.org/10.2196/51655 (2024).
Szmuda, T. et al. YouTube as a source of patient information for stroke: a content-quality and an audience engagement analysis. J. Stroke Cerebrovasc. Dis. 29, 105065. https://doi.org/10.1016/j.jstrokecerebrovasdis.2020.105065 (2020).
Özdemir, Z. & Acar, E. YouTube as a source of recognizing acute stroke; progress in 2 years. BMC Public. Health. 24, 2208. https://doi.org/10.1186/s12889-024-19710-4 (2024).
Tunkl, C. et al. Are digital social media campaigns the key to raise stroke awareness in low- and middle-income countries? A study of feasibility and cost-effectiveness in Nepal. PLoS One. 18, e0291392. https://doi.org/10.1371/journal.pone.0291392 (2023).
Kim, Y., Huang, J. & Emery, S. Garbage in, garbage out: data collection, quality assessment and reporting standards for social media data use in health research, infodemiology and digital disease detection. J. Med. Internet Res. 18, e41. https://doi.org/10.2196/jmir.4738 (2016).
Fernandez-Llatas, C., Traver, V., Borras-Morell, J. E., Martinez-Millana, A. & Karlsen, R. Are health videos from hospitals, health organizations, and active users available to health consumers? An analysis of diabetes health video ranking in YouTube. Comput. Math. Methods Med. 2017, 8194940. https://doi.org/10.1155/2017/8194940 (2017).
Sun, F., Zheng, S. & Wu, J. Quality of information in gallstone disease videos on tiktok: cross-sectional study. J. Med. Internet Res. 25, e39162. https://doi.org/10.2196/39162 (2023).
Mueller, S. M. et al. Fiction, falsehoods, and few facts: cross-sectional study on the content-related quality of atopic eczema-related videos on YouTube. J. Med. Internet Res. 22, e15599. https://doi.org/10.2196/15599 (2020).
Powers, W. J. et al. Guidelines for the early management of patients with acute ischemic stroke: 2019 update to the 2018 guidelines for the early management of acute ischemic stroke: A guideline for healthcare professionals from the American Heart Association/American Stroke Association. Stroke. 50, e344–e418. https://doi.org/10.1161/STR.0000000000000211 (2019).
Boyer, C., Selby, M., Scherrer, J. R. & Appel, R. D. The health on the net code of conduct for medical and health websites. Comput. Biol. Med. 28, 603–610. https://doi.org/10.1016/s0010-4825(98)00037-7 (1998).
Eysenbach, G., Powell, J., Kuss, O. & Sa, E. R. Empirical studies assessing the quality of health information for consumers on the world wide web: a systematic review. JAMA 287, 2691–2700. https://doi.org/10.1001/jama.287.20.2691 (2002).
Goobie, G. C. et al. YouTube videos as a source of misinformation on idiopathic pulmonary fibrosis. Ann. Am. Thorac. Soc. 16, 572–579. https://doi.org/10.1513/AnnalsATS.201809-644OC (2019).
Stellefson, M. et al. YouTube as a source of chronic obstructive pulmonary disease patient education. Chronic Respir. Dis. 11, 61–71. https://doi.org/10.1177/1479972314525058 (2014).
Shoemaker, S. J., Wolf, M. S. & Brach, C. Development of the patient education materials assessment tool (PEMAT): a new measure of understandability and actionability for print and audiovisual patient information. Patient Educ. Couns. 96, 395–403. https://doi.org/10.1016/j.pec.2014.05.027 (2014).
Agency for Healthcare Research and Quality. The Patient Education Materials Assessment Tool (PEMAT) and User’s Guide. https://www.ahrq.gov/health-literacy/patient-education/pemat.html (2020).
Lee, H. J. et al. Assessing of the audiovisual patient educational materials on diabetes care with PEMAT. Public. Health. 167, 82–88. https://doi.org/10.1016/j.puhe.2018.11.023 (2019).
Vishnevetsky, J., Walters, C. B. & Tan, K. S. Interrater reliability of the patient education materials assessment tool (PEMAT). Patient Educ. Couns. 101, 490–496. https://doi.org/10.1016/j.pec.2017.09.003 (2018).
Intercollegiate Stroke Working Party. National Clinical Guidelines for Stroke, 4th edn (The Royal College of Physicians, 2012).
McHugh, M. L. Interrater reliability: the kappa statistic. Biochem. Med. (Zagreb). 22, 276–282. https://doi.org/10.11613/BM.2012.031 (2012).
Zhou, X. et al. The reliability and quality of short videos as health information of guidance for lymphedema: a cross-sectional study. Front. Public. Health. 12, 1472583. https://doi.org/10.3389/fpubh.2024.1472583 (2025).
Liang, Y. et al. Video quality assessment and analysis of gastroesophageal reflux disease on TikTok and bilibili: cross-sectional study. J. Multidiscip. Healthc. 17, 5927–5939. https://doi.org/10.2147/JMDH.S485781 (2024).
Du, R. C., Zhang, Y., Wang, M. H., Lu, N. H. & Hu, Y. TikTok and bilibili as sources of information on Helicobacter pylori in china: a content and quality analysis. Helicobacter 28, e13007. https://doi.org/10.1111/hel.13007 (2023).
Bernard, A. et al. A systematic review of patient inflammatory bowel disease information resources on the world wide web. Am. J. Gastroenterol. 102, 2070–2077. https://doi.org/10.1111/j.1572-0241.2007.01325.x (2007).
Dimitroyannis, R. et al. A social media quality review of popular sinusitis videos on TikTok. Otolaryngol. Head Neck Surg. 5, 1456–1466. https://doi.org/10.1002/ohn.688 (2024).
Hu, R. H. et al. Quality and accuracy of gastric cancer related videos in social media videos platforms. BMC Public. Health. 22, 2025. https://doi.org/10.1186/s12889-022-14417-w (2022).
Huang, Q. et al. The effect of monetary incentives on health care social media content: study based on topic modeling and sentiment analysis. J. Med. Internet Res. 25, e45312. https://doi.org/10.2196/45312 (2023).
Li, C. et al. Quality and educational content of Douyin and TikTok short videos on early screening of rectal cancer. JGH Open. 7, 1005–1012. https://doi.org/10.1002/jgh3.13005 (2023).
Metzler, H. & Garcia, D. Social drivers and algorithmic mechanisms on digital media. Psychol. Sci. 17, 17456916231185057. https://doi.org/10.1177/17456916231185057 (2024).
Song, L. et al. Study on the impact of recommendation algorithms on user perceived stress and health management behaviour in short video platforms. Policy Internet. 60, 1–18. https://doi.org/10.1016/j.ipm.2024.103347 (2024).
Silver, R. A. & Johnson, C. Health information seeking behavior on social networking sites and Self-Treatment: pilot survey study. Online J. Public. Health Inf. 15, e51984. https://doi.org/10.2196/51984 (2023).
Zhang, Y. et al. Online health information seeking behavior: A systematic review. Int. J. Environ. Res. Public Health. 18, 22798. https://doi.org/10.3390/ijerph182222798 (2021).
Kutluturk, I., Aykut, V. & Durmus, E. The use of online videos for vitreoretinal surgery training: a comprehensive analysis. Beyoglu Eye J. 7, 9–17. https://doi.org/10.14744/bej.2022.46338 (2022).
Yang, X. et al. The reliability, functional quality, understandability, and actionability of fall prevention content in youtube: an observational study. BMC Geriatr. 22, 654. https://doi.org/10.1186/s12877-022-03330-x (2022).
Wilkens, F. M. et al. YouTube-videos for patient education in lymphangioleiomyomatosis? Respir. Res. 23, 103. https://doi.org/10.1186/s12931-022-02022-9 (2022).
Liu, Z. et al. YouTube/Bilibili/TikTok videos as sources of medical information on laryngeal carcinoma: cross-sectional content analysis study. BMC Public. Health. 24, 1594. https://doi.org/10.1186/s12889-024-19077-6 (2024).
Zhang, B., Kalampakorn, S., Powwattana, A., Sillabutra, J. & Liu, G. Oral diabetes medication videos on douyin: analysis of information quality and user comment attitudes. JMIR Form. Res. 8, e57720. https://doi.org/10.2196/57720 (2024).
Morra, S. et al. YouTube as a source of information on bladder pain syndrome: a contemporary analysis. Neurourol. Urodyn. 41, 237–245. https://doi.org/10.1002/nau.24802 (2022).
Feng, X., Xu, Y., Yang, Y., Zheng, Y. & Li, J. Assessment of non-small cell lung cancer online videos in china: a cross-sectional study on quality. Health Inf. J. 31, 14604582251328930. https://doi.org/10.1177/14604582251328930 (2025).
Kington, R. S. et al. Identifying credible sources of health information in social media: principles and attributes. NAM Perspect. https://doi.org/10.31478/202107a (2021).
MedlinePlus. Evaluating health information. https://medlineplus.gov/evaluatinghealthinformation.html (2024).
Lobo, E. H. et al. Utilization of social media communities for caregiver information support in stroke recovery: an analysis of content and interactions. PLoS One. 17, e0262919. https://doi.org/10.1371/journal.pone.0262919 (2022).
Chen, Z., Pan, S. & Zuo, S. TikTok and YouTube as sources of information on anal fissure: a comparative analysis. Front. Public. Health. 10, 1000338. https://doi.org/10.3389/fpubh.2022.1000338 (2022).
Sun, Y., Liu, X., Zhang, X., Xu, Q. & Li, A. Health information analysis of cryptorchidism-related short videos: analyzing quality and reliability. Digit. Health. 11, 20552076251317578. https://doi.org/10.1177/20552076251317578 (2025).
Singh, H., Schiff, G. D., Graber, M. L., Onakpoya, I. & Thompson, M. J. The global burden of diagnostic errors in primary care. BMJ Qual. Saf. 26, 484–494. https://doi.org/10.1136/bmjqs-2016-005401 (2017).
Elwyn, G. et al. Shared decision making: a model for clinical practice. J. Gen. Intern. Med. 27, 1361–1367. https://doi.org/10.1007/s11606-012-2077-6 (2012).
Zheng, S. et al. Quality and reliability of liver cancer-related short Chinese videos on TikTok and bilibili: cross-sectional content analysis study. J. Med. Internet Res. 25, e47210. https://doi.org/10.2196/47210 (2023).
Xingin Information Technology. Xiaohongshu official website. https://www.xiaohongshu.com/en?source=official (2024).
Paudyal, P. et al. Storytelling to communicate public health messages during the COVID-19 pandemic: a systematic review. J. Public. Health Res. 11, 22799036221129650. https://doi.org/10.1177/22799036221129650 (2022).
Drew, S. E., Duncan, R. E. & Sawyer, S. M. Visual storytelling: a beneficial but challenging method for health research with young people. Qual. Health Res. 20, 1677–1688. https://doi.org/10.1177/1049732310377455 (2010).
Ngai, C. S. B. et al. Grappling with the COVID-19 health crisis: content analysis of communication strategies and their effects on public engagement on social media. J. Med. Internet Res. 22, e21360. https://doi.org/10.2196/21360 (2020).
Acknowledgements
We extend our sincere gratitude to the two neurologists who participated in the assessment work for this research, as well as the stroke research scholar who provided professional guidance when scoring inconsistencies occurred. Their profound expertise and generous dedication of time provided a solid foundation for the scientific rigor of this work.
Funding
The authors acknowledge financial support for the research, authorship, and/or publication of this article. This study was supported by the Sichuan Province University Student Innovation and Entrepreneurship Training Plan Project (S202310632129).
Supplementary Information is available for this paper.
Correspondence and requests for materials should be addressed to G.L. (email: shoulderliugang@163.com) or B.L.Z. (email: baby670178@gmail.com).
Author information
Authors and Affiliations
Contributions
K.R.Z. Conceptualized and designed the study, managed the data, conducted formal analysis and investigation, undertook project management, created visualizations, and wrote the original draft. Z.X.L. contributed to writing the original draft, conducted the investigation, and assisted with data management. Y.S.L. contributed to writing the original draft and conducted the investigation. G.L. managed the data and reviewed and edited the manuscript. R.Y.H. and Y.X.G. both participated in reviewing and editing the manuscript. B.L.Z. participated in study conceptualization, assisted with data management and investigation, contributed to methodology design, supervised the entire project, and participated in manuscript review and editing. All authors reviewed the manuscript.
Corresponding authors
Ethics declarations
Competing interests
The authors declare no competing interests.
Additional information
Publisher’s note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Supplementary Information
Below is the link to the electronic supplementary material.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.
About this article
Cite this article
Zhang, K., Li, Z., Lin, Y. et al. BiliBili, Douyin and Xiaohongshu as health information platforms for stroke: evaluating information quality and content. Sci Rep 15, 37705 (2025). https://doi.org/10.1038/s41598-025-21535-z