Abstract
Cardiovascular disease (CVD) is a leading global health concern with rising morbidity and mortality. Despite medical advancements, effective self-management remains challenging due to patients’ limited health literacy. While digital health interventions offer promising solutions, their efficacy depends on eHealth literacy. The mobile eHealth Literacy Scale (m-eHEALS) was developed to assess this construct, but its validity and applicability in CVD populations require further evaluation. This study evaluated the psychometric properties of the m-eHEALS in CVD patients using Rasch analysis. Patients from cardiology and neurology departments in two tertiary hospitals were consecutively enrolled (February 20–May 4, 2023). Demographic data and m-eHEALS scores were collected. Rasch analysis assessed unidimensionality, item fit, reliability, item difficulty, item characteristic curves (ICCs), and differential item functioning (DIF). A total of 302 patients’ data were analyzed. The scale exhibited good psychometric properties, with a person separation index of 4.02 (person reliability = 0.94) and an item separation index of 8.96 (item reliability = 0.99). The unidimensionality analysis revealed a total explained variance of 70.5%; however, the first component residual eigenvalue was 3.1, suggesting potential multidimensionality. Item fit analysis identified five items (N4, N5, N6, N10, and N11) with misfit statistics outside the acceptable range (Infit/Outfit MNSQ 0.49–1.55). ICC analysis confirmed good item discrimination for most items, though deviations were observed for N10 and N11. DIF analysis indicated gender-based differences for N12 (0.71 logits harder for females). The m-eHEALS demonstrated strong psychometric properties for assessing eHealth literacy in individuals with CVD, though some items exhibited poor fit with the data. Future research should explore subgroup characteristics to further enhance the scale’s accuracy and broaden its applicability across different patient groups.
Introduction
Cardiovascular disease (CVD) remains a leading cause of morbidity and mortality globally1,2, with prevalent cases rising from 601 million in 2020 to 621 million in 2021, and deaths from 12.4 million in 1990 to 20.5 million in 20213. This escalating burden underscores its profound impact on individuals, families, and societies, emphasizing the urgent need for effective management strategies. The World Health Organization (WHO) emphasizes that reducing CVD mortality effectively necessitates improved self-management behaviors. Key strategies include lifestyle modifications, adherence to prescribed medications, and proactive management of risk factors. However, a widespread lack of disease-specific health knowledge significantly impairs patients’ ability to adopt these essential self-care practices4.
Rapid advancements in information technology have transformed healthcare, leading to extensive online health information and innovative digital tools for CVD management5,6,7,8. These tools offer functionalities like personalized plans, real-time monitoring, and educational resources9,10,11. However, many CVD patients struggle to fully leverage these digital resources, primarily due to limited eHealth literacy, defined as the ability to seek, find, understand, appraise, and apply health information from electronic sources12. A survey targeting Chinese CVD patients revealed a qualified eHealth literacy rate of merely 13.91%13. This highlights an urgent need to address this gap, as sufficient eHealth literacy is critical for effective self-management through digital resources. Furthermore, higher eHealth literacy has been linked to improved health outcomes across three domains: psychosocial, chronic disease, and physical outcomes14. Therefore, identifying and utilizing validated and reliable assessment tools is crucial for evaluating and improving eHealth literacy in this population.
Several assessment tools effectively evaluate eHealth literacy15, with the eHealth Literacy Scale (eHEALS) being the most widely utilized general assessment tool16, adapted across multiple languages17,18. Despite its widespread adoption, the rapid evolution of digital health technologies necessitates continuous evaluation and refinement of eHEALS to ensure its relevance and effectiveness19,20. Therefore, employing advanced psychometric methods that accurately assess the scale’s validity and reliability is crucial. Traditionally, classical test theory (CTT) has been widely used for scale evaluation; however, it treats summed raw scores as linear, interval-level measures, a problematic assumption for ordinal data such as Likert responses, which are common in health literacy assessments21,22. In contrast, item response theory (IRT) approaches, such as Rasch analysis, offer a more precise method for evaluating scale properties, including item difficulty and discrimination, thereby ensuring the scale’s suitability for specific populations23,24,25. Studies evaluating eHEALS with these advanced methods have revealed inconsistencies in its factor structure, with some research supporting a one-factor model while other work proposes a two-factor structure26,27,28. Furthermore, issues with item fit and variability in item difficulty across diverse populations, including Chinese, Italian, and German adaptations, have been identified29,30,31,32. These discrepancies and methodological concerns underscore the necessity for more refined and robust eHealth literacy assessment tools, particularly in diverse contexts.
These limitations highlight the necessity to refine eHealth literacy assessment tools to account for the multidimensional nature of eHealth literacy and its variability across diverse populations. While eHEALS has been foundational, its issues with unidimensional structure and limited cultural adaptability reduce its effectiveness in specific contexts like CVD management. To address these, the mobile version of eHEALS (m-eHEALS)33, based on the e-Health Literacy Theory and Empowerment Theory, was developed. m-eHEALS consists of 12 items across three dimensions—Self-perception, Information Acquisition, and Interactive Evaluation—providing a more comprehensive assessment of eHealth literacy in mobile environments. Specifically, m-eHEALS was culturally adapted for Chinese contexts, leveraging the widespread availability of mobile internet to enhance engagement and accessibility by tailoring features to modern digital behaviors. Although m-eHEALS has primarily been evaluated among younger populations, its broader application, especially among CVD patients, remains underexplored.
Therefore, this study aims to validate the applicability of the m-eHEALS among patients with CVD using Rasch analysis. We focus on a rigorous exploration of the scale’s item fit, thereby enhancing our understanding of its effectiveness in assessing eHealth literacy among these patients.
Methods
Ethical considerations
This study was conducted in accordance with the ethical guidelines set out by the Declaration of Helsinki and was approved by the Medical Ethics Committee of Capital Medical University (Approval No. 2015SY45). Informed consent was obtained from all participants prior to data collection, including consent for the use of their anonymized data in research. To ensure participant privacy, all personally identifiable information, including names, contact details, and unique identifiers, was removed before analysis. Data collection and storage complied with the Personal Information Protection Law of the People’s Republic of China, and all necessary measures were taken to maintain confidentiality and privacy. This study does not involve any identifiable images or data. If such information is included in future research, explicit consent will be obtained and documented.
Study design and participants
This cross-sectional study utilized a consecutive inclusion approach to recruit patients from the neurology and cardiology departments of two tertiary comprehensive hospitals between February 20 and May 4, 2023. These hospitals were selected due to their substantial patient volumes and the diversity of conditions managed, making them optimal settings for the study of patients with CVD. Patients were recruited during routine outpatient visits or inpatient admissions. Trained research staff, who had undergone comprehensive instruction on the study protocol, ethical guidelines, and data collection procedures, approached eligible participants to explain the study’s objectives and obtain informed consent. The training emphasized participant recruitment strategies, voluntary participation, and the maintenance of data confidentiality.
Eligibility for participation required a diagnosis of at least one of the following conditions: ischemic heart disease (ICD-10 codes I20–I25), cerebrovascular disease (I60–I69), or hypertensive disease (I10–I15). The use of ICD-10 codes ensured diagnostic clarity and standardization across all participants. The inclusion criteria were: (1) adequate comprehension and written or verbal communication ability; and (2) informed consent and voluntary agreement to participate in the study. The exclusion criteria were: (1) mental illness, cognitive impairment, or intellectual disability confirmed by clinical records; (2) coma or an acute/critical illness stage indicated by clinical symptoms and vital signs; and (3) life-threatening diseases such as cancer or severe organ failure documented in medical records.
Instrument
m-eHEALS was the primary instrument used in this study. The scale was adapted for the Chinese context by Yingmin Wu and colleagues in 2017, based on Simon’s Empowerment Theory, to ensure its relevance to the linguistic and cultural environment in China33. The full text of the m-eHEALS is provided in Supplementary File 1. This scale comprises three dimensions, including 12 items. Each item is rated on a 5-point Likert scale, ranging from “strongly disagree” (1 point) to “strongly agree” (5 points). The total score range is 12 to 60 points, with higher scores indicating greater eHealth literacy. Specifically, the scale is structured into three dimensions: the self-perception dimension (items 1–3), the information access dimension (items 4–8), and the interaction judgment dimension (items 9–12). The overall Cronbach’s alpha coefficient for the scale in our study was 0.91.
After providing informed consent, participants were asked to complete the m-eHEALS questionnaire. In addition to the scale, we also collected basic demographic information, including age, gender, educational background, individual monthly income, and course of CVD.
Statistical analysis
Data entry, dataset management, and descriptive statistical analyses were performed using IBM SPSS Statistics for Windows, Version 26.0 (IBM Corp., Armonk, NY). Initial data validation and cleaning procedures included checks for outliers, duplicates, and missing values, with data integrity ensured by two authors (ZY and LY) independently reviewing the dataset. Cases with missing data were excluded from the analysis using listwise deletion. Categorical variables were described using frequencies and percentages, whereas continuous variables were presented as means ± standard deviations (SD) for normally distributed data or as medians (interquartile range, IQR) for skewed data, as determined by the Shapiro-Wilk test of normality. A significance level of p < 0.05 was adopted for all statistical tests.
Rasch analysis was conducted using Winsteps software (Version 3.72.3) to assess the psychometric properties of the m-eHEALS. Dimensionality was evaluated through Principal Component Analysis (PCA) of the residuals, focusing on the alignment between observed variance and the expected model variance. A first-residual-component eigenvalue within the commonly cited range of 1.40 to 2.10 was considered indicative of unidimensionality, supporting the premise that a single factor adequately explains the data. Item and person fit were examined using Infit and Outfit Mean Square (MNSQ) statistics, with acceptable thresholds set at 0.5 to 1.5 for both metrics. Reliability was evaluated using the person separation index and person reliability, with benchmarks set at 1.5 and 0.7, respectively. A person-item map was generated to visually represent the distribution of item difficulties in relation to respondents’ ability levels; this visualization helps identify gaps or clustering in item difficulty and confirms whether the items cover the full spectrum of the construct being measured. The map was scrutinized for evidence of underrepresented or redundant item difficulty levels. Furthermore, Item Characteristic Curves (ICCs) were analyzed to examine how each item discriminates between respondents at varying levels of the latent trait and whether items operate as intended across the entire ability range. Differential Item Functioning (DIF) was assessed across gender-based subgroups to evaluate whether any items functioned differently for male and female respondents, which could indicate potential bias in the instrument; DIF analysis ensures that the scale operates equivalently across subgroups, allowing for meaningful comparisons.
Logistic regression was used to compare the expected responses for male and female participants at the same level of overall ability, with a contrast of > 0.5 logits considered indicative of significant DIF. This threshold reflects meaningful differences in item performance across subgroups. Items with significant DIF were identified and carefully reviewed to assess their impact on the validity of the scale.
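For reference, the analyses above can be anchored to the underlying measurement model. Assuming the Andrich rating-scale parameterization that Winsteps commonly applies to Likert-type items (an assumption on our part, as the exact model specification is not detailed here), the log-odds of adjacent response categories take the form:

```latex
\ln\!\left(\frac{P_{nik}}{P_{ni(k-1)}}\right) = \theta_n - \delta_i - \tau_k
```

where \(\theta_n\) is the eHealth literacy of person \(n\), \(\delta_i\) the difficulty of item \(i\), \(\tau_k\) the threshold between categories \(k-1\) and \(k\) of the 5-point scale, and \(P_{nik}\) the probability that person \(n\) endorses category \(k\) on item \(i\). The fit statistics, separation indices, and DIF contrasts are all computed from the residuals of this model.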
Sample size
Rasch analysis requires an adequately large sample size to ensure stable and reliable estimates of item parameters, particularly when calibrating psychometric instruments. Established guidelines recommend a minimum sample size of 250 participants to achieve reliable parameter estimation and to avoid issues such as null categories or misordered item thresholds, which are more likely to occur with smaller sample sizes34. Linacre (2002) further emphasized that for polytomous items, a minimum sample size of 250 is essential for the accurate estimation of item difficulty parameters. In this study, a total of 302 participants were included, exceeding the recommended threshold and providing sufficient power for reliable parameter estimation35.
Results
Participants
A total of 302 participants with CVD were included in this study (Table 1). The mean age of the participants was 59.3 years (SD = 11.4), with a male predominance of 65.9% and a high proportion of married individuals (85.8%). Most participants (66.9%) had attained senior high school education or less. Additionally, 30.1% of the participants had been diagnosed with CVD for less than one year. Regarding the specific diagnoses within the CVD cohort, 142 (47.0%) had ischemic heart disease, 144 (47.7%) had cerebrovascular disease, and 197 (65.2%) had hypertensive disease. It is important to note that these categories are not mutually exclusive, as many patients presented with co-morbid conditions.
Rasch analysis
Dimensionality
Principal Component Analysis (PCA) results indicated that the total explained variance of the m-eHEALS exceeded 50.0%; however, the first residual component eigenvalue was greater than 2.10, suggesting that the scale may contain further dimensions. As shown in Table 2, each dimension was analyzed as a subscale and compared to the overall result. The results demonstrated that the unidimensionality of individual dimensions was slightly better than that of the total scale. Notably, the eigenvalue for the interaction judgment dimension approached 2.10, remaining within the acceptable range, likely reflecting the inherent subjectivity associated with this dimension.
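As a point of reference, the first-contrast eigenvalue reported by Winsteps can be approximated from the standardized Rasch residuals. A minimal numpy sketch follows; the function name and the synthetic data are illustrative only, not part of the study:

```python
import numpy as np

def first_contrast_eigenvalue(residuals):
    """Largest eigenvalue of the item-by-item correlation matrix of
    standardized Rasch residuals (persons x items), in item units."""
    corr = np.corrcoef(residuals, rowvar=False)
    # eigvalsh returns eigenvalues in ascending order; take the last one
    return float(np.linalg.eigvalsh(corr)[-1])

# Purely random residuals (a perfectly unidimensional scale) give a
# first-contrast eigenvalue well below the ~2.0 benchmark.
rng = np.random.default_rng(42)
noise = rng.standard_normal((302, 12))   # 302 persons, 12 items, as in this study
print(first_contrast_eigenvalue(noise))
```

Against this yardstick, the value above 2.10 observed for the full scale points to residual structure beyond the primary dimension, consistent with the subscale analysis in Table 2.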
Item fit
The Infit and Outfit Mean Square (MNSQ) statistics were used to evaluate the fit of individual items to the Rasch model. These fit statistics help determine how well each item aligns with the model’s expectations. Infit MNSQ is particularly sensitive to responses from individuals whose ability levels are close to the item’s difficulty, while Outfit MNSQ is more influenced by extreme responses or outliers. Ideal fit values for both Infit and Outfit MNSQ are considered between 0.5 and 1.5. Values outside this range may indicate a misfit, suggesting that the item may not align well with the model and might require further refinement.
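The two statistics can be made concrete with a small sketch. For simplicity this uses the dichotomous Rasch model (the polytomous case analyzed here replaces the Bernoulli variance with the rating-scale category variance, but the weighting logic is identical); the person/item parameters and responses below are invented for illustration:

```python
import numpy as np

def mnsq_fit(X, theta, delta):
    """Infit/Outfit mean-square fit statistics per item for a
    dichotomous Rasch model. X: persons x items 0/1 response matrix."""
    expected = 1.0 / (1.0 + np.exp(-(theta[:, None] - delta[None, :])))
    variance = expected * (1.0 - expected)
    z2 = (X - expected) ** 2 / variance                 # squared standardized residuals
    outfit = z2.mean(axis=0)                            # unweighted: sensitive to outliers
    infit = ((X - expected) ** 2).sum(axis=0) / variance.sum(axis=0)  # information-weighted
    return infit, outfit

theta = np.array([-1.0, 0.0, 1.0, 2.0])                 # person abilities (logits)
delta = np.array([-0.5, 0.5])                           # item difficulties (logits)
X = np.array([[0, 0], [1, 0], [1, 1], [1, 1]])          # responses matching expectations
infit, outfit = mnsq_fit(X, theta, delta)
```

Values near 1.0 indicate data behaving as the model predicts; deterministic Guttman-like data such as the example above produce values below 1 (overfit), which is why the 0.5 lower bound matters as much as the 1.5 upper bound.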
This analysis was performed across the three dimensions of the m-eHEALS scale (Table 3). In our study, the Infit and Outfit MNSQ values for all items within the self-perception dimension indicated an optimal fit to the model. In the information access dimension, item N4 exhibited an Infit MNSQ value above 1.5, signaling a misfit to the model. Item N5 had an Outfit MNSQ value below 0.5, though its Infit MNSQ remained within acceptable limits. For item N6, both Infit and Outfit MNSQ values were below 0.5, suggesting a narrow response range. In the interaction judgment dimension, item N10 displayed an Infit MNSQ value exceeding 1.5, while item N11 had an Outfit MNSQ value below 0.5, indicating deviations from model expectations. Despite these individual discrepancies, the overall mean Infit and Outfit MNSQ values for the scale (0.98 and 0.97, respectively) suggested that the m-eHEALS demonstrated a generally good fit to the Rasch model.
Reliability
The scale demonstrated excellent reliability, with a person separation index of 4.02 and a person reliability index of 0.94, indicating strong differentiation between individuals’ eHealth literacy levels. The item separation index was also high at 8.96, with an item reliability index of 0.99, suggesting a stable and well-spread item difficulty hierarchy. Dimension-specific analyses further supported the construct validity of the scale, with each dimension demonstrating good reliability. The self-perception dimension achieved a person separation index of 2.18 and a reliability index of 0.83. Similarly, the information access dimension reported a person separation index of 2.56 and a reliability index of 0.88. The interaction judgment dimension also showed robust performance, with a person separation index of 2.66 and a reliability index of 0.88.
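The separation and reliability indices reported above are algebraically linked: for a separation index G, reliability R = G² / (1 + G²). A quick check against the reported values (a verification sketch only; the function name is ours):

```python
def separation_to_reliability(g):
    """Convert a Rasch separation index G to its reliability coefficient,
    R = G^2 / (1 + G^2)."""
    return g ** 2 / (1 + g ** 2)

print(round(separation_to_reliability(4.02), 2))  # person separation 4.02 -> 0.94
print(round(separation_to_reliability(8.96), 2))  # item separation 8.96 -> 0.99
```

Both conversions reproduce the reported reliability coefficients, confirming the internal consistency of the figures in this section.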
Item difficulty (hierarchy)
The person-item map (Fig. 1) illustrated the alignment between person ability and item difficulty levels. Easier items, such as N5 and N8, corresponded to lower ability levels, while more challenging items, like N10, were aligned with higher ability levels. Although the items were distributed along the continuum, some gaps in the scale were observed. Notably, the range of person abilities exceeded the span of item difficulties, with the items clustering predominantly in the middle of the scale. This resulted in a lack of items that adequately corresponded to the abilities of individuals at both the higher and lower ends of the ability continuum.
Person-item map of the 12 m-eHEALS scale items in the Rasch analysis (n = 302). This figure illustrates the distribution of person ability levels (left) and item difficulty levels (right) on a continuum measured in logit units. Each “#” = 3 persons and each “.” = 1 to 2 persons; M = mean person ability or mean item difficulty; S = one standard deviation from the mean; T = two standard deviations from the mean. The vertical line is a continuum representing person ability (left side) and item difficulty (right side), plotted in logit units; both increase from bottom to top.
Item characteristic curve (ICC)
The ICCs (Fig. 2) demonstrated that items N2, N3, N6, N7, and N9 closely aligned with expected curves, indicating well-calibrated levels of difficulty and discrimination. While some items exhibited slight deviations from their expected ICCs, these variations remained within acceptable limits, affirming the scale’s overall robustness in measuring a range of eHealth literacy abilities.
Item characteristic curve of the 12 m-eHEALS scale items in the Rasch Analysis (n = 302). The red curve represents the expected ICC according to the Rasch model, while the blue line depicts the observed ICC. The “X” marks the average of the measurements and scores of the observations in the interval. A good model fit is indicated when the “X” on the blue line is at or very close to the red curve. The green and gray lines represent the two-sided 95% confidence intervals, calculated as approximately ± 1.96 standard errors from the red line in the vertical direction.
Differential item functioning (DIF)
DIF analysis revealed a significant gender-related discrepancy for item N12, with a difference of −0.71 logits, suggesting that women found this item more challenging compared to men.
Discussion
This study represents a novel application of Rasch analysis to assess the psychometric properties of the m-eHEALS among patients with CVD. Through rigorous evaluation, we confirmed the scale’s overall efficacy in capturing subtle differences in eHealth literacy while also identifying key areas for refinement. Specifically, distinct dimensions within the scale were identified, reflecting the unique informational and digital interaction needs of this patient population. This comprehensive evaluation contributes meaningfully to the evolving field of eHealth literacy assessment, highlighting the importance of developing targeted and reliable tools to support CVD management in the digital era.
The dimensionality analysis revealed evidence of potential multidimensionality in the m-eHEALS, as indicated by the first component’s residual eigenvalue exceeding 2.10. This finding suggests that the m-eHEALS may capture multiple aspects of eHealth literacy, consistent with prior research emphasizing its multidimensional nature33.
Most items demonstrated a good fit to the Rasch model, affirming that the m-eHEALS items are generally suitable for assessing perceptions of eHealth literacy in the CVD population. All items within the self-perception dimension aligned well with model expectations, indicating that most CVD patients recognized the Internet as a highly beneficial resource for accessing health information (item N1). This is likely attributable to the internet’s convenience and abundance, which facilitate access to health information and promote positive health behaviors36,37. However, while patients may recognize the value of digital resources, their ability to engage effectively with these tools depends heavily on their eHealth literacy skills. In the information access dimension, item N4 exhibited an Infit MNSQ value exceeding 1.5, suggesting a potential misalignment between individuals’ actual abilities to retrieve online resources and their perceived competence in this area. This misalignment may hinder meaningful engagement, as patients who overestimate their skills may struggle to navigate digital platforms effectively, while those who underestimate their abilities may avoid using these resources altogether. Similarly, the Outfit MNSQ value for item N5 was below 0.5, pointing to the presence of a distinct subgroup among CVD patients with unique perceptions or behaviors regarding online health resources. Such heterogeneity underscores the need for tailored interventions to improve both eHealth literacy and engagement with digital resources. For item N6, both Infit and Outfit MNSQ values were below 0.5, which may reflect the widespread adoption of smartphones38. This ubiquity may have limited the item’s ability to differentiate effectively between individuals with varying levels of proficiency in accessing online health resources.
In the interaction judgment dimension, item N10 exhibited an Infit MNSQ value exceeding 1.5, suggesting a potential divergence between individuals’ self-reported intentions to actively participate in health-related online communities and their actual behaviors. This is consistent with the findings of Nguyen29, who observed that existing scales often fail to include items that comprehensively capture the full range of eHealth literacy, particularly at its extremes. Conversely, item N11 demonstrated an Outfit MNSQ value below 0.5, potentially reflecting challenges in evaluating the quality of online information or inconsistencies between individuals’ actual behaviors and their self-assessed capabilities. This finding is consistent with Paige26, who identified challenges in measuring higher-order eHealth literacy skills, such as effective information management and critical evaluation. These challenges underscore the importance of addressing eHealth literacy as a foundational skill essential for fostering meaningful engagement with digital health resources.
The limited number of items in each dimension may have contributed to increased variability in statistical results, leading to notable fluctuations in MNSQ values for certain items. However, when all items were analyzed collectively, the stability of the statistical results improved substantially, attributable to the larger dataset. Despite this, specific items, most notably N4, which assesses patients’ ability to effectively gather and utilize online health resources, displayed marked deviations from model expectations. These variations may be influenced by factors such as patients’ current health status, proficiency in Internet use, and information filtering capabilities37. These findings underscore the complexity of scale application and highlight the need for a nuanced understanding of item fit issues.
The high reliability indicators observed at both the person and item levels were noteworthy, demonstrating the scale’s effectiveness in differentiating individuals with varying levels of eHealth literacy. This outcome aligns with previous research26, which similarly reported high reliability for eHEALS. Furthermore, the consistency of item difficulty levels across the scale is in line with findings from earlier studies31. Reliability analysis further confirmed the scale’s strong overall stability and internal consistency. However, item analysis revealed that item difficulties clustered at a moderate level, indicating a lack of items suited to respondents at the highest and lowest ends of the eHealth literacy continuum. This is consistent with the findings of a previous study29, which noted that existing scales often fail to capture the full range of eHealth literacy, particularly at its extremes.
The ICC curves revealed that, for certain items, the observed patterns aligned closely with the expected curves, suggesting that the perceived difficulty levels of these items correspond well with participants’ self-reported eHealth literacy. However, these patterns represent participants’ perceptions of their ability, rather than their actual ability to perform digital health tasks. This points to a potential disconnect between perceived and actual eHealth literacy, as noted in previous research27. Therefore, any inference about individuals’ actual eHealth literacy should be made with caution. In practice, perceived ability may not always align with real-world performance, as factors such as self-assessment bias or misunderstanding of task difficulty can influence participants’ judgments. This aligns with previous research31, which emphasized the importance of distinguishing between self-perceived and actual eHealth literacy in assessments.
The DIF analyses revealed a significant gender-related disparity for item N12, with females experiencing greater difficulty than males, as evidenced by a 0.71-logit difference. While previous studies have generally reported that gender does not significantly influence eHealth literacy39,40, this observed discrepancy may stem from various factors, including differences in sample characteristics, measurement tools, cultural influences, and methodological variations. A previous study also identified gender-based DIF in its validation of the Italian version of eHEALS31. It is important to note that a DIF magnitude of 0.71 logits is often considered to be on the boundary of, or below, the threshold for practical or clinical significance41. Nonetheless, the observed DIF in item N12 is a concern for ensuring equitable measurement across genders, as it suggests the item may function differently for males and females. Therefore, our findings imply that future applications and refinements of the scale should carefully consider this item’s performance, including its potential revision, or the implementation of specific guidelines to ensure fair and accurate eHealth literacy measurement.
When comparing the m-eHEALS to other established eHealth literacy scales, such as the original eHEALS, the European Health Literacy Survey questionnaire (HLS-EU)42, the eHealth Literacy Assessment toolkit (eHLA)43, and the Digital Health Literacy Instrument (DHLI)44, the m-eHEALS demonstrates similar reliability and validity in assessing general eHealth literacy. Notably, the m-eHEALS addresses some of the limitations of the original eHEALS, reducing the severity of ceiling and floor effects and thereby capturing a broader range of eHealth literacy levels. The eHLA provides a multifaceted assessment by incorporating both health and digital literacy dimensions, which allows for a more comprehensive evaluation, albeit at the cost of increased complexity. Similarly, the DHLI has been validated in older populations and is recognized for its user-friendliness and superior ability to differentiate eHealth literacy levels, owing to its clear structure and specific item descriptions. Despite these strengths, the m-eHEALS remains a reliable and adaptable tool, particularly suited to mobile health interventions and clinical settings focused on chronic disease management. Its streamlined format facilitates ease of use, though ongoing refinements are recommended to enhance its sensitivity and ensure it effectively measures the multifaceted nature of eHealth literacy across diverse populations.
This study has several limitations. First, while our findings robustly validate the m-eHEALS for the target CVD population, the generalizability of the findings should be interpreted with caution. Our sample was recruited from specific tertiary comprehensive hospitals, and its demographic and clinical profile (including the inherent heterogeneity of CVD subtypes within this setting) may not fully represent all CVD populations across different healthcare levels or geographic regions. Second, as the m-eHEALS primarily assesses patients’ perceptions of their eHealth literacy, rather than their actual ability in using digital health tools, caution is needed when inferring real-world skills. Discrepancies may exist between perceived and actual abilities, which is a known limitation of self-reported measures.
This study significantly advances the psychometric understanding of the m-eHEALS in cardiovascular disease patients, offering critical implications for both clinical application and future research. For clinical practice and public health initiatives, our rigorous validation of the m-eHEALS establishes it as a reliable and valid instrument for assessing eHealth literacy in this vulnerable population. Its mobile-friendly design and confirmed efficacy enable healthcare providers to routinely screen patients, facilitating tailored interventions to enhance self-management and improve adherence to complex CVD treatment plans in the evolving digital health landscape.
Our findings also point to several avenues for future research. Addressing the psychometric issues identified here, namely item misfit and potential multidimensionality, is a key area for subsequent work; such research should aim to develop or modify items so that the scale captures the eHealth literacy continuum more comprehensively. Furthermore, the observed gender-related disparity for item N12 underscores the need to systematically investigate demographic and clinical factors influencing eHealth literacy across diverse patient subgroups, and to consider appropriate modifications to the scale itself to ensure equitable measurement. Integrating objective assessments alongside self-reported measures would also help bridge the gap between perceived and actual digital health skills. Finally, to ascertain its long-term utility, longitudinal studies are warranted to evaluate the m-eHEALS’ sensitivity to change in eHealth literacy and its effectiveness in tracking intervention outcomes.
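To give an intuition for the practical meaning of a DIF contrast such as the 0.71-logit shift reported for item N12, the following minimal sketch uses the dichotomous Rasch model for simplicity (the m-eHEALS itself is polytomous, and the numbers here are purely illustrative, not reanalysis of the study data): it shows how making an item 0.71 logits harder for one subgroup lowers that subgroup's model-predicted endorsement probability at the same ability level.

```python
import math

def rasch_prob(theta, difficulty):
    """Dichotomous Rasch model: probability that a person with ability
    `theta` (logits) endorses an item of the given difficulty (logits)."""
    return 1.0 / (1.0 + math.exp(-(theta - difficulty)))

# Hypothetical illustration: an item of average difficulty (0 logits)
# that is 0.71 logits harder for the focal group, mirroring the
# gender DIF contrast reported for N12.
theta = 0.0                           # a person of average ability
p_ref = rasch_prob(theta, 0.0)        # reference group
p_focal = rasch_prob(theta, 0.71)     # focal group (item shifted harder)
print(round(p_ref, 3), round(p_focal, 3))  # prints: 0.5 0.33
```

At the same ability level, the endorsement probability drops from 0.50 to about 0.33, which is why a DIF contrast of this size is generally treated as substantively meaningful rather than a rounding artifact.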
Conclusion
In conclusion, this study provides robust evidence for the validity and reliability of the m-eHEALS among people with CVD while identifying key areas for improvement. Addressing these areas can enhance the scale’s accuracy and applicability for this patient group, better meeting the unique eHealth literacy needs of CVD patients, ultimately improving the assessment and delivery of patient-centered care in the digital era, and enabling more targeted and effective healthcare interventions for this significant population. Future research should build on these findings to further refine and validate the scale, potentially extending its application to other populations and settings.
Data availability
The dataset used and analyzed in this study is available from the corresponding author upon reasonable request.
References
Roth, G. A. et al. Global burden of cardiovascular diseases and risk factors, 1990–2019: update from the GBD 2019 study. J. Am. Coll. Cardiol. 76 (25), 2982–3021 (2020). PubMed PMID: 33309175; PMCID: PMC7755038.
Wang, Z., Ma, L., Liu, M., Fan, J. & Hu, S. Summary of the 2022 report on cardiovascular health and diseases in China. Chin. Med. J. 136 (24), 2899–2908. https://doi.org/10.1097/cm9.0000000000002927 (2023). Epub 2023/11/29.
Lindstrom, M. et al. Global burden of cardiovascular diseases and risks collaboration, 1990–2021. J. Am. Coll. Cardiol. 80 (25), 2372–2425. https://doi.org/10.1016/j.jacc.2022.11.001 (2022).
Puijk-Hekman, S., van Gaal, B. G., Bredie, S. J., Nijhuis-van der Sanden, M. W. & van Dulmen, S. Self-management support program for patients with cardiovascular diseases: user-centered development of the tailored, web-based program Vascular View. JMIR Res. Protoc. 6 (2), e18. https://doi.org/10.2196/resprot.6352 (2017). PubMed PMID: 28179214; PMCID: PMC5322199.
Awrahman, B. J., Aziz Fatah, C. & Hamaamin, M. Y. A review of the role and challenges of big data in healthcare informatics and analytics. Comput. Intell. Neurosci. 2022, 5317760. https://doi.org/10.1155/2022/5317760 (2022). PubMed PMID: 36210978; PMCID: PMC9536942.
Lear, S. A. et al. Assessment of an interactive digital health-based self-management program to reduce hospitalizations among patients with multiple chronic diseases: a randomized clinical trial. JAMA Netw. Open 4 (12), e2140591. https://doi.org/10.1001/jamanetworkopen.2021.40591 (2021).
Liu, Y. Y. et al. Effectiveness of internet-based self-management interventions on pulmonary function in patients with chronic obstructive pulmonary disease: A systematic review and meta-analysis. J. Adv. Nurs. 79 (8), 2802–2814. https://doi.org/10.1111/jan.15693 (2023). Epub 2023/05/04.
Miao, Y. et al. Effectiveness of eHealth interventions in improving medication adherence among patients with cardiovascular disease: systematic review and meta-analysis. J. Med. Internet. Res. 26, e58013. https://doi.org/10.2196/58013 (2024).
Zhu, Y., Zhao, Y. & Wu, Y. Effectiveness of mobile health applications on clinical outcomes and health behaviors in patients with coronary heart disease: a systematic review and meta-analysis. Int. J. Nurs. Sci. 11 (2), 258–275 (2024). PubMed PMID: 38707688; PMCID: PMC11064579.
Cruz-Cobo, C. et al. Efficacy of a mobile health app (eMOTIVA) regarding compliance with cardiac rehabilitation guidelines in patients with coronary artery disease: randomized controlled clinical trial. JMIR Mhealth Uhealth 12, e55421. https://doi.org/10.2196/55421 (2024).
Cruz-Ramos, N. A. et al. mHealth apps for self-management of cardiovascular diseases: a scoping review. Healthcare (Basel) 10 (2), 322. https://doi.org/10.3390/healthcare10020322 (2022). PubMed PMID: 35206936; PMCID: PMC8872534.
Norman, C. D. & Skinner, H. A. eHealth literacy: essential skills for consumer health in a networked world. J. Med. Internet. Res. 8 (2), e9. https://doi.org/10.2196/jmir.8.2.e9 (2006). Epub 2006/07/27.
Zhao, Y. H., Chang, H. & Wu, Y. Current status and related factors of eHealth literacy in patients with cardiovascular and cerebrovascular diseases. Chin. J. Mod. Nurs. 30 (25), 3480–3486. https://doi.org/10.3760/cma.j.cn115682-20231224-02768 (2024).
Yuen, E. et al. Digital health literacy and its association with sociodemographic characteristics, health resource use, and health outcomes: rapid review. Interact. J. Med. Res. 13, e46888. https://doi.org/10.2196/46888 (2024). PubMed PMID: 39059006; PMCID: PMC11316163.
Gao, Y. et al. Evaluation of the Chinese version of the electronic health literacy scale by the COSMIN operational guidelines. Chin. J. Gerontol. 40 (09), 1968–1973. https://doi.org/10.3969/j.issn.1005-9202.2020.09.054 (2020).
Norman, C. D. & Skinner, H. A. eHEALS: the eHealth literacy scale. J. Med. Internet. Res. 8 (4), e27. https://doi.org/10.2196/jmir.8.4.e27 (2006). Epub 2007/01/11.
Guo, S. J. et al. eHEALS health literacy scale Chineseization and applicability exploration. China Health Educ. 29 (02), 106–108. https://doi.org/10.16168/j.cnki.issn.1002-9982.2013.02.019 (2013).
Chung, S., Park, B. K. & Nahm, E. S. The Korean eHealth literacy scale (K-eHEALS): reliability and validity testing in younger adults recruited online. J. Med. Internet. Res. 20 (4), e138. https://doi.org/10.2196/jmir.8759 (2018). Epub 2018/04/22.
Lee, J., Lee, E. H. & Chae, D. eHealth literacy instruments: systematic review of measurement properties. J. Med. Internet. Res. 23 (11), e30644. https://doi.org/10.2196/30644 (2021). PubMed PMID: 34779781; PMCID: PMC8663713.
van der Vaart, R., Drossaert, C. H., de Heus, M., Taal, E. & van de Laar, M. A. Measuring actual eHealth literacy among patients with rheumatic diseases: a qualitative analysis of problems encountered using health 1.0 and health 2.0 applications. J. Med. Internet. Res. 15 (2), e27. https://doi.org/10.2196/jmir.2428 (2013). Epub 2013/02/13.
Hambleton, R. K., Robin, F. & Xing, D. Item response models for the analysis of educational and psychological test data. Handb. Appl. Multivar. Stat. Math. Model. 553–581 https://doi.org/10.1016/b978-012691360-6/50020-3 (2000).
Embretson, S. E. & Reise, S. P. Item Response Theory (Psychology Press, 2013).
Wright, B. D. & Masters, G. N. Rating Scale Analysis (MESA Press, 1982).
Tesio, L. Measuring behaviours and perceptions: Rasch analysis as a tool for rehabilitation research. J. Rehabil. Med. 35 (3), 105–115. https://doi.org/10.1080/16501970310010448 (2003). PubMed PMID: 12809192.
Liu, J. et al. Rasch analysis of Morse fall scale among the older adults with cognitive impairment in nursing homes. Geriatr. Nurs. 56, 94–99. https://doi.org/10.1016/j.gerinurse.2024.02.004 (2024).
Paige, S. R., Krieger, J. L., Stellefson, M. & Alber, J. M. eHealth literacy in chronic disease patients: an item response theory analysis of the eHealth literacy scale (eHEALS). Patient Educ. Couns. 100 (2), 320–326 (2017). PubMed PMID: 27658660; PMCID: PMC5538024.
Stellefson, M. et al. Reliability and validity of the telephone-based eHealth literacy scale among older adults: cross-sectional survey. J. Med. Internet. Res. 19 (10), e362. https://doi.org/10.2196/jmir.8481 (2017).
Richtering, S. S. et al. Examination of an eHealth literacy scale and a health literacy scale in a population with moderate to high cardiovascular risk: Rasch analyses. PloS One. 12 (4), e0175372. https://doi.org/10.1371/journal.pone.0175372 (2017).
Nguyen, J. et al. Construct validity of the eHealth literacy scale (eHEALS) among two adult populations: a Rasch analysis. JMIR Public Health Surveill. 2 (1), e24. https://doi.org/10.2196/publichealth.4967 (2016).
Ma, Z. & Wu, M. The psychometric properties of the Chinese eHealth literacy scale (C-eHEALS) in a Chinese rural population: Cross-Sectional validation study. J. Med. Internet. Res. 21 (10), e15720. https://doi.org/10.2196/15720 (2019). Epub 2019/10/24.
Diviani, N., Dima, A. L. & Schulz, P. J. A psychometric analysis of the Italian version of the eHealth literacy scale using item response and classical test theory methods. J. Med. Internet. Res. 19 (4), e114. https://doi.org/10.2196/jmir.6749 (2017). PubMed PMID: 28400356; PMCID: PMC5405289.
Juvalta, S., Kerry, M. J., Jaks, R., Baumann, I. & Dratva, J. Electronic health literacy in Swiss-German parents: Cross-Sectional study of eHealth literacy scale unidimensionality. J. Med. Internet. Res. 22 (3), e14492. https://doi.org/10.2196/14492 (2020). Epub 2020/03/14.
Wu, Y. M., Zhang, M. E. & Zhu, L. Y. Consumer e-health literacy assessment and its application in mHealth. Consum. Econ. 33 (01), 90–96 (2017).
Chen, W. H. et al. Is Rasch model analysis applicable in small sample size pilot studies for assessing item characteristics? An example using PROMIS pain behavior item bank data. Qual. Life Research: Int. J. Qual. Life Aspects Treat. Care Rehabilitation. 23 (2), 485–493. https://doi.org/10.1007/s11136-013-0487-5 (2014). Epub 2013/08/06.
Linacre, J. M. Optimizing rating scale category effectiveness. J. Appl. Meas. 3(1), 85–106 (2002).
Hansen, A. H. et al. Inequalities in the use of eHealth between socioeconomic groups among patients with type 1 and type 2 diabetes: cross-sectional study. J. Med. Internet. Res. 21 (5), e13615. https://doi.org/10.2196/13615 (2019). PubMed PMID: 31144669; PMCID: PMC6658320.
Ramadas, A., Chan, C. K. Y., Oldenburg, B., Hussein, Z. & Quek, K. F. Randomised-controlled trial of a web-based dietary intervention for patients with type 2 diabetes: changes in health cognitions and glycemic control. BMC Public Health 18 (1), 716. https://doi.org/10.1186/s12889-018-5640-1 (2018). PubMed PMID: 29884161; PMCID: PMC5994015.
Montag, C., Lachmann, B., Herrlich, M. & Zweig, K. Addictive features of social media/messenger platforms and freemium games against the background of psychological and economic theories. Int. J. Environ. Res. Public Health 16 (14), 2612. https://doi.org/10.3390/ijerph16142612 (2019). PubMed PMID: 31340426; PMCID: PMC6679162.
Duan, Y. W., Chen, M. Y. & Lu, M. M. Study on e-health literacy and influencing factors of elderly patients with coronary heart disease. Shanghai Nurs. 22 (11), 37–40 (2022).
Ma, J., Mai, L. X., Huang, Q., Wen, H. & Zeng, Z. Y. A study on the current status of e-health literacy and its influencing factors among people at high risk of cardiovascular diseases in Qingxiu District, Nanning City. J. Guangxi Med. Univ. 40 (02), 321–326. https://doi.org/10.16190/j.cnki.45-1211/r.2023.02.022 (2023).
Rouquette, A., Vanhaesebrouck, A., Hardouin, J., Sébille, V. & Coste, J. Differential item functioning (DIF) in composite health measurement scale: recommendations for characterizing DIF with meaningful consequences within the Rasch model framework. PloS One 14 (4), e0215073. https://doi.org/10.1371/journal.pone.0215073 (2019).
Bergman, L., Nilsson, U., Dahlberg, K., Jaensson, M. & Wångdahl, J. Validity and reliability of the Swedish versions of the HLS-EU-Q16 and HLS-EU-Q6 questionnaires. BMC Public. Health. 23 (1), 724. https://doi.org/10.1186/s12889-023-15519-9 (2023).
Karnoe, A., Furstrand, D., Christensen, K. B., Norgaard, O. & Kayser, L. Assessing competencies needed to engage with digital health services: development of the eHealth literacy assessment toolkit. J. Med. Internet. Res. 20 (5), e178. https://doi.org/10.2196/jmir.8347 (2018). Epub 2018/05/12.
Xie, L. & Mo, P. K. H. Comparison of eHealth literacy scale (eHEALS) and digital health literacy instrument (DHLI) in assessing electronic health literacy in Chinese older adults: a mixed-methods approach. Int. J. Environ. Res. Public Health 20 (4), 3293. https://doi.org/10.3390/ijerph20043293 (2023).
Author information
Contributions
The study was conceptualized by ZY, LY, and WY. Data curation was performed by ZY, MY, LJ, HQ, and ND. Formal analysis was conducted by ZY. The methodology was devised by ZY, LY, and WY. Supervision was provided by WY. Validation was carried out by ZY, LY, and WY. The original draft was written by ZY and LY, and the paper was reviewed and edited by WY.
Ethics declarations
Competing interests
The authors declare no competing interests.
Additional information
Publisher’s note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Supplementary Information
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.
About this article
Cite this article
Zhao, Y., Luo, Y., Miao, Y. et al. Rasch analysis of the mobile version of the eHealth literacy scale for cardiovascular disease patients. Sci Rep 15, 34093 (2025). https://doi.org/10.1038/s41598-025-14198-3