Abstract
The rapid proliferation of healthcare service applications (apps) makes it challenging for consumers to determine the best one for their needs, prompting the Korean government to introduce an accreditation program to verify app safety. This study aims to identify the factors influencing the choice of healthcare service apps among physicians, patients with chronic diseases, and healthy individuals. We conducted a choice-based conjoint analysis with six factors (number of studies on effectiveness, frequency of information delivery, cybersecurity and data safety, user satisfaction, accreditation, and costs). In total, 1,093 people (407 healthy individuals, 589 patients, and 97 physicians) completed the online survey. Across all groups, cybersecurity and data safety were the most important preference factors (healthy individuals: β = 2.217, 95% CI 2.066–2.338; patients: β = 1.569, 95% CI 1.481–1.658; physicians: β = 1.111, 95% CI 0.908–1.314). All groups were willing to pay approximately $12 more for high cybersecurity and data safety compared to low.
Introduction
With smartphone technology developing rapidly, many healthcare service applications (apps) have penetrated the market1,2,3,4. By using these healthcare apps, people are expected to be able to manage diseases better and achieve their health goals5,6,7,8. According to IQVIA’s Digital Health Trends 2021 report, more than 90,000 healthcare apps were launched globally in 2020, bringing the total number to over 350,0009. Key categories include mental health and behavioral disorders (22%), diabetes (15%), heart and circulatory system (10%), digestive system (8%), and respiratory system (7%). However, despite the increased availability, 83% of the apps were installed fewer than 5000 times9.
The sheer number of available healthcare apps leaves people unsure which ones they should use. Psychological research suggests that people cannot make appropriate decisions when a large amount of information or options is available10,11,12. Similarly, in the healthcare market, people may find it challenging to determine which service is appropriate amid a flood of services13,14,15. Several initiatives, such as the National Health Service app library in the UK and the Health Navigator app library in New Zealand, have attempted to address this problem16,17. The Korean government also launched an accreditation program to certify healthcare apps in 2022 to ensure people’s safety by addressing potential risks such as the misuse of healthcare information, provision of inaccurate information, hacking, and privacy breaches. This program has certified twelve healthcare service apps, and the accreditation is expected to be expanded.
For the meaningful use of the accreditation program, verifying the factors people consider when choosing a healthcare service app is essential18,19,20,21. Previous studies have explored these factors in different contexts. For instance, a conjoint analysis conducted in Germany identified the factors affecting the public’s use of mental health apps22. Similarly, a study in the UK used conjoint analysis to identify the factors healthcare professionals consider when choosing digital healthcare services23. While gathering opinions from a wide range of stakeholders is essential for effective policymaking, these studies are limited in that each surveyed only healthcare professionals or only the public.
This study aims to identify the preferences of physicians, patients with chronic disease, and healthy individuals when selecting healthcare service apps. Previous studies have shown that patients have ‘experiential knowledge’ gained through personal experience of their illness and symptoms, while physicians possess ‘professional knowledge’ based on theoretical and scientific understanding24,25. Healthy individuals lack both experiential knowledge and professional expertise. We hypothesize that these differences in knowledge may influence app preferences. Specifically, physicians with professional knowledge will prioritize the validated effectiveness of healthcare services, patients with chronic disease and experiential knowledge will place importance on user satisfaction, and healthy individuals with limited information will rely on accreditation. Based on the findings of this study, we propose policy implications for the accreditation program to support the more effective use of healthcare apps.
Results
Characteristics of participants
Overall, as shown in Table 1, 1,093 people participated in the study: healthy individuals (n = 407), patients with chronic disease (n = 589), and physicians (n = 97). For healthy individuals, the number of participants was evenly distributed, as they were allocated proportionally by age and sex. Most patients with chronic disease were in their 40s (26.8%) and 50s (31.6%), while most physicians were in their 20s (34.0%) and 30s (62.9%). Both patients with chronic disease and physicians were predominantly male (72% and 93.8%, respectively).
Among patients with chronic disease, the duration of illness was less than five years for 338 patients (57.4%), between five and ten years for 113 patients (19.2%), and more than ten years for 133 patients (22.6%). 419 patients (71.1%) were taking medication, and 389 patients (66.0%) had a family history of hypertension or diabetes. 141 patients (23.9%) experienced complications, including cardiovascular disease (n = 82), cerebrovascular disease (n = 30), neuropathy (n = 23), nephropathy (n = 22), and others (n = 30).
Preference for healthcare service app
The preference for healthcare service apps across all groups is summarized in Table 2. Among healthy individuals, cybersecurity and data safety were considered the most important factors when choosing healthcare service apps (β = 2.217, 95% CI 2.066 to 2.338). The number of studies on effectiveness was also regarded as an important factor (β = 1.678, 95% CI 1.546 to 1.811). User satisfaction notably influenced app choice (β = 1.118, 95% CI 1.008 to 1.227). In addition, accreditation was found to have a meaningful influence (β = 0.788, 95% CI 0.700 to 0.875). However, they did not prefer receiving excessive health information (β = −0.295, 95% CI −0.410 to −0.181).
For patients with chronic disease, cybersecurity and data safety were also the most important factors when choosing healthcare service apps (β = 1.569, 95% CI 1.481 to 1.658). The number of studies on effectiveness was the second most influential factor in their decision-making (β = 1.383, 95% CI 1.282 to 1.484). User satisfaction was also regarded as important (β = 0.866, 95% CI 0.782 to 0.951). In addition, accreditation contributed to app selection (β = 0.717, 95% CI 0.650 to 0.785). However, they did not prefer receiving excessive health information (β = −0.216, 95% CI −0.303 to −0.130).
Among physicians, cybersecurity and data safety were considered the most critical factors (β = 1.111, 95% CI 0.908 to 1.314). The number of studies on effectiveness was also highly valued (β = 1.083, 95% CI 0.851 to 1.315). User satisfaction played an important role in app selection (β = 0.865, 95% CI 0.669 to 1.062). In addition, accreditation was noted as a contributing factor (β = 0.504, 95% CI 0.345 to 0.662). Physicians also did not prefer the provision of excessive health information, although this effect was not statistically significant (β = −0.098, 95% CI −0.295 to 0.100).
WTP for healthcare service app
We estimated relative preferences in terms of willingness to pay (WTP) (Table 3, Fig. 1). WTP was determined by dividing each attribute’s beta coefficient by the cost attribute’s beta coefficient (WTPi = βi / βcost), estimating how much each group would pay for specific attributes. Healthy individuals were willing to pay $13.36 more for high cybersecurity and data safety compared to low (calculated as βcybersecurity and data safety = 2.217 divided by βcost = −0.166). They were also willing to pay $10.11 more for three studies of proven effectiveness compared to none, $6.73 more for high user satisfaction compared to low (under 80), and $4.75 more for accreditation. Patients with chronic disease were willing to pay $12.35 more for high cybersecurity and data safety, $10.89 more for three studies on effectiveness, $6.82 more for high user satisfaction, and $5.65 more for accreditation. Physicians were willing to pay $12.92 more for high cybersecurity and data safety, $12.59 more for three studies on effectiveness, $10.06 more for high user satisfaction, and $5.86 more for accreditation. Physicians thus showed a higher WTP for studies on effectiveness and for user satisfaction than healthy individuals and patients with chronic disease. All groups showed a negative WTP for more frequent information delivery.
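The reported dollar values can be reproduced directly from the coefficients. A minimal sketch for the healthy-individual group, using the coefficients and βcost = −0.166 given above (the dictionary keys are shorthand labels, not variables from the actual analysis):

```python
# Reproduce the reported WTP values for the healthy-individual group.
# WTP_i = beta_i / beta_cost; the magnitude is reported in dollars.
betas = {
    "cybersecurity_and_data_safety": 2.217,
    "studies_on_effectiveness": 1.678,   # three studies vs. none
    "user_satisfaction": 1.118,          # high vs. low (under 80)
    "accreditation": 0.788,              # accredited vs. not
}
beta_cost = -0.166  # cost coefficient, per $1

wtp = {attr: round(abs(b / beta_cost), 2) for attr, b in betas.items()}
print(wtp)  # cybersecurity_and_data_safety: 13.36, ..., accreditation: 4.75
```

Applying the same ratio to the other groups’ coefficients yields the patient and physician values reported above.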
Discussion
This study attempted to confirm the preferences of patients with chronic disease, healthy individuals, and physicians when choosing healthcare service apps. The findings demonstrated that cybersecurity and data safety were the most critical attributes for all participant groups, followed by the number of published studies on effectiveness. All groups were willing to pay approximately $12 more for high cybersecurity and data safety and about $5 more for accreditation. Compared to other groups, physicians were willing to pay more for services backed by published studies on effectiveness ($12.59) and those with high user satisfaction ($10.06).
This study’s results suggest that the policy’s effectiveness can be reinforced by providing more validation of cybersecurity and data safety, which was the highest-preference factor across all groups. People take cybersecurity and data safety very seriously because healthcare data contains sensitive personal information that must not be compromised. A survey on the purchase of mobile healthcare apps found that the public considered security and privacy the second most important factors26. Information security was the most important factor in a study examining physicians’ preferences for mHealth services27. Other previous studies have also reported that data security is one factor that increases people’s WTP for digital health28,29. In addition, various preference studies have recognized cybersecurity and data safety as crucial factors in relation to digital healthcare products and information30,31,32,33. In South Korea, a survey conducted by a government agency found that improving digital healthcare requires ensuring the reliability and accuracy of data, protecting personal information, and establishing a robust security system. The results of our study further reinforce and support these previous research findings. The National Healthcare App Accreditation Program in Korea aims to ensure people’s safety when using healthcare service apps. Given the study’s findings, the accreditation program’s goals are well-aligned with people’s needs.
The results of this study also confirm the importance of government accreditation. All three groups showed significant WTP for mobile healthcare apps with official accreditation. This result suggests that, given the ever-increasing number of mobile healthcare apps, individuals may struggle to decide which apps to trust, and that government certification can serve as a valuable guide in their decision-making process. These findings align with a discrete choice experiment conducted in the United Kingdom, which revealed that physicians prefer mobile health services approved by the National Health Service23. In addition, conjoint analysis and discrete choice experiment studies that found people prefer government-developed digital health products or government-managed digital health data can be understood in the same context as our results30,34,35,36.
We identified differences in the relative importance and WTP for each attribute according to demographic characteristics. Our study found that patients with chronic disease tended to be willing to pay more for healthcare service apps with accreditation compared to healthy individuals. It is presumed that the government certification ensures verified service quality and safety, which likely contributes to the higher WTP among patients with chronic disease than healthy individuals. While not directly focused on accreditation, previous studies highlighting higher WTP among chronic disease patients may reflect similar underlying reasons37,38.
Women were willing to pay more than men for apps with solid cybersecurity and data safety features. This finding is consistent with previous studies showing that women tend to be more cautious in their cybersecurity practices than men39,40. However, in a study conducted in Hong Kong with the public, men showed a stronger preference for products with high security and privacy features than women26. This difference from previous studies could be attributed to the fact that this study’s descriptions additionally included information about potential failures and technical defects. However, further research is needed to understand these findings fully.
Individuals in their 60s and older had the highest willingness to pay for apps with accreditation. This may be because older people tend to trust accreditation more due to their relatively low digital literacy and lack of information about digital healthcare services41,42. Younger people were willing to pay relatively more for apps with solid cybersecurity and data safety and high user satisfaction than other age groups. Younger people regarded privacy as a more critical consideration in a study investigating preferences for mHealth technologies for managing depression33. Having grown up with digital devices from an early age, they conduct many of their activities online, making them likely to be more concerned about cybersecurity and data safety43. Additionally, since they often rely on other users’ ratings and reviews when purchasing products online, user satisfaction with healthcare service apps may play an essential role in their decision-making. Considering this relative importance and WTP, the attributes of healthcare services should be emphasized differently depending on the age of the target consumer44,45.
Physicians preferred mobile healthcare apps with demonstrated clinical effectiveness, consistent with earlier studies conducted among physicians in the UK23. In a study examining the preferences of German medical students regarding digital mental health interventions, the scientific evidence base was identified as a significant factor influencing their choices46. Respiratory specialists, Dutch dermatologists, Norwegian general practitioners, and Belgian family doctors also required strong evidence before incorporating adaptive digital health technology into clinical practice47,48,49,50,51. For a healthcare service app to be used effectively, it is necessary to first persuade medical professionals by confirming its effectiveness before they recommend it to their patients.
In this study, more frequent information delivery was valued negatively or not at all, in contrast to the other factors, which contributed positively to people’s preferences. This can be interpreted as users preferring to receive only a limited amount of information. Therefore, efforts should be made to provide users with customized rather than excessive information.
This study is meaningful in confirming the preferences for healthcare service apps among various groups: medical professionals, patients with chronic disease, and healthy individuals. However, this study has several limitations. First, there may be inherent limitations because our research was based on an online survey, which may yield different results from purchasing situations in a real-world setting. Specifically, we presented only two options per question, whereas, in reality, consumers typically compare a wider range of services before deciding. Additionally, because the survey was conducted online, we could not verify whether participants had a medical condition. Second, there are limitations related to the attributes and levels. People may consider attributes not included in this study to be more critical. It was also challenging to ensure that participants fully understood the attributes and levels given the nature of an online survey, even though we continuously displayed explanations and required at least 15 seconds per question. Future studies could address this by including a quiz to confirm understanding after the explanation of attributes and levels, and by providing sufficient explanation of each attribute and level52. Additionally, we did not provide visuals, which might have aided participants’ understanding. Third, the patient group included only those with high blood pressure or diabetes, and patients with other chronic diseases may have different preferences. Regarding the healthy individual group, our screening may also have been imprecise, as we only asked whether participants had received a chronic disease-related diagnosis, using hypertension and diabetes as examples. Fourth, the smaller sample size of the physician group (n = 97) may limit this study’s generalizability and the robustness of between-group comparisons.
The survey may also not fully represent South Korean physicians, as participants were predominantly younger and not all age groups were included. These factors should be considered when interpreting the results.
In conclusion, digital healthcare technology will continue to evolve, and more healthcare service apps are expected to be developed. For these services to be used effectively, users’ preferences must be identified. Governments that manage and authorize these services also need to implement policies based on users’ and providers’ needs.
Methods
Study design
We conducted a choice-based conjoint analysis (CBCA), a preference-elicitation method that uses a survey instrument to require participants to trade off between attributes, allowing their underlying preferences to be estimated through statistical analysis53,54,55. The basic principle of CBCA is that the profiles that constitute a choice set share the same attributes but are differentiated by the attributes’ levels, which are controlled experimentally. Participants make choices in a series of choice sets, and the pattern of responses allows us to quantify the impact of changes in attribute levels on choice56. There are several types of CBCA; the two main ones are traditional CBCA, which presents fixed choice tasks with set combinations of product attributes, and adaptive choice-based conjoint (ACBC), in which tasks are adjusted based on the respondent’s preferred attributes identified from initial choices57,58. In this study, we adopted the traditional CBCA approach of presenting fixed choice tasks. The methodological framework of this study (including instrument, design, deployment, and data analysis) was consistent with the Good Research Practices for Conjoint Analysis of ISPOR, the Professional Society for Health Economics and Outcomes Research59.
Development of attributes and levels
The attributes and levels in this study were designed based on the evaluation criteria of the ‘National Healthcare App Accreditation Program in Korea.’ The accreditation program divides certifications into three categories (chronic disease management, lifestyle modification management, and simple information delivery) according to the characteristics of mobile applications (see Supplementary Table 2). Consistent with previous research, in which many studies used six attributes, we also set the number of attributes in our study to six60. Of the 14 indicators in the accreditation program, we selected the four that are evaluated in all three certification types: (1) the number of studies on effectiveness, (2) frequency of information delivery, (3) cybersecurity and data safety, and (4) user satisfaction. We also included two additional variables, ‘accreditation’ and ‘cost’, resulting in six attributes (see Table 4). To determine the levels for the selected attributes, we either directly adopted the indicators from the accreditation program or referred to the levels utilized in previous studies22,23,26,61,62.
The number of studies on effectiveness
This attribute was divided into levels of 0, 1, 2, and 3. In previous studies, the levels for the ‘number of studies concerning safety and effectiveness’ were set at 0, 1, 2, and 3, while the levels for ‘proven effectiveness’ were set as ‘yes’ or ‘not yet’22,23. In our study, we opted for polytomous rather than binary levels to derive more meaningful analyses.
Frequency of information delivery
The levels for the frequency of information delivery were set at 2, 4, and 6 times per month. Due to the lack of prior research on this attribute, these levels were aligned with the evaluation criteria of the Korean accreditation program.
Cybersecurity and data safety
We categorized cybersecurity and data safety into three levels (low, medium, and high) due to the difficulty of assigning a precise numerical score to this attribute. Similarly, previous research has classified the attribute of ‘security and privacy’ into three levels: no security assurance, some assurance, and complete security assurance26.
User satisfaction
User satisfaction was categorized as under 80, 80–89, and 90 or above (out of 100). In previous studies, attributes such as ‘the app has been recommended by other healthcare professionals’ or ‘the ratings of the app’ were used, with levels defined as ‘yes or no’ or ‘3.2, 4.0, 4.8 (out of 5)’, respectively23,26. We maintained three levels but used a more intuitive and familiar scale, dividing scores out of 100 into increments of 10.
Accreditation
Following previous research, the accreditation level was set as binary, with ‘yes’ or ‘no’ options23.
Cost
Our previous research found that WTP for mobile health services in South Korea ranged from $9.3 to $16.0, depending on service experience and type37. Based on these findings, we set the cost levels at multiples of 5, ranging from a minimum of $0 to a maximum of $15 (i.e., $0, $5, $10, and $15).
We utilized software provided by Sawtooth (Utah, USA) to design our CBCA, generating 15 paired questions to present the attributes and levels of the service. Sawtooth is widely recognized in the field of conjoint analysis and allowed us to efficiently model and measure respondents’ trade-offs between different service attributes63. According to previous research, the choice formats typically used are binary, where respondents choose between two alternatives, or ternary, where three alternatives are provided64. Following prior studies, we presented binary choices, which allowed us to manage the cognitive load on respondents while still gathering robust data22,23,26,65.
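Sawtooth’s design algorithm is proprietary, but the design space it draws from can be sketched: the six attributes and their levels define a full factorial of candidate profiles, from which paired choice tasks are assembled. The following is a simplified illustration only (random pairing; real CBC designs additionally balance level frequencies and overlap):

```python
import itertools
import random

# Attribute levels as defined for this study (see Table 4).
levels = {
    "studies_on_effectiveness": [0, 1, 2, 3],
    "info_delivery_per_month": [2, 4, 6],
    "cybersecurity_and_data_safety": ["low", "medium", "high"],
    "user_satisfaction": ["under 80", "80-89", "90 or above"],
    "accreditation": ["yes", "no"],
    "cost_usd": [0, 5, 10, 15],
}

# Full factorial: every combination of levels is a candidate profile.
profiles = [dict(zip(levels, combo))
            for combo in itertools.product(*levels.values())]
assert len(profiles) == 4 * 3 * 3 * 3 * 2 * 4  # 864 candidate profiles

# Draw 15 paired choice tasks, two distinct profiles per task.
random.seed(0)
tasks = [random.sample(profiles, 2) for _ in range(15)]
```

Each element of `tasks` corresponds to one binary choice question of the kind shown to respondents.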
Participants
An online/mobile survey was conducted targeting healthy individuals aged 20 years or older (n = 407), patients with hypertension or diabetes (n = 589), and physicians (n = 97). The inclusion criteria were as follows: (1) adults aged 19 years or older; (2) individuals without any diagnosed chronic disease, patients diagnosed with hypertension or diabetes, or physicians with a medical license number; and (3) no difficulty in completing the online questionnaire. The exclusion criteria were (1) those who did not consent to the study and (2) those who lacked the decision-making capacity to complete the online survey.
For healthy individuals and patients with hypertension or diabetes, e-mails were sent to online panel members registered with Gallup Korea (Seoul, Korea), outlining the survey and inviting them to participate if desired. For the healthy individual group, we asked whether they had ever been diagnosed with a chronic disease, using hypertension and diabetes as examples, and included only those who reported no diagnosis. For the patient group, we asked whether they had been diagnosed with hypertension (ICD-10: I10–I15) or diabetes (ICD-10: E10–E14), and those who responded that they had been diagnosed were included. Healthy individuals were recruited through proportional allocation based on sex, age, and region in Korea as of 2022. Patients with hypertension or diabetes participated in the survey on a first-come, first-served basis because it was difficult to determine the parameters for proportional allocation.
The physician participants were recruited through an announcement posted on a physician community website with approximately 23,000 registered physicians. The website verifies the license numbers of its members to ensure they are practicing physicians. The website allows physicians to chat, use anonymous message boards, provide lecture content, and post job listings. It also offers a service for surveying physicians, which we used to recruit participants and conduct the survey.
Healthy individuals and patients who participated in the survey received $2.50 (1 USD = 1200 KRW) as an incentive, and physicians received $16.67. The survey of healthy individuals and patients with chronic disease was conducted between January 16 and February 2, 2023, and the survey of physicians between August 10 and 14, 2023. No follow-up reminders were sent to participants.
Online survey procedure
The participants first responded to questions about their demographic characteristics (sex, age, and place of residence). Only the patients then answered additional questions regarding the duration of their illness, medication, family history of hypertension or diabetes, and complications. Healthy individuals and physicians did not answer these questions and proceeded directly to the next step. Next, they read an explanation of mobile-based healthcare services. This service assists individuals in managing their health by using personal data entered through a smartphone, such as steps, weight, and meal records. Based on this data, the service creates personalized exercise and diet plans. It also provides support through regular messages that offer tailored counseling, education, and advice, helping users keep healthy habits and make informed health decisions.
Subsequently, the participants were asked to choose their preferred service from two paired services, each presenting different levels of the given attributes, as shown in the example in Fig. 2. They completed a total of 15 such questions. To maximize the information collected and reduce interpretation bias, we did not provide an opt-out option and required participants to select one of the two service options. We also required participants to spend a minimum of 15 seconds on each question, ensuring they had sufficient time to consider the attributes and levels. This required time was set in consultation with Gallup Korea, recognizing that participants in online research can easily respond quickly without adequate consideration. An explanatory text, as presented in Table 4, was provided to help participants understand the factors accurately; it was displayed continuously at the top of the survey page so they could view it while answering. In addition, to help participants identify the differences in levels for each attribute, we used different font colors to highlight the differences between the two choices. The questionnaire items were randomly rotated to reduce order bias. Gallup Korea developed the survey pages.
Statistical analysis
For this study, we analyzed participant preferences using a conditional logit model implemented in Stata version 16. We coded all attributes as categorical variables, except for cost, which was treated as continuous to identify trade-offs in WTP for attributes of the healthcare service app23. We estimated a main-effects model for each group. A WTP analysis was also conducted to understand how participants are willing to trade one attribute for another. This is crucial in this study because differences in preferences cannot be compared solely on the basis of beta coefficients from a main-effects analysis, given that the study involves three different populations. As shown in Eq. (1), the WTP for each attribute was calculated by dividing each attribute’s beta coefficient by the cost attribute’s beta coefficient, enabling us to estimate the WTP for specific attributes23.
WTPi = βi / βcost  (1)

where WTPi represents the willingness to pay for attribute i, βi is the beta coefficient for attribute i, and βcost is the beta coefficient for the cost attribute.
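Under the conditional logit model, these coefficients also determine choice probabilities: the probability of choosing one profile in a pair depends only on the difference in the profiles’ linear utilities. A minimal sketch, plugging the healthy-group coefficients for high cybersecurity (2.217) and cost (−0.166) from the Results into a hypothetical two-profile task:

```python
import math

def choice_probability(utility_a, utility_b):
    """Conditional logit for a paired task: P(choose A) = e^V_A / (e^V_A + e^V_B)."""
    return math.exp(utility_a) / (math.exp(utility_a) + math.exp(utility_b))

# Hypothetical task: profile A offers high cybersecurity at $10; profile B has
# low cybersecurity (reference level, utility contribution 0) and is free.
v_a = 2.217 + (-0.166) * 10  # = 0.557
v_b = 0.0
p_a = choice_probability(v_a, v_b)  # ~0.64: A is chosen about 64% of the time
```

Equal utilities give a probability of 0.5; the WTP in Eq. (1) is exactly the price difference that equalizes the two utilities.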
We further divided participants into subgroups based on age and gender and calculated each subgroup’s WTP. This analysis reveals how much each subgroup would be willing to pay for a particular attribute.
Ethics approval
All procedures were conducted according to ethical standards and the principles outlined in the Declaration of Helsinki. This study was approved by the Institutional Review Board of Yonsei University (4-2022-1517). After reading the explanation page, which included information about the study’s goals, participants, and data storage time, all participants consented. There was no preregistration for this study.
Data availability
The data gathered and examined in this research can be accessed only through the corresponding author, provided that the request is justified and approved by the institutional review board.
References
Moses, J. C. et al. Application of smartphone technologies in disease monitoring: a systematic review. In Healthcare 889 (MDPI, 2021).
Grundy, Q. A review of the quality and impact of mobile health apps. Annu. Rev. Public Health 43, 117–134 (2022).
Galetsi, P., Katsaliaki, K. & Kumar, S. Exploring benefits and ethical challenges in the rise of mHealth (mobile healthcare) technology for the common good: An analysis of mobile applications for health specialists. Technovation 121, 102598 (2023).
Peng, C. et al. Theme trends and knowledge structure on mobile health apps: Bibliometric analysis. JMIR mHealth uHealth 8, e18212 (2020).
Joo, E. et al. Smartphone users’ persuasion knowledge in the context of consumer mHealth apps: qualitative study. JMIR mHealth uHealth 9, e16518 (2021).
Agarwal, P. et al. Assessing the quality of mobile applications in chronic disease management: a scoping review. npj Digit. Med. 4, 46 (2021).
Perret, S. et al. Standardising the role of a digital navigator in behavioural health: a systematic review. Lancet Digit. Health 5, e925–e932 (2023).
Tison, G. H. & Marcus, G. M. Will the smartphone become a useful tool to promote physical activity?. Lancet Digit. Health 1, e322–e323 (2019).
Aitken, M. & Nass, D. Digital health trends 2021: innovation, evidence, regulation, and adoption. Slideshare. URL: https://www.slideshare.net/RicardoCaabate/digital-health-trends-2021-iqvia-global [accessed 2022-06-08] (2021).
Greifeneder, R., Scheibehenne, B. & Kleber, N. Less may be more when choosing is difficult: Choice complexity and too much choice. Acta Psychol. 133, 45–50 (2010).
Chernev, A., Böckenholt, U. & Goodman, J. Choice overload: A conceptual review and meta-analysis. J. Consum. Psychol. 25, 333–358 (2015).
Scheibehenne, B., Greifeneder, R. & Todd, P. M. Can there ever be too many options? A meta-analytic review of choice overload. J. Consum. Res. 37, 409–425 (2010).
Klerings, I., Weinhandl, A. S. & Thaler, K. J. Information overload in healthcare: too much of a good thing? Z. Evid Fortbild. Gesundhwes. 109, 285–290 (2015).
Wilson, T. D. Information overload: implications for healthcare services. Health Inform. J. 7, 112–117 (2001).
Hall, A. & Walton, G. Information overload within the health care system: a literature review. Health Inf. Lib. J. 21, 102–108 (2004).
Baxter, C., Carroll, J.-A., Keogh, B. & Vandelanotte, C. Assessment of mobile health apps using built-in smartphone sensors for diagnosis and treatment: systematic survey of apps listed in international curated health app libraries. JMIR mHealth uHealth 8, e16741 (2020).
Rawnsley, C. H. “I just think you need to find the right app for the right person”. What do New Zealand mental health clinicians and trainees think about digital mental health tools: a mixed-methods study. (ResearchSpace@Auckland, 2022).
Ferretti, A., Ronchi, E. & Vayena, E. From principles to practice: benchmarking government guidance on health apps. Lancet Digit. Health 1, e55–e57 (2019).
Kim, S. Y. et al. Survey for government policies regarding strategies for the commercialization and globalization of digital therapeutics. Yonsei Med. J. 63, S56 (2022).
Unsworth, H. et al. The NICE evidence standards framework for digital health and care technologies–developing and maintaining an innovative evidence framework with global impact. Digit. Health 7, 20552076211018617 (2021).
Magrabi, F. et al. Why is it so difficult to govern mobile apps in healthcare? BMJ Health Care Inform. 26, e100006 (2019).
Phillips, E. A., Himmler, S. F. & Schreyögg, J. Preferences for e-mental health interventions in Germany: a discrete choice experiment. Value Health 24, 421–430 (2021).
Leigh, S., Ashall-Payne, L. & Andrews, T. Barriers and facilitators to the adoption of mobile health among health care professionals from the United Kingdom: discrete choice experiment. JMIR mHealth uHealth 8, e17704 (2020).
Castro, E. M., Van Regenmortel, T., Sermeus, W. & Vanhaecht, K. Patients’ experiential knowledge and expertise in health care: A hybrid concept analysis. Soc. Theory Health 17, 307–330 (2019).
Halloy, A., Simon, E. & Hejoaka, F. Defining patient’s experiential knowledge: Who, what and how patients know. A narrative critical review. Sociol. Health Illn. 45, 405–422 (2023).
Xie, Z. & Or, C. K. Consumers’ preferences for purchasing mhealth apps: discrete choice experiment. JMIR mHealth uHealth 11, e25908 (2023).
Jiang, S. et al. Medical personnel behavior preferences for providing mHealth service in China: A discrete choice experiment. Risk Manag. Healthc. Policy 16, 2405–2418 (2023).
Lupiáñez-Villanueva, F., Folkvord, F. & Abeele, M. V. Influence of the business revenue, recommendation, and provider models on mobile health app adoption: three-country experimental vignette study. JMIR mHealth uHealth 8, e17272 (2020).
Saldarriaga, E. M. et al. Assessing payers’ preferences for real-world evidence in the United States: a discrete choice experiment. Value Health 25, 443–450 (2022).
McDonald, R. L., Skatova, A. & Maple, C. Attitudes towards Sharing Personal Data: a Discrete Choice Experiment. https://doi.org/10.31234/osf.io/qz3yp (2023).
Gupta, R. et al. Consumer views on privacy protections and sharing of personal digital health information. JAMA Netw. Open 6, e231305 (2023).
von Huben, A. et al. Stakeholder preferences for attributes of digital health technologies to consider in health service funding. Int. J. Technol. Assess. Health Care 39, e12 (2023).
Simblett, S. et al. Patient preferences for key drivers and facilitators of adoption of mHealth technology to manage depression: A discrete choice experiment. J. Affect. Disord. 331, 334–341 (2023).
Lim, D., Norman, R. & Robinson, S. Consumer preference to utilise a mobile health app: A stated preference experiment. PloS One 15, e0229546 (2020).
Johansson, J. V. et al. Preferences of the public for sharing health data: discrete choice experiment. JMIR Med. Inform. 9, e29614 (2021).
Biasiotto, R. et al. Public preferences for digital health data sharing: discrete choice experiment study in 12 European countries. J. Med. Internet Res. 25, e47066 (2023).
Lee, J. et al. Willingness to use and pay for digital health care services according to 4 scenarios: results from a national survey. JMIR mHealth uHealth 11, e40834 (2023).
Chua, V., Koh, J. H., Koh, C. H. G. & Tyagi, S. The willingness to pay for telemedicine among patients with chronic diseases: systematic review. J. Med. Internet Res. 24, e33372 (2022).
Anwar, M. et al. Gender difference and employees’ cybersecurity behaviors. Comput. Hum. Behav. 69, 437–443 (2017).
Ameen, N., Tarhini, A., Shah, M. H. & Madichie, N. O. Employees’ behavioural intention to smartphone security: A gender-based, cross-national study. Comput. Hum. Behav. 104, 106184 (2020).
Gualtieri, L., Phillips, J., Rosenbluth, S. & Synoracki, S. Digital literacy: A barrier to adoption of connected health technologies in older adults. Iproceedings 4, e11803 (2018).
Tappen, R. M., Cooley, M. E., Luckmann, R. & Panday, S. Digital health information disparities in older adults: a mixed methods study. J. Racial Ethn. Health Dispar. 9, 82–92 (2022).
Sulaiman, N. S. et al. A Review of Cyber Security Awareness (CSA) Among Young Generation: Issue and Countermeasure. In International Conference on Emerging Technologies and Intelligent Systems 957–967 (Springer, 2021).
Ji, Y.-A. & Kim, H.-S. Scoping review of the literature on Smart Healthcare for older adults. Yonsei Med. J. 63, S14 (2022).
Matthias, K., Honekamp, I., Heinrich, M. & De Santis, K. K. Consideration of sex, gender, or age on outcomes of digital technologies for treatment and monitoring of chronic obstructive pulmonary disease: overview of systematic reviews. J. Med. Internet Res. 25, e49639 (2023).
Vomhof, M. et al. Preferences regarding information strategies for digital mental health interventions among medical students: discrete choice experiment. JMIR Form. Res. 8, e55921 (2024).
Slevin, P. et al. Exploring the barriers and facilitators for the use of digital health technologies for the management of COPD: a qualitative study of clinician perceptions. QJM: Int. J. Med. 113, 163–172 (2020).
Ariens, L. F. et al. Barriers and facilitators to eHealth use in daily practice: perspectives of patients and professionals in dermatology. J. Med. Internet Res. 19, e300 (2017).
Fagerlund, A. J., Holm, I. M. & Zanaboni, P. General practitioners’ perceptions towards the use of digital health services for citizens in primary care: a qualitative interview study. BMJ Open 9, e028251 (2019).
Mutebi, I. & Devroey, D. Perceptions on mobile health in the primary healthcare setting in Belgium. Mhealth 4, 1–6 (2018).
Byambasuren, O., Beller, E. & Glasziou, P. Current knowledge and adoption of mobile health apps among Australian general practitioners: survey study. JMIR mHealth uHealth 7, e13199 (2019).
Ang, I. Y. H. et al. Preferences and willingness-to-pay for a blood pressure telemonitoring program using a discrete choice experiment. NPJ Digit. Med. 6, 176 (2023).
Bridges, J. Stated preference methods in health care evaluation: an emerging methodological paradigm in health economics. Appl. Health Econ. Health Policy 2, 213–224 (2003).
Al-Omari, B., Farhat, J. & Ershaid, M. Conjoint Analysis: A research method to study patients’ preferences and personalize care. J. Pers. Med. 12, 274 (2022).
Chrzan, K. & Orme, B. An overview and comparison of design strategies for choice-based conjoint analysis. Sawtooth Softw. Res. Pap. Ser. 98382, 161–178 (2000).
Hauber, A. B. et al. Statistical methods for the analysis of discrete choice experiments: a report of the ISPOR conjoint analysis good research practices task force. Value Health 19, 300–315 (2016).
Chapman, C. N. et al. CBC vs. ACBC: Comparing results with real product selection. In 2009 Sawtooth Software Conference Proceedings 25–27 (2009).
Cunningham, C. E., Deal, K. & Chen, Y. Adaptive choice-based conjoint analysis: a new patient-centered approach to the assessment of health service preferences. Patient: Patient-Center. Outcomes Res. 3, 257–273 (2010).
Bridges, J. F. et al. Conjoint analysis applications in health—a checklist: a report of the ISPOR Good Research Practices for Conjoint Analysis Task Force. Value Health 14, 403–413 (2011).
Marshall, D. et al. Conjoint analysis applications in health—how are studies being designed and reported? An update on current practice in the published literature between 2005 and 2008. Patient: Patient-Center. Outcomes Res. 3, 249–256 (2010).
Ploug, T., Sundby, A., Moeslund, T. B. & Holm, S. Population preferences for performance and explainability of artificial intelligence in health care: choice-based conjoint survey. J. Med. Internet Res. 23, e26611 (2021).
Szinay, D. et al. Understanding uptake of digital health products: methodology tutorial for a discrete choice experiment using the bayesian efficient design. J. Med. Internet Res. 23, e32365 (2021).
Johnson, F. R. et al. Constructing experimental designs for discrete-choice experiments: report of the ISPOR conjoint analysis experimental design good research practices task force. Value Health 16, 3–13 (2013).
Larsen, A., Tele, A. & Kumar, M. Mental health service preferences of patients and providers: a scoping review of conjoint analysis and discrete choice experiments from global public health literature over the last 20 years (1999–2019). BMC Health Serv. Res. 21, 589 (2021).
Ng-Mak, D. et al. Patient preferences for important attributes of bipolar depression treatments: a discrete choice experiment. Patient Prefer. Adherence 12, 35–44 (2018).
Acknowledgements
This research was supported by the SmartTech Clinical Research Center (SCRC), funded by the Ministry of Health and Welfare, Republic of Korea (grant number RS-2023-KH142022).
Author information
Authors and Affiliations
Contributions
Concept and design: J.B.L. and J.H.K. Acquisition, analysis, or interpretation of data: J.B.L., J.H.K., M.G.C., and J.Y.S. Drafting of the manuscript: J.B.L. Critical revision of the manuscript for important intellectual content: J.B.L., J.H.K., and J.Y.S. Administrative, technical, or material support: J.Y.S. Supervision: J.Y.S. All authors read and approved the final manuscript.
Corresponding author
Ethics declarations
Competing interests
The authors declare no competing interests.
Additional information
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Supplementary information
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.
About this article
Cite this article
Lee, J., Kim, J.H., Choi, M. et al. A choice based conjoint analysis of mobile healthcare application preferences among physicians, patients, and individuals. npj Digit. Med. 8, 244 (2025). https://doi.org/10.1038/s41746-025-01610-5