Introduction

Digital phenotyping (DP) refers to the moment-by-moment quantification of the individual-level human phenotype using data from personal digital devices such as smartphones and wearables1. It involves the collection and analysis of behavioural and physiological data to generate insights into an individual’s mental and physical states in real-time1,2,3. DP has gained significant interest for use in mental health care2,4,5,6. By leveraging wearable devices and smartphones, DP offers real-time insights into individuals’ health, enabling the detection of subtle changes in mental and physical states that were previously difficult to capture7,8,9. This technique shows high sensitivity in detecting early signs of mental illness6 and can help predict relapse using smartphone data days before it becomes clinically apparent10,11,12,13,14. Recent work has even suggested that DP ‘could support gold-standard assessment and…predict symptom exacerbations’6. This is particularly promising in mental health care, as early intervention can dramatically improve outcomes14,15 for conditions such as depression16,17,18, anxiety17,19,20,21,22,23, and serious mental illnesses such as psychotic disorders7,8,16,17,18,19,20.

Despite its potential, DP faces critical technical challenges and usability barriers that undermine its reliability and scalability24,25. These challenges are compounded by the absence of standardisation in methodologies, which results in variability across platforms and studies, limiting the reproducibility and generalisability of findings. In this Perspective, we outline these challenges and propose strategies for developing universal frameworks and protocols to enable more reliable, scalable and impactful applications of DP.

Addressing technical challenges

Battery life and power consumption

One of the primary technical challenges of sensor-based data collection is battery life25. Wearable devices and smartphones rely on continuous power to collect and transmit data25, and sensing streams such as GPS tracking26,27,28, accelerometry29 and continuous heart rate monitoring30,31,32 consume significant energy25. At a refresh rate of 1 Hz (the number of times the screen updates per second), smartphones experience rapid battery drainage, with Samsung devices lasting approximately 6 h and iPhones approximately 5.5 h28. Location services such as GPS tracking consume approximately 13% of battery life when operating with a strong signal, but in areas with weak signal strength consumption can rise to as much as 38%27,33. Using accelerometer-based continuous sensing apps (CSAs) such as Google Fit34 further increases battery consumption, particularly during high-mobility activities such as jogging or other exercise29; walking or running can increase consumption by up to 3–4 times29,35. In day-long experiments, smartphones running CSAs showed battery consumption up to three times higher than phones without such apps, particularly during physical activity29. Significant battery drainage is also seen when photoplethysmography in digital devices36 is used for heart rate monitoring30,31,32. Continuous heart rate monitoring requires high energy because of frequent processing and wireless data transmission to remote servers31. This limits smartphone use in real-world scenarios to approximately 9 h on average, inconveniencing users who must recharge their devices during the day31. Wearables face similar drainage during heart rate monitoring because of their data transmission requirements30,32. These constraints limit utility in long-term studies and real-time monitoring scenarios, as frequent recharging can disrupt data collection and reduce user compliance24,25.

One approach to improve energy efficiency is adaptive sampling, which dynamically adjusts the frequency of sensor data collection based on user activity37. This reduces unnecessary power consumption by lowering the sampling rate when the user is stationary and increasing it only during movement38,39. Another strategy is sensor duty cycling, which alternates between low-power sensors, such as accelerometers, and high-power sensors, such as GPS and heart rate monitors40. By activating power-intensive sensors only when necessary, duty cycling conserves battery life without compromising data quality41. Additionally, the development of low-power wearable devices, leveraging energy-efficient chipsets42, Bluetooth Low Energy (BLE)43 and hardware-based power management algorithms, enables prolonged monitoring while reducing the frequency of recharging42,44. These innovations can allow DP applications to minimise battery drain, enhance usability and improve participant compliance in long-term studies.
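As an illustrative sketch only, the following Kotlin fragment shows how adaptive sampling and duty cycling might be combined: a continuously running low-power accelerometer classifies activity, and the interval between high-power GPS fixes is adjusted accordingly. The class names, intervals and activity threshold are hypothetical and would need tuning for a given device and study.

```kotlin
import kotlin.math.sqrt

// Hypothetical sketch; real implementations would wrap platform APIs
// (e.g., Android's SensorManager or a wearable SDK).
enum class ActivityState { STATIONARY, MOVING }

class AdaptiveSampler(
    private val stationaryIntervalMs: Long = 60_000, // sparse GPS fixes at rest
    private val movingIntervalMs: Long = 5_000       // dense GPS fixes in motion
) {
    // Classify activity from recent accelerometer magnitudes (m/s^2).
    // The threshold is illustrative; tuning depends on device placement and noise.
    fun classify(accelWindow: List<Triple<Float, Float, Float>>): ActivityState {
        val meanMagnitude = accelWindow
            .map { (x, y, z) -> sqrt(x * x + y * y + z * z) }
            .average()
        return if (meanMagnitude > 10.5) ActivityState.MOVING else ActivityState.STATIONARY
    }

    // Duty cycling: only the low-power accelerometer runs continuously;
    // the high-power GPS interval is adjusted from its output.
    fun nextGpsIntervalMs(state: ActivityState): Long =
        when (state) {
            ActivityState.STATIONARY -> stationaryIntervalMs
            ActivityState.MOVING -> movingIntervalMs
        }
}

fun main() {
    val sampler = AdaptiveSampler()
    val restingWindow = List(50) { Triple(0.1f, 0.2f, 9.8f) }  // roughly gravity only
    val walkingWindow = List(50) { Triple(2.5f, 1.5f, 11.0f) } // added motion
    println(sampler.nextGpsIntervalMs(sampler.classify(restingWindow))) // 60000
    println(sampler.nextGpsIntervalMs(sampler.classify(walkingWindow))) // 5000
}
```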

Furthermore, researchers can strategically select and prioritise sensors based on study aims and resource constraints. For instance, short-term studies focused on movement may prioritise inertial measurement unit (IMU) sensors, while long-term studies assessing autonomic function may rely on intermittent heart rate variability (HRV) sampling.

Device selection is another critical consideration. For example, the Polar H10 chest strap is known for accurate HRV data collection with excellent battery life (up to 400 h)45,46, while the ActiGraph GT9X offers reliable IMU data with long-term battery support suitable for week-long recordings47. Wrist-worn devices such as the Fitbit Charge 5 balance heart rate monitoring with moderate battery life (approximately 7 days) but may offer lower data granularity. Selecting devices with built-in power-saving modes or configurable sampling rates can optimise both data quality and battery performance48,49.

By combining hardware-efficient design, adaptive sampling and intentional feature prioritisation, DP studies can maintain a balance between data richness and battery feasibility. These decisions should be guided by the specific use case, data fidelity requirements and the anticipated level of participant engagement.

Device compatibility and app development

The heterogeneity of devices and operating systems presents another technical hurdle50. Smartphones and wearables come from various manufacturers, each with unique hardware configurations and software ecosystems25,50, leading to inconsistencies in data collection and integration, as certain devices may not support specific sensors or data formats25,50. For example, some data collection applications for DP only work on iOS51,52,53 or Android54, excluding many potential participants and their data from studies.

Beyond hardware and software differences, the choice between cross-platform and native app development further influences data collection reliability55. Cross-platform development allows applications to run on multiple operating systems using a single codebase, leveraging frameworks such as React Native, Flutter, or Xamarin55. While this approach improves accessibility and reduces development time, it often comes at the cost of performance and customisation. In contrast, native development involves building applications specifically for a single platform or operating system (e.g., Swift for iOS or Kotlin for Android), allowing deeper integration with system-level features and optimised performance55,56. Given that DP applications rely heavily on sensor-based data collection and real-time processing57, cross-platform solutions may not be the most suitable approach. Native development provides greater control over data handling, seamless integration with platform-specific health APIs, and optimised input/output (I/O) operations, making it a more reliable choice for applications requiring continuous data monitoring and precise hardware interaction56.
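As a minimal sketch of what such native integration can look like, the following Kotlin fragment registers a heart rate listener against Android's SensorManager API; the sampling delay and the downstream `handleSample()` sink are placeholder choices, and a real deployment would also handle runtime permissions (BODY_SENSORS) and background execution limits.

```kotlin
import android.content.Context
import android.hardware.Sensor
import android.hardware.SensorEvent
import android.hardware.SensorEventListener
import android.hardware.SensorManager

// Registers a heart rate listener directly against the platform sensor stack.
class HeartRateCollector(context: Context) : SensorEventListener {

    private val sensorManager =
        context.getSystemService(Context.SENSOR_SERVICE) as SensorManager
    private val heartRateSensor: Sensor? =
        sensorManager.getDefaultSensor(Sensor.TYPE_HEART_RATE)

    fun start() {
        heartRateSensor?.let { sensor ->
            // SENSOR_DELAY_NORMAL trades temporal resolution for battery life;
            // a custom sampling period in microseconds can be passed instead.
            sensorManager.registerListener(this, sensor, SensorManager.SENSOR_DELAY_NORMAL)
        }
    }

    fun stop() = sensorManager.unregisterListener(this)

    override fun onSensorChanged(event: SensorEvent?) {
        if (event == null) return
        val bpm = event.values[0]          // beats per minute reported by the sensor
        handleSample(bpm, event.timestamp) // event time in nanoseconds since boot
    }

    override fun onAccuracyChanged(sensor: Sensor?, accuracy: Int) { /* no-op */ }

    private fun handleSample(bpm: Float, timestampNs: Long) {
        // Placeholder sink: queue the sample for local storage or encrypted upload.
    }
}
```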

Recent advances in Generative AI (GenAI), particularly large language models (LLMs) and diffusion-based architectures, offer new opportunities for enhancing DP. GenAI can support the automated synthesis and contextual understanding of unstructured behavioural data such as speech, social media, journaling and passive text inputs58,59,60. For example, LLMs such as Generative Pretrained Transformers (GPT) and Bidirectional Encoder Representations from Transformers (BERT) variants have been shown to detect depressive or anxious language patterns with high sensitivity61,62. In clinical research, fine-tuned GenAI models can assist in generating individualised behavioural baselines, summarising daily mood reports, or simulating realistic synthetic data for rare psychiatric presentations63,64,65. Additionally, generative models can support just-in-time adaptive interventions (JITAIs) by tailoring mental health content or therapeutic prompts based on real-time sensor input and user preferences66. In DP applications for low-resource settings, GenAI can improve accessibility through natural language generation in regional languages and simplification of app interfaces for low-literacy users67. Careful benchmarking, human oversight and ethical safeguards are necessary for its responsible deployment in DP.
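A hedged sketch of the JITAI idea is shown below: passively sensed features are condensed into a constrained prompt for a generative model, with explicit guardrails against diagnostic language. The `LlmClient` interface, field names and prompt wording are hypothetical and stand in for whatever local or remote model a study actually uses, and generated drafts would still require clinician-approved filtering before delivery.

```kotlin
// Hypothetical interface: the concrete client (local model or remote API)
// is an assumption, not a specific vendor SDK.
interface LlmClient {
    fun complete(prompt: String): String
}

data class SensorSummary(
    val sleepHours: Double,
    val stepCount: Int,
    val meanHeartRate: Double
)

// Builds a constrained prompt so the generative model tailors a brief,
// non-diagnostic check-in message from passively sensed features.
fun buildJitaiPrompt(summary: SensorSummary, preferredLanguage: String): String = """
    You are drafting a short, supportive check-in message for a study participant.
    Do not diagnose or give medical advice. Write in $preferredLanguage, under 40 words.
    Context from the past 24 hours:
    - Sleep: ${summary.sleepHours} hours
    - Steps: ${summary.stepCount}
    - Mean heart rate: ${summary.meanHeartRate} bpm
""".trimIndent()

fun generateIntervention(client: LlmClient, summary: SensorSummary, language: String): String {
    val draft = client.complete(buildJitaiPrompt(summary, language))
    // Human oversight: in practice the draft would pass through clinician-approved
    // content filters before being shown to the participant.
    return draft
}
```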

Interoperability is a critical area for innovation. The development of open-source frameworks and standardised APIs can facilitate seamless integration of data across various devices and platforms, fostering collaborative research and scalability59,68. Additionally, AI-powered natural language processing (NLP) and sentiment analysis can unlock new dimensions of behavioural insights by analysing voice and text data69,70. These advancements can significantly enhance the ability to detect subtle changes in mental health status.

To ensure inclusivity, technological development must prioritise accessibility. This involves designing energy-efficient devices with lower costs and user-friendly interfaces, making DP feasible for diverse populations, including those in resource-constrained settings.

Cross-platform interoperability is crucial for integrating data from various devices and applications. Currently, many wearables and apps operate within proprietary ecosystems, limiting their ability to share data seamlessly25,71,72. The use of Application Programming Interfaces (APIs) and Software Development Kits (SDKs) offers a practical solution73. APIs allow different software applications to communicate with one another, while SDKs enable developers to create compatible tools and features for existing platforms59,68,73. For example, Apple HealthKit and Google Fit provide APIs that facilitate data integration from multiple sources, but broader adoption and further refinement are needed to ensure comprehensive interoperability73. However, caution is warranted when using data extracted from such APIs and SDKs. These data are often pre-processed by the platform providers, and changes in preprocessing algorithms over time can lead to discrepancies even in historical data74. For instance, identical data exported at different time points can yield different outputs due to back-end updates, as reflected in metadata or timestamp inconsistencies75. This highlights that data from such platforms are not truly raw and should be interpreted with transparency regarding preprocessing pipelines and limitations74. Recent consensus guidelines have emphasised the importance of understanding data provenance and reproducibility in sensor-derived health data75,76. Developers can also leverage cross-platform frameworks such as React Native77,78, which allows the use of JavaScript to build apps for both iOS and Android while maintaining high performance through integration with native components78. Similarly, Flutter79, a toolkit developed by Google, enables developers to create applications for both operating systems, supporting consistent functionality and user experience across platforms80.
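One pragmatic response, sketched below under our own assumptions rather than any vendor's schema, is to store provenance metadata alongside every value exported from a platform API, so that back-end reprocessing can be detected by comparing repeated exports of the same measurement.

```kotlin
import java.time.Instant

// Illustrative provenance wrapper; field names are assumptions, not part of
// any vendor SDK. The goal is to make back-end reprocessing detectable.
data class ProvenanceRecord(
    val metric: String,            // e.g., "resting_heart_rate"
    val value: Double,
    val measuredAt: Instant,       // when the sensor reading was taken
    val exportedAt: Instant,       // when it was pulled from the platform API
    val sourceSdkVersion: String,  // SDK / API version reported at export time
    val vendorPreprocessed: Boolean = true
)

fun main() {
    val record = ProvenanceRecord(
        metric = "resting_heart_rate",
        value = 61.0,
        measuredAt = Instant.parse("2024-03-01T07:00:00Z"),
        exportedAt = Instant.now(),
        sourceSdkVersion = "healthkit-export-2.3"  // hypothetical version string
    )
    // Re-exporting the same period later and comparing records with equal
    // `measuredAt` but different `value` or `sourceSdkVersion` flags a
    // back-end preprocessing change rather than new physiology.
    println(record)
}
```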

Collaboration between industry and academia is also vital for promoting interoperability. Industry stakeholders, including device manufacturers and app developers, must align their technologies with agreed-upon standards59,73. Academic researchers can provide insights into the practical challenges of implementing these standards.

Inconsistent data transmission, storage and security protocols

The lack of robust data transmission and storage solutions presents critical challenges as well25,57,81,82. DP often involves the real-time transmission of substantial data volumes, which can overwhelm existing network infrastructures57. For instance, studies have shown that high-frequency data collection from wearable devices, such as electrodermal activity or heart rate monitors, can generate gigabytes of data per day, especially when combined with continuous geolocation tracking and other sensor data29,32,83. This high volume of data often exceeds the capabilities of low-bandwidth networks, resulting in signal loss and transmission failures84.

Inadequate storage systems, particularly those lacking scalability, struggle to handle the exponential growth of health-related data81. This challenge is compounded by insufficient encryption protocols, which leave sensitive health information vulnerable to breaches81,82. A case study in India on the use of mobile health apps found that insecure storage mechanisms led to unauthorised access to patient data, raising significant concerns about privacy and confidentiality85. Similarly, research on mental health monitoring systems noted that improper encryption during data transmission exposed sensitive behavioural and physiological data to interception, undermining both participant trust and the overall reliability of the research86,87.

To address these challenges, Kotlin Multiplatform (KMP), developed by JetBrains, offers a reliable solution for optimising data transmission88 and integration across multiple platforms89. KMP enables seamless data processing by allowing shared business logic while maintaining native performance for both iOS and Android90. Unlike traditional cross-platform frameworks, which may struggle with customising health data libraries, KMP ensures efficient data handling and security by supporting encrypted data transmission instead of raw data transfer89,90. Additionally, by reducing redundant code and improving database structures across platforms, KMP enhances scalability and interoperability90, making it a suitable choice for DP applications that require secure and high-performance cross-platform compatibility. Labfront, an alternative research-grade platform, offers a low-code environment designed specifically for collecting and analysing physiological and behavioural data from wearable sensors91. Key features include customisable survey tools, secure cloud-based data storage, real-time participant involvement monitoring, and built-in analytics capabilities91,92. Its intuitive design and minimal programming requirements make it particularly suitable for research teams with limited technical resources, enabling efficient data management across diverse study designs91. Choosing the appropriate solution depends on several factors such as device compatibility, sensor types and data granularity, security needs, network availability and budget.
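As a simplified illustration of encrypted transmission from shared Kotlin/JVM code, the sketch below wraps a serialised sensor payload in AES-GCM before upload; key management (e.g., a platform keystore) and the transport layer are deliberately omitted, and the helper names are ours rather than part of KMP or Labfront.

```kotlin
import javax.crypto.Cipher
import javax.crypto.KeyGenerator
import javax.crypto.SecretKey
import javax.crypto.spec.GCMParameterSpec
import java.security.SecureRandom

// Encrypts a serialised sensor payload with AES-GCM before it leaves the device.
// Key storage (Android Keystore / iOS Keychain) is deliberately out of scope.
object PayloadEncryptor {
    private const val GCM_TAG_BITS = 128
    private const val IV_BYTES = 12

    fun generateKey(): SecretKey =
        KeyGenerator.getInstance("AES").apply { init(256) }.generateKey()

    // Returns IV + ciphertext so the receiver can decrypt with the shared key.
    fun encrypt(key: SecretKey, payload: ByteArray): ByteArray {
        val iv = ByteArray(IV_BYTES).also { SecureRandom().nextBytes(it) }
        val cipher = Cipher.getInstance("AES/GCM/NoPadding")
        cipher.init(Cipher.ENCRYPT_MODE, key, GCMParameterSpec(GCM_TAG_BITS, iv))
        return iv + cipher.doFinal(payload)
    }

    fun decrypt(key: SecretKey, blob: ByteArray): ByteArray {
        val iv = blob.copyOfRange(0, IV_BYTES)
        val ciphertext = blob.copyOfRange(IV_BYTES, blob.size)
        val cipher = Cipher.getInstance("AES/GCM/NoPadding")
        cipher.init(Cipher.DECRYPT_MODE, key, GCMParameterSpec(GCM_TAG_BITS, iv))
        return cipher.doFinal(ciphertext)
    }
}

fun main() {
    val key = PayloadEncryptor.generateKey()
    val sample = """{"hr":72,"ts":"2024-03-01T07:00:00Z"}""".toByteArray()
    val blob = PayloadEncryptor.encrypt(key, sample)
    check(PayloadEncryptor.decrypt(key, blob).contentEquals(sample))
}
```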

Unstable network connectivity remains another major obstacle84, particularly in low- and middle-income country (LMIC) settings where DP applications may operate in environments with intermittent internet access. Offline data storage with periodic uploads can mitigate this issue by allowing sensor data to be stored locally on the device and transmitted to cloud servers only when a stable connection is available40. This approach reduces the likelihood of data loss and synchronisation issues40. Data compression techniques further enhance network reliability by reducing the size of transmitted data, conserving bandwidth and ensuring faster uploads93,94. Similarly, edge computing, the processing of data close to where it is generated rather than on distant cloud servers, enables real-time data processing directly on the device, reducing reliance on cloud-based computation93,95. By analysing and filtering data locally before transmission, edge computing minimises network dependency, enhances processing efficiency and strengthens privacy protections95.
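The sketch below illustrates, under simplified assumptions, how offline buffering and compression can be combined: samples accumulate locally and are uploaded as a gzip-compressed batch only when connectivity is available. The `isOnline()` and `upload()` hooks are placeholders for the platform's connectivity check and HTTP client.

```kotlin
import java.io.ByteArrayOutputStream
import java.util.zip.GZIPOutputStream

// Buffers samples locally and uploads a gzip-compressed batch only when the
// network is available. isOnline() and upload() are placeholder hooks.
class OfflineUploader(
    private val isOnline: () -> Boolean,
    private val upload: (ByteArray) -> Boolean
) {
    private val buffer = mutableListOf<String>()  // JSON lines awaiting upload

    fun enqueue(sampleJson: String) {
        buffer.add(sampleJson)
    }

    // Called periodically (e.g., by WorkManager on Android or a background task on iOS).
    fun flush() {
        if (buffer.isEmpty() || !isOnline()) return
        val compressed = gzip(buffer.joinToString("\n"))
        if (upload(compressed)) buffer.clear()    // keep data locally if the upload fails
    }

    private fun gzip(text: String): ByteArray {
        val out = ByteArrayOutputStream()
        GZIPOutputStream(out).use { it.write(text.toByteArray()) }
        return out.toByteArray()
    }
}

fun main() {
    val uploader = OfflineUploader(
        isOnline = { true },
        upload = { blob -> println("uploading ${blob.size} compressed bytes"); true }
    )
    repeat(3) { uploader.enqueue("""{"hr":70,"seq":$it}""") }
    uploader.flush()
}
```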

Integrating energy-efficient sensing and network-independent data transmission methods is crucial for improving the sustainability and scalability of DP. Future research should continue refining these approaches to ensure that sensor-based monitoring remains reliable, particularly in diverse and resource-constrained settings.

Addressing user-centred challenges

User engagement and participation

Engaging individuals in DP initiatives requires thoughtful design that aligns with users’ values, preferences, and lived experiences.

Maintaining user participation and engagement with wearables and apps is a persistent challenge. Many individuals choose to disengage not only because of discomfort, forgetfulness, or lack of perceived benefit96, but also because the technology may offer little actual value to them, or because it presents physical, cognitive or contextual barriers97,98. Importantly, these decisions are not mere lapses in behaviour but informed acts of disengagement shaped by unmet needs or unaddressed concerns99. For example, studies have reported that participants discontinued using digital tools because of sensory discomfort, lack of clarity about the utility of the data, and difficulty navigating interfaces that were not adapted to their cognitive or literacy levels100. Similarly, barriers such as low digital literacy, mental health symptoms (e.g., fatigue, paranoia) and physical limitations can significantly impact a person’s ability to engage meaningfully with DP tools101,102,103.

Technical interruptions, such as forgetting to charge or wear the device82,96,104, are often secondary to more complex issues, such as poorly designed interfaces, lack of support or training, and limited adaptability to users’ routines and needs105. Such disruptions break the continuity of data collection and can compromise the reliability of insights derived from the data25,106.

To address these challenges, future DP efforts must prioritise participatory design, where users are involved throughout the development process to ensure accessibility, relevance and inclusivity. Designs must accommodate varying cognitive abilities, language proficiency and sensory needs to support equitable engagement107.

Privacy concerns

Privacy concerns represent a significant barrier to the adoption of DP technologies3,82,108,109. These methods inherently involve the collection of highly sensitive data, including behavioural, physiological and contextual information, which can provide deep insights into an individual’s health and habits108,109. However, the very richness of this data also heightens users’ fears about misuse, unauthorised access and potential breaches of confidentiality82,109. A significant driver of these concerns is the lack of transparency in data usage policies108. Many users are unclear about how their data is collected, processed and shared, leading to scepticism and distrust109, and although some companies have made progress in simplifying their privacy documentation, these policies often remain difficult to interpret53,108,109. Fitbit’s privacy policy, for instance, outlines core purposes for data use such as improving product functionality, personalising recommendations, enhancing cybersecurity and fulfilling legal obligations (e.g., responding to subpoenas or law enforcement requests)110.

Ensuring robust data security is a significant issue as well108. Cybersecurity threats, including data breaches and hacking, can expose sensitive information to malicious actors109. For instance, real-time data transmissions from wearables or smartphone apps are particularly vulnerable to interception if not encrypted appropriately82,108,109. Furthermore, once collected, storing large datasets securely remains a challenge, especially for smaller organisations or research groups with limited resources57,82. These challenges highlight the urgent need for stronger regulatory frameworks, user-centric privacy safeguards and transparent data governance to build trust and ensure the ethical implementation of DP.

Building trust is critical for addressing privacy concerns in DP104,109. Developers of DP tools must implement stringent privacy safeguards, including end-to-end encryption, secure data storage protocols and multi-factor authentication to protect user data109. Further efforts are needed to develop decentralised machine learning techniques, such as federated learning, that keep raw data on users’ devices. Transparent and user-friendly consent processes are equally vital, empowering users to make informed decisions about their participation108. Regular communication about data usage, anonymisation efforts and security measures can further reassure users about their data’s safety81. Ultimately, addressing privacy concerns is not just a matter of compliance with legal standards; it is an imperative to respect and protect the rights and autonomy of those who entrust their data to DP technologies108,109. Building trust through stringent privacy safeguards, clear consent protocols, transparent AI and anonymisation techniques is essential.
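To make the decentralised option concrete, the sketch below implements the core weight-averaging step of federated learning for a toy linear model: only parameter vectors leave the device, weighted by local sample counts. Secure aggregation and differential privacy, which a real deployment would add, are omitted here.

```kotlin
// Minimal federated averaging sketch: each device trains locally and shares
// only weight vectors, never raw sensor data. Weighting by local sample count
// follows the standard FedAvg formulation.
data class ClientUpdate(val weights: DoubleArray, val sampleCount: Int)

fun federatedAverage(updates: List<ClientUpdate>): DoubleArray {
    require(updates.isNotEmpty())
    val dim = updates.first().weights.size
    val totalSamples = updates.sumOf { it.sampleCount }.toDouble()
    val global = DoubleArray(dim)
    for (update in updates) {
        val weight = update.sampleCount / totalSamples
        for (i in 0 until dim) {
            global[i] += weight * update.weights[i]
        }
    }
    return global
}

fun main() {
    val updates = listOf(
        ClientUpdate(doubleArrayOf(0.2, -0.5), sampleCount = 100),
        ClientUpdate(doubleArrayOf(0.4, -0.1), sampleCount = 300)
    )
    // Expected: 0.25*[0.2, -0.5] + 0.75*[0.4, -0.1] = [0.35, -0.2]
    println(federatedAverage(updates).toList())
}
```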

Ethics must be a guiding principle in the development and deployment of DP technologies. The vast amounts of personal and sensitive data collected pose significant privacy risks3,109. Transparent and user-centric data governance models are crucial to building trust82,109. These models should include clear consent mechanisms, robust anonymisation protocols and stringent data encryption standards81,82. Using research-focused platforms such as Labfront bypasses commercial manufacturer servers entirely: data from compatible devices (e.g., Garmin wearables) is transmitted directly to Labfront’s secure cloud, enabling compliance with regulatory and institutional requirements such as the Health Insurance Portability and Accountability Act (HIPAA) and Institutional Review Board (IRB) protocols91,92. These design choices reduce exposure to commercial data pipelines and provide researchers with greater control over data governance and participant confidentiality.

Cultural and socioeconomic barriers

The accessibility of sensor-based technologies varies widely across cultural and socioeconomic contexts111. High costs of devices and limited digital literacy in underserved populations can restrict participation in DP initiatives4,82,111. For example, in a study conducted in the United States among participants at a federally qualified health centre, cost was ranked relatively low among the barriers to adopting wearable technologies112, but this might not be the case in other settings. Furthermore, cultural attitudes towards technology and data sharing may affect willingness to engage4,113. Tailoring interventions to specific populations and ensuring equitable access are critical for global scalability.

Cultural sensitivity is a vital ethical consideration114. Mental health is profoundly influenced by cultural beliefs and practices, necessitating the co-design of interventions with local stakeholders114,115. Engaging communities in the development and testing of tools ensures their relevance, acceptability and effectiveness104. Additionally, emphasising transparency and inclusivity throughout the design and deployment process can foster trust among end-users and stakeholders alike104,109.

Lack of standardisation and strategies

Absence of universal protocols

The lack of universal protocols for data collection, processing and analysis represents a significant challenge for DP25. This absence of standardisation hinders the scalability, reproducibility and generalisability of research findings, creating barriers to the broader adoption and implementation of these technologies25,71. Currently, DP studies exhibit wide variability in key parameters such as data formats, sampling rates, device types and quality metrics71,72. Data formats, in particular, vary across platforms and devices, complicating efforts to integrate and analyse datasets from multiple sources. This lack of consistency undermines the comparability of findings across studies, limiting opportunities for meta-analyses and cross-contextual validation25,71,72.

The variability also poses challenges for interpreting results and replicating studies. Without standardised protocols, it becomes difficult to determine whether differences in findings are due to true variations in the phenomena being studied or methodological inconsistencies25,71. Moreover, the absence of universal guidelines creates inefficiencies in data sharing and collaboration57,71. Researchers must often invest significant time and resources in reformatting and preprocessing data to make it compatible with their tools and methodologies. This inefficiency not only slows down the pace of advancement108,116,117 but also increases the risk of errors and misinterpretations71,72.

Addressing these challenges requires the establishment of standardised guidelines and frameworks for DP. These protocols should define best practices for data collection, including optimal sampling rates and acceptable device specifications, to ensure consistent data quality. They should also provide guidance on data preprocessing, feature extraction and analytic methods, enabling more reliable and comparable outcomes. Ultimately, establishing universal protocols is not merely a technical necessity but a foundational step toward building trust in DP as a reliable and scalable tool for advancing personalised health and precision medicine. By promoting consistency, reproducibility and transparency, standardised guidelines can unlock the full potential of this emerging field.

Variability in methodologies across studies and platforms

Methodological variability is a significant obstacle in the advancement of DP, particularly in sensor-based data collection25. The use of diverse applications, devices and analytical approaches across studies amplifies these challenges, limiting the generalisability and scalability of findings25,71,72. Different studies often employ distinct apps and platforms for data collection, each with its own set of capabilities, data formats and compatibility requirements71,72. This diversity can result in inconsistencies in the types and quality of data captured.

Pre-processing steps, which are critical for preparing raw data for analysis, further contribute to variability24,71,72. Different studies adopt diverse techniques for handling missing data, outlier detection and noise reduction25. For instance, some research may use imputation methods to fill gaps in data, while others may discard incomplete data altogether, potentially biasing results71,72. Variations in feature extraction approaches, such as the choice of time windows or signal processing algorithms, further complicate cross-study comparisons25,72. The challenges extend into the machine learning pipeline: variability in model selection, training protocols and evaluation metrics can lead to divergent findings, even when analysing similar datasets71,72,94. Additionally, the choice of algorithms, ranging from traditional statistical models to complex prediction models, can introduce further inconsistencies25.
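The sketch below makes two of these choices explicit for a heart rate series: missing samples are filled by mean imputation and features are computed over fixed-length windows. Both the window size and the imputation rule are arbitrary here, which is precisely the kind of undocumented decision that hampers cross-study comparison.

```kotlin
import kotlin.math.sqrt

// Illustrates two common sources of methodological variability: how missing
// samples are imputed and how windowed features are computed. The window
// length and mean imputation are arbitrary choices, not recommendations.
fun imputeMissing(series: List<Double?>): List<Double> {
    val observed = series.filterNotNull()
    val fallback = if (observed.isEmpty()) 0.0 else observed.average()
    return series.map { it ?: fallback }
}

data class WindowFeatures(val mean: Double, val sd: Double)

fun extractFeatures(series: List<Double>, windowSize: Int = 60): List<WindowFeatures> =
    series.chunked(windowSize).map { window ->
        val mean = window.average()
        val variance = window.map { (it - mean) * (it - mean) }.average()
        WindowFeatures(mean = mean, sd = sqrt(variance))
    }

fun main() {
    val raw = listOf(72.0, null, 75.0, 74.0, null, 71.0)  // heart rate series with gaps
    val features = extractFeatures(imputeMissing(raw), windowSize = 3)
    println(features)  // two windows of mean / standard deviation
}
```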

Collaborative efforts to align methodologies and share best practices are essential to overcome these challenges. Standardised protocols for data collection should prioritise interoperability across apps and devices, ensuring that data from multiple sources can be seamlessly integrated. Establishing guidelines for preprocessing steps, such as unified approaches to handling missing data and feature extraction, would help reduce variability and improve the comparability of datasets. In the machine learning domain, adopting shared evaluation frameworks and benchmarking practices would promote consistency. Researchers could benefit from using open-source platforms and repositories to share pre-trained models, annotated datasets and pipelines.

Strategies for standardisation

The advancement of DP as a reliable and scalable field requires robust strategies for standardisation across data collection, processing and analysis methodologies71. Standardisation ensures consistency, reproducibility and interoperability, ultimately enhancing the generalisability and utility of findings. This section explores three key strategies for achieving standardisation: developing universal frameworks, promoting cross-platform interoperability, and leveraging pilot projects for validation.

Developing universal frameworks

Establishing universal frameworks is foundational for standardising DP. Examples from other fields, such as Open mHealth and HL7 FHIR (Fast Healthcare Interoperability Resources), provide valuable blueprints118. Open mHealth offers standardised schemas for health data, allowing developers to integrate diverse datasets seamlessly118,119. HL7 FHIR, widely used in clinical informatics, standardises the exchange of electronic health records across healthcare systems, ensuring data compatibility and accessibility120. Adopting similar frameworks for DP can address the variability in data formats, sampling rates and quality metrics that currently hinder the field118.

Proposals for universal data formats, protocols and reporting standards are essential. A universal data format would specify how data from wearables, apps and other devices should be structured, annotated and stored, making it easier to integrate and analyse datasets from different sources118,120,121. Protocols should define best practices for data collection, such as recommended sampling frequencies and minimum data quality thresholds, ensuring that studies generate comparable and reliable data. Reporting standards would ensure transparency in methodologies, making it easier for researchers to replicate studies and evaluate findings118.
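As a purely illustrative example, loosely inspired by Open mHealth-style schemas but using field names of our own rather than any official specification, a source-agnostic data point might bundle value, unit, timing, provenance and sampling rate into a single annotated record:

```kotlin
import java.time.Instant

// Illustrative, source-agnostic data point. Field names are assumptions; the
// point is that value, unit, timing and provenance travel together.
data class StandardDataPoint(
    val schemaId: String,          // e.g., "heart-rate"
    val schemaVersion: String,     // schema version for reproducibility
    val value: Double,
    val unit: String,              // units stated explicitly, never implied
    val effectiveStart: Instant,   // when the measurement applies
    val effectiveEnd: Instant,
    val deviceModel: String,       // acquisition provenance
    val samplingRateHz: Double?    // null if the source does not report it
)

fun main() {
    val point = StandardDataPoint(
        schemaId = "heart-rate",
        schemaVersion = "1.0",     // hypothetical version
        value = 68.0,
        unit = "beats/min",
        effectiveStart = Instant.parse("2024-03-01T07:00:00Z"),
        effectiveEnd = Instant.parse("2024-03-01T07:01:00Z"),
        deviceModel = "generic-wrist-wearable",
        samplingRateHz = 1.0
    )
    println(point)
}
```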

Global implementation and standardisation

Expanding the impact of DP globally requires coordinated efforts in standardisation and implementation. Standardisation provides a foundation for large-scale, cross-cultural research by ensuring consistency in data collection, storage and analysis24,72. Harmonising methodologies across studies will improve reproducibility, reduce variability and foster meta-analyses that yield generalisable findings71.

Standardisation efforts should focus on creating universally accepted guidelines and protocols. For instance, adopting unified metrics for data quality, sampling rates and feature extraction can help bridge methodological gaps122. Platforms such as Open mHealth and HL7 FHIR offer promising frameworks that can be adapted for DP118,119.

Moreover, global implementation necessitates culturally adaptive solutions. Collaborations between researchers, policymakers and local communities can ensure that DP tools align with cultural norms and address linguistic and literacy barriers114,123. For example, apps designed for specific regions could incorporate local languages, cultural references and intuitive visual cues, promoting user engagement and participation. Equitable access is a pressing concern in global implementation114.

In addition to technical harmonisation and culturally adaptive solutions, collaboration between industry and academia is crucial to achieving true cross-platform standardisation. Industry stakeholders, such as device manufacturers, OS developers and app creators, must commit to aligning their products with emerging universal standards for data interoperability, transparency and security. Without such alignment, even the most robust academic standards will face limited uptake. Conversely, academia brings domain expertise, ethical oversight and implementation experience that can guide responsible technology design. Initiatives such as Open mHealth and HL7 FHIR exemplify how cross-sector cooperation can produce frameworks that are both technically sound and practically scalable71,118. Ongoing dialogue, co-creation and shared governance between sectors are essential to ensure that DP tools are interoperable, equitable and responsive to end-user needs5,67,124.

Pilot projects and validation studies

Pilot projects and validation studies play a critical role in demonstrating the feasibility and benefits of standardisation efforts125. Case studies of successful standardisation initiatives can provide actionable insights and serve as models for broader implementation126. For instance, the RADAR-CNS (Remote Assessment of Disease and Relapse – Central Nervous System) project has successfully integrated data from multiple wearable devices and apps, demonstrating the potential for standardised data collection and analysis in the context of mental health research124.

Iterative testing is essential to refine protocols and address practical challenges125,126. Pilot studies should test the compatibility of proposed data formats and protocols across different devices and platforms, identifying any gaps or inconsistencies59,71,126. Validation studies can assess the reliability and accuracy of standardised methods in real-world settings, ensuring that they meet the needs of researchers, clinicians and participants alike126. These studies also provide opportunities to incorporate user feedback, ensuring that standardised approaches are practical and user-friendly104,126.

Outlook

Sensor-based data collection for DP represents a transformative approach to monitoring mental health and other conditions2,3,127. However, realising its full potential requires addressing critical challenges, including technical limitations25, user compliance82,104, privacy concerns82,109 and the lack of standardisation71. These barriers not only hinder the scalability and reliability of DP but also limit its adoption in diverse contexts, from high-resource settings to low- and middle-income countries.

Among these challenges, the absence of standardised methodologies stands out as a fundamental issue71,72. Variability in data formats, sampling rates and analytic techniques across studies creates inconsistencies that undermine reproducibility and comparability25. Without universal protocols, the field risks perpetuating fragmentation, slowing progress and reducing the generalisability of findings71,72. Standardisation offers a pathway to overcome these obstacles by fostering interoperability, enhancing data quality and enabling large-scale, cross-cultural research.

The role of standardisation extends beyond technical considerations. It has the potential to bridge divides between diverse stakeholders, including researchers, clinicians, policymakers and industry leaders. By adopting shared frameworks and open-source platforms, the field can facilitate collaboration and knowledge exchange, driving innovation and inclusivity71. Initiatives like Open mHealth and RADAR-CNS provide valuable blueprints, highlighting the feasibility and benefits of standardised approaches118,119,121,124.

The path forward calls for collective action. Researchers must work alongside technology developers to ensure that tools are interoperable and user-friendly. Policymakers should prioritise funding for standardisation initiatives and establish regulatory frameworks that promote transparency and equity. Industry players, including device manufacturers and app developers, must align their technologies with standardised guidelines to maximise their impact. Equally, end-users—patients, caregivers and community members—must be engaged throughout the development process to ensure solutions are culturally relevant and ethically sound.

By leveraging technological innovations, fostering global collaboration and embedding ethical principles, the field can evolve to provide scalable, reliable and culturally sensitive solutions for mental health care. In conclusion, sensor-based DP holds immense promise for advancing personalised health care and mental health interventions. By addressing the challenges of data collection and embracing the critical role of standardisation, the field can unlock its transformative potential. Collaborative efforts to establish universal frameworks will not only enhance the reliability and scalability of DP but also ensure its benefits are equitably distributed across populations. This shared vision will pave the way for a future where DP becomes an integral tool in improving global health outcomes.