Introduction

The Internet of Medical Things (IoMT) connects wearable and implantable sensors to gateways that interface with cloud platforms (e.g., AWS, Microsoft Azure, Blynk Cloud) for data storage and analytics. This ecosystem leverages technologies such as AI, Big Data, Blockchain, and advanced wireless networks (e.g., 5G and beyond)1,2. According to Acumen Research, IoMT sensor market revenue is projected to grow from 6,193 million USD in 2022 to 11,973.24 million USD by 2030 at a CAGR of 7.6%3. Healthcare is set to evolve into Healthcare 5.0, encompassing telehealth, telesurgery, and virtual clinics enhanced by augmented and virtual reality4,5,6, and will further benefit from distributed and explainable AI, blockchain, 6G, AR/VR, and the metaverse to improve real-time patient monitoring and telepresence7. Ultimately, this will create an interconnected healthcare system that leverages resources intelligently to enhance the quality of care and improve patient outcomes8.

Despite these advances, current IoMT-assisted remote patient monitoring solutions predominantly rely on centralized cloud-based processing, which suffers from high latency, increased energy consumption, and vulnerability under intermittent connectivity. This limitation motivates a more decentralized approach that brings computation closer to the data source. IoMT in Healthcare 5.0 requires resilient cloud-based architectures; however, owing to computational bottlenecks and high end-to-end latency at the cloud, research has moved towards edge- and fog-based analytics9. Processing elements are therefore placed close to the sensor nodes, forming a hierarchical architecture in which the fog layer connects networking devices (gateways and switches) for analytics. The fog layer acts as the resource manager between the cloud and the edge computing nodes that sit closest to the physical sensors10. Moreover, the COVID-19 pandemic has introduced requirements for quick, seamless, and ultra-responsive health monitoring and decision frameworks for critical patients11. In IoMT networks, the fog/edge nodes face bottlenecks owing to low power, limited storage, and resource constraints. There is thus a general requirement to make quick decisions during network disconnections at cloud/fog nodes so that timely help can be provided in emergencies. Dew Computing (DC) therefore forms a viable solution to assist IoMT with the support of fog/cloud computing analytics12.

DC shifts the computing paradigm towards the end device, where tightly-coupled microservice operations are supported to give quick decisions with extremely low latency. With DC, a Roof Computing (RC) layer can be integrated to provide end-to-end resource and security support to the IoMT ecosystem13,14. The RC layer acts as a gateway between the dew and fog layers and ensures quick data processing and collection from dew client nodes. Further, security principles can be integrated at the RC layer, which provides secure and privacy-preserving access to IoMT data15. This is crucial, as medical data contains sensitive attributes, and its disclosure or alteration might create life-threatening effects. The RC layer also minimizes the amount of data transmission and reduces the infrastructure build cost of the Cloud Computing (CC) nodes. The RC layer can be built as a service (software) on top of the DC layer or as a hardware device (access point or gateway) node. The RC layer allows data aggregation from multiple dew devices, and specific user queries are forwarded to the fog layer if not resolved at the RC layer itself. A similar hierarchy is followed for the fog-cloud system. Thus, the DC-RC link forms a scalable near-fog-cloud ecosystem to support resilient and scalable operations at low latency in IoMT setups. Furthermore, the RC layer serves as a critical intermediary by forwarding the preprocessed data and local intelligence gathered at the dew layer to the fog layer, where more complex analytics are performed. The fog layer, with its enhanced computational resources, refines this data to detect anomalies and execute decision-making tasks that exceed the processing capabilities of the roof layer. Subsequently, essential insights and aggregated data are transmitted to the cloud, where heavyweight models provide long-term analysis and storage. This integrated flow across the RC, fog, and cloud layers ensures low latency, enhanced security, and overall system resilience in the IoMT ecosystem.

The choice of computing mechanism depends on the application requirements and constraints. Table 1 presents a comparative analysis of different computing paradigms in real-world scenarios. The far-edge (dew-fog-cloud) computing system is suitable for applications that require low latency and moderate bandwidth, such as real-time monitoring and control systems. The near-edge (dew-roof-fog-cloud) computing system is suitable for applications that require higher bandwidth and low to moderate latency, such as healthcare, IoT, and federated analytics. The dew-edge-cloud computing system is suitable for applications that require low latency and high bandwidth, such as cloud gaming and high-performance computing.

Table 1 Analysis of various computing mechanisms in applicative scenarios.

For IoMT applications, the dew-roof-fog-cloud computing system is the best fit. IoMT applications require low latency and high bandwidth for real-time data processing and analysis, which the dew and roof layers can provide. Additionally, security and resiliency are crucial for IoMT applications, and these requirements can be met by the fog and cloud layers, respectively. Furthermore, energy consumption is a critical consideration for IoMT devices. The dew-roof-fog-cloud computing system can minimize energy consumption by processing and analyzing data locally at the dew and roof layers before transmitting it to the fog and cloud layers for further analysis and storage. Therefore, the dew-roof-fog-cloud computing system is the best choice for IoMT applications, providing the optimal balance of energy efficiency, security, and resiliency.

Thus, the DC layer hierarchically communicates with the CC layer via the roof-fog layers. The CC-RC-DC operation opens up exciting avenues for distributed data accessibility at the local device, even with no Internet availability16. Dropbox support is an example of a DC layer, where a user carries out offline operations whose results are transferred to the CC server upon reconnection. Thus, DC extends the reach of cloud servers through an independent and hybrid computing behavior, which is self-adaptive and extremely scalable, even in the presence of heterogeneous offline IoMT nodes. In the case of IoMT, the DC layer can be installed on lightweight Arduino and Raspberry Pi devices, where critical indicators are monitored offline, and remote monitoring and analytics are carried out at the CC layer. However, owing to the federated nature of the DC layer, dew intelligence (AI analytics at the dew)17,18 requires an adaptation towards data-sharing challenges and regulations. In such cases, federated learning is the optimal choice, and tiny models are instantiated on the roof layer19. The locally learned gradients are shared to update the global models in the cloud. Thus, it leverages dual support of independence and collaborative learning, where local data is not shared.

Research motivation

In our proposed system, we implement an innovative strategy that leverages the interaction of cloud, roof, and dew layers, with robust support provided by the fog layer. The dew layer creates a localized, distributed environment for data acquisition, processing, and analysis in collaboration with the fog layer, thereby reducing latency and lowering bandwidth demands. The roof layer, in turn, supplies cloud-like services (such as high-speed connectivity, substantial processing power, and extensive storage) to the dew layer, ensuring that it can access necessary resources without losing its localized advantages. Furthermore, fog computing is deployed at the network edge to perform real-time data processing and analysis, which speeds up response times and minimizes the volume of data sent to the cloud.

Novelty

IoMT setups forward massive sensor data to the fog layer for analytical support. Due to the heterogeneity of end devices and link variability, variable delays in data aggregation make accurate time-based analytics difficult. Possible solutions based on layered fog nodes10, federated edge learning19,20, and streaming analytics21 have been proposed; still, they are not robust enough to handle the variability and velocity of ingested data. The services thus switch to best-effort mode, which is not viable for mission-critical healthcare IoMT setups. Therefore, in this work, a framework named DeW-IoMT is presented that addresses these challenges and performs operations with low latency. Our framework is constructed hierarchically, where the dew-roof-fog-cloud computing paradigm is proposed for IoMT. The DC layer supports distributed resource management, while the roof layer provides data security. This approach offers greater flexibility and user control with ultra-low latency, high mobility, and data security. Additionally, the RC layer integrates security services and forwards quick resource requirements to the dew layer, ensuring resilience against attacks. Furthermore, the DC layer can be fine-tuned to support patient-specific needs, resulting in a more personalized healthcare system. Our proposed framework offers a reliable and accurate solution for mission-critical IoMT setups, paving the way for a smarter and more efficient healthcare system.

Article contributions

The research contributions of the article are presented as follows.

  • We propose a hierarchical architecture, DeW-IoMT, to support remote patients with quick, scalable, and dependable critical support.

  • The patient healthcare indicators (pulse, heart rate, and electrocardiogram signal) are recorded at the dew node (Arduino Uno board). They are communicated to the alert module through a serial connection, where critical indicators are monitored and compared against the threshold values.

  • The performance of the proposed framework is evaluated for parameters like response time, energy dissipation, and bandwidth usage against recent approaches to indicate the framework’s efficacy.

Layout

The article is divided into four sections. Section “Related work” surveys the state-of-the-art (SOTA) approaches of cloud, fog, edge, and dew computing in IoMT. Section “DeW-IoMT: the proposed framework” presents the schematics of the proposed DeW-IoMT scheme, discussing the framework, its components and connections, and the testbed setup. Section “DeW-IoMT: performance evaluation” presents the performance evaluation of the proposed scheme, and finally, section “Conclusions” concludes the article with the future scope of the work.

Related work

Recently, many schemes have been proposed to leverage improved healthcare services in IoMT ecosystems. These schemes include computing paradigms like cloud, fog, edge, and dew computing for task processing, analytics, management, resource allocation, and offloading. Table 2 presents a comparative analysis of the different schemes/frameworks, where we compare our proposed scheme with the existing SOTA approaches.

Table 2 Comparative analysis of the proposed framework with existing SOTA frameworks.

Approaches for job allocation31,34,35 and fog node selection16,32,40 are the most widely discussed in healthcare setups, where critical indicators are analyzed to make relevant predictions. However, flexibility and sound decision-making remain difficult. Karmakar et al.25 proposed a biosignal monitoring scheme, named MedGini, for the Internet of Health Things (IoHT), with emphasis on time- and space-efficient dissemination of healthcare data. To this end, the authors exploited IoT-based wireless connectivity, where the data was temporarily stored on local dew nodes and later synchronized with the cloud servers. The DC-CC synchronization process is optimized through the use of the Gini index and Shannon entropy, which remove erroneous signals and outliers from the collected data. The scheme was compared for cost and network parameters on real-time collected data.

Authors in26 proposed a Convolutional Neural Network (CNN)-based framework to predict the patient’s mental state based on behavioral psychology. The analysis was done at the dew layer; hence, bypass connections were used to keep the CNN model lightweight. This also ensured optimized gradient propagation in the CNN, and the results were evaluated on the FER-2013 dataset. Another dataset, CK+, was designed and used, and the model reported an accuracy of 96.12%. Authors in41 discussed optimal function placement in serverless computing to optimize resource efficiency and delay, covering approaches like ML, heuristics, and local models. Serverless functions can be integrated at the dew layer to optimize resource and instruction usage. In42, the authors discussed improper data offloading in serverless computing systems and different offloading schemes to minimize resource consumption and latency. Afaq and Manocha27 proposed a DC model for diarrhea prediction, where severity values were captured and analyzed through a probabilistic weighted Naïve Bayes algorithm. This classified the abnormality conditions, and the output was sent to a multi-scaled Gated Recurrent Unit (GRU) to analyze the correlation between the patient’s eating habits and environmental conditions. The model was compared for precision, recall, and accuracy; the GRU achieved a precision of 93.26%, and the precision of Naïve Bayes was reported to be 97.15%. Decision-making was kept at the dew layer for serious patients, and monitoring was done at the cloud layer.

Poonia et al.28 proposed a three-layered (dew-fog-cloud) healthcare monitoring framework, named CONFRONT, which aided preliminary protection against COVID-19 damage to the human body. The dew layer made the scheme scalable and allowed low-cost sensor wearables to be used with fast uptime. For high computational requirements, fog offloading was supported, which resolved most queries, and only highly demanding computations were forwarded to the cloud layer. Ghosh and De24 proposed a DC architecture for device-to-device (D2D) communication scenarios in wireless networks. To this end, the authors proposed a coalition game in which devices operate cooperatively with the dew storage layer in the event of disconnections. The proposed scheme reduced delay by \(\approx\) \(28\%\) and energy consumption by \(12\%.\) Jazayeri et al.43 introduced a fog computing (FC)-based IoT architecture, termed mobile fog, that can offload data to the cloud based on the number of executable modules in operation. It forms a best-fit schedule to reduce latency among tasks and sequentially decides the placement of tasks for execution (at the mobile fog or cloud layer). In44, the authors discussed local data processing and content caching mechanisms at edge nodes based on metrics such as traffic and available bandwidth. Jeyaraj et al.45 analyzed the performance of IoT applications in which dew-cloud layers provide resources. The work discussed resource provisioning, load balancing, and workload allocation in heterogeneous networks. Gusev46 proposed an edge-assisted offline dew storage server that allows end devices to communicate for resource management. To this end, an AI-driven classifier is designed to identify critical tasks on the edge and dew server. A use case of cardiac prediction from data collected by wearable sensors is presented. Catalani et al.47 presented a deep learning model for IoMT wearables that monitors cardiac problems. The model interprets electrocardiogram signals from ECG sensors, is fine-tuned, and is executed on an embedded processor. Further, a multicore architecture is considered to improve the classifier execution time, with the tradeoff measured in terms of latency and event detection ratio. Authors in29 proposed a scheme named Dew-Drone, which allowed Unmanned Aerial Vehicles (UAVs) to communicate data captured from smart cities48, industrial applications, and society 4.0 through a dew-cloud infrastructure. An opportunistic communication framework was realized for faster UAV response under intermittent network connectivity. The scheme discussed a hardware testbed, and UAV delivery accuracy was reported to be significantly high, with a success rate of 91.4%. The dew buffer size was kept at 150 MB, and the latency of UAV-to-ground communication was 20.04 milliseconds (ms), which is minimal. Mutlag et al.49 proposed resource optimization in dew-fog layers, where energy consumption was considered the cost factor to solve the optimization problem; the approach involved real-time data processing by sensor-based applications. Authors in50 proposed a low-powered transmission protocol to leverage the connectivity of near-edge nodes and conserve energy dissipation.

In51, the authors exploited the concept of neighborhood edge networks to solve computations at the edge nodes for IoT health architectures. The edge nodes communicated with fog nodes for mobility control, rapid response, and storage requirements. Zhao et al.23 presented a load-balancing scheme for fog servers in vehicular networks using a dew server. They introduced an innovative clustering algorithm with caching at the dew node, based on file requests and popularity. Testing showcased the strategy’s robustness, especially in maintaining cluster leadership and improving cache hit rates. Authors in39 proposed an edge-gateway-assisted scheme for body sensor networks, where a geo-distributed intelligence layer was constructed between the cloud and edge nodes. A fog layer was introduced to support offloading, which improved the scheme’s reliability. Sodhro et al.37 worked on the Quality of Service (QoS) of edge computing applications in healthcare setups. Authors in52 proposed the ubehealth framework, which used edge computing, IoT, and deep learning to address different smart-health issues; the architecture provided better network service quality. Deep learning was implemented to estimate network traffic and optimize data rates, caching, and routing choices. Authors in53 proposed a clustered approach based on the Hybrid Whale Particle Swarm Optimization (HWPSO) algorithm in an IoT-fog architecture.

In summary, while cloud and FC have been extensively explored to reduce latency and improve data processing in IoMT applications, DC has emerged to empower local device intelligence and offline capabilities. However, the lack of a dedicated RC layer in most existing solutions limits on-site data aggregation and secure resource management. Recent initiatives underscore the importance of a formalized RC framework to bridge the gap between edge and cloud processing54. Our proposed DeW-IoMT framework integrates these paradigms into a unified, hierarchical architecture that leverages the benefits of dew, roof, fog, and cloud computing. This comprehensive approach not only enhances performance and security but also paves the way for scalable and resilient IoMT deployments.

DeW-IoMT: the proposed framework

This section presents the schematics of the dew-roof-fog-cloud-assisted layered framework, named DeW-IoMT. Figure 1 presents the framework details.

Fig. 1

Dew-IoMT: The proposed framework (drawn using draw.io desktop software version v26.2.2 https://github.com/jgraph/drawio-desktop/releases/tag/v26.2.2).

The framework is designed specifically for timely response to critical patient health requirements; thus, latency-critical operations are executed locally at the dew layer. The suggested framework seeks to deliver a user-centric, adaptable, customized, and quick response to tasks. The details of the layers and associated components are presented as follows.

  • IoT devices-In the framework, we consider that there are n patients registered in a hospital, designated as \(P = \{P_1, P_2, \dots , P_n\}\). Any \(i^{th}\) patient \(P_i\) is equipped with k sensors (wearables or implants), denoted by \(\{s_1, s_2, \dots , s_k\}\). A sensor node (\(s_j\)) is presented as a three-tuple, \(s_j=\{s_{id_j},s_{type_j},s_{status_j}\}\), where \(s_{id_j}\), \(s_{type_j}\), and \(s_{status_j}\) denote the sensor ID, the type of object for which data collection takes place (such as temperature, pulse, etc.), and the status of the sensor (active or idle), respectively, with \(j \in \{1, \dots , k\}\); a minimal sketch of this tuple representation is given after this list. The sensor IDs are registered and mapped to \(P_i\) through a mapping function M(F). The data collected from the sensors, denoted as \(\{D_{s_1}, D_{s_2}, \dots , D_{s_k}\}\), is sent to the Arduino controller, which is at the dew layer DL. We consider 8-, 16-, and 32-bit microcontrollers, whose prime operation is to transmit data to the dew service node. Depending on the end application requirement, the sensors are networked via Wi-Fi, Bluetooth, and Long Range Radio (LoRa) networks. We assume that sensor nodes are placed near an associated hotspot node (\(H_{s}\)) for sending the data to the DC layer DL. The DL nodes might be hospital-owned nodes or rented from third-party providers in close proximity. Specifically, we use a 9 Volt (V) adapter, and a client-server communication is set up at DL.

  • Dew Layer-The dew layer DL operates as a distributed hotspot network access layer55. The hotspot range uses the 2.4 GHz unlicensed band; however, end-IoT devices \(D_{n}\) can operate in bands ranging from 3.6 to 9.60 GHz. The communication protocol is IEEE 802.11n, and for long-range communication, we shift to IEEE 802.11ax, with a data rate of 900 Mbps. In our work, we considered Wi-Fi, with an associated data capacity of 50 Kbps per communication channel. At DL, we consider q dew nodes, represented as hotspot nodes \(\{H_1, H_2, \dots , H_q\}\), which communicate with Raspberry Pi, AML-S905X, and UDOO Bolt V3 boards. Any \(H_q\) node can monitor any patient \(P_i\)'s critical indicators, denoted as \(C(P_i)\), and support disconnected/intermittent Internet connectivity to the cloud layer nodes.

  • Roof Layer-At the roof layer (RL), security and privacy mechanisms, such as encryption and role-based access control, are integrated. It ensures that data from the DL is securely transmitted to the fog-cloud layers. The sensitive data attributes are hidden, and the data is encrypted and protected from unauthorized access. In our framework, we consider that there are r roof layer nodes, denoted by \(\{R_1, R_2, \dots , R_r\}\). These nodes are connected to the dew layer nodes and the cloud layer nodes via wired or wireless networks, such as Ethernet or Wi-Fi. Each \(R_i\) node has a unique ID, denoted by \(r_{id_i}\), and is responsible for specific tasks, such as data encryption or access control. The roof layer nodes also have resources, such as computing power or storage, which can be allocated to the dew layer nodes as needed; these are denoted as \(R_{res_i}\), the resources available at node \(R_i\). To ensure privacy-preserving access control, we denote the set of authorized users for patient \(P_i\) as \(A(P_i)\), and the set of sensitive data for \(P_i\) as \(S(P_i)\).

  • Fog Layer-Collected data that are not immediately critical are forwarded to the fog layer FL. At this layer, we consider that the collected data contain f health indicators, denoted by \(I_{f}\) and mapped to \(s_{k}\); each indicator is normalized to the range [0, 1] and sent to the training classifier. We consider lightweight models for healthcare analysis at the fog, and the results are sent back to the DL for display. This saves critical time for analysis at the cloud node, as critical predictions are returned from the fog node itself.

  • Cloud Server-Once the network connection becomes stable, the collected data is sent to the cloud server for analytics. At the cloud layer CL, heavyweight analytics models are used. CNN models like ResNet-50 and Inception v3 are normally used for imaging analysis. Similarly, time-series data is analyzed through deep learning algorithms like Long Short-Term Memory (LSTM) units or GRUs. To improve the interpretability of the results, Explainable AI (XAI) techniques are used7. XAI techniques include Shapley Additive Explanations (SHAP) or Local Interpretable Model-Agnostic Explanations (LIME) explainers, which indicate which parameter \(I_{f}\) had more impact on the output prediction O(p).
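
The per-patient sensor bookkeeping described above can be captured with a small data structure. The following Python sketch is illustrative only; the class and field names (Sensor, register, patient_sensors) are our own and not part of the framework’s implementation.

```python
from dataclasses import dataclass

@dataclass
class Sensor:
    """Three-tuple representation of a sensor node s_j = (s_id, s_type, s_status)."""
    sensor_id: str          # s_id_j
    sensor_type: str        # s_type_j, e.g. "temperature", "pulse", "ecg"
    status: str = "idle"    # s_status_j: "active" or "idle"

# Mapping function M(F): registers sensor IDs against a patient P_i
patient_sensors: dict = {}

def register(patient_id: str, sensor: Sensor) -> None:
    """Register sensor s_j to patient P_i at the dew layer."""
    patient_sensors.setdefault(patient_id, []).append(sensor)

register("P_1", Sensor("s_1", "pulse", "active"))
register("P_1", Sensor("s_2", "temperature", "active"))
print(patient_sensors["P_1"])
```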

A sequence diagram interaction

Figure 2 represents the information flow between the different layers in the framework, based on the discussions presented in Fig. 1.

Fig. 2

Sequence Diagram: Communication between different entities in the framework (drawn using draw.io desktop software version v26.2.2 https://github.com/jgraph/drawio-desktop/releases/tag/v26.2.2).

In this diagram, the IoMT sensors continuously capture patient vital indicators, which are then transmitted to the Arduino-based dew layer via serial communication. The dew node executes functions such as Collect(), Validate(), and Critical_Case() to check if the sensed data exceed predetermined thresholds. Upon detecting abnormal values, the dew layer immediately triggers local alerts and forwards the data through its communication interface. The data then flows to the roof layer (implemented on a smartphone), where it is aggregated, secured using advanced encryption standard (AES) encryption and role-based access control, and subsequently forwarded to the fog layer. The fog layer performs lightweight analysis and, if necessary, passes critical information to the cloud server for deeper analytics and long-term storage. This coordinated, multi-layer approach ensures real-time response and resilience even under intermittent connectivity.

Components and testbed setup

In this subsection, we discuss the details of the hardware components used in the framework and the testbed setup. The details are presented as follows.

Components

The basic components used in the framework are as follows.

  • IoMT Sensors (Data collection)-We take plug-and-play IoMT sensors to record the patient vital indicators. We connect the sensors to our Arduino Uno R3 board to measure the patient’s heart rate, pulse, blood pressure, and temperature. The sensors are equipped with a noise-removal circuit, removing unwanted signals that might change readings. These sensors are connected to an Arduino board at pin number A0.

  • Jumper Wires (Connections)-Electrical cables connect the breadboard or circuit parts with other components without soldering. These jumper wires make all connections of our circuit.

  • Arduino Uno R3 (Dew Layer)-Arduino UNO R3 is a microcontroller board based on the ATmega328P, and it is designated as our dew layer DL. We read data from k sensors, and a matching-rule algorithm is stored on the board, which checks the measured \(I_{f}\) against threshold indicators, denoted by \(Th_{I}\). If \(I_{f} > Th_{I}\), an alert is raised, and the nearest hospitals are informed for intensive patient care. In case of non-serious conditions, the data is forwarded to RL through the uplink \(U_{rf}\), with data rate \(R_{U_{rf}}\). The communication is done through serial communication via the Universal Asynchronous Receiver-Transmitter (UART) protocol. Thus, the decision at the DL does not require any Internet connectivity, and in such cases, the data is stored in a dew database, denoted by Db(DL).

  • LCD Display (Alerts at dew)-The alerts are displayed on a \(16 \times 2\) LCD display, which displays 32 information characters. Each character is made of 40 pixels. Thus, \(40 \times 32\), which is 1280 pixels, is displayed for messages.

  • Raspberry Pi (Roof Layer)-Raspberry Pi is used as the roof layer device in our framework. It is a single-board computer with a Broadcom system-on-a-chip (SoC). It has a 64-bit quad-core ARM Cortex-A72 CPU, clocked at 1.5 GHz, which provides high processing power, and 4 GB of RAM, which provides sufficient memory to run complex applications. It has built-in wireless connectivity, including Wi-Fi and Bluetooth, based on IEEE 802.11 b/g/n/ac protocols. Additionally, it has Gigabit Ethernet for wired networking support. The received data is encrypted and is represented as \(E_{K}(R_{U_{rf}})\), where K represents the symmetric AES encryption key. For any user U, an access-control role \(Rl_{u}\) is defined along with a specific object (a resource the user can access), denoted \(O_K\). The access rights are defined as tuples that map \(Rl_{u}\) to \(O_{K}\). As an example, any authorized doctor (\(Rl_{u}\) = Doctor) can access the patient temperature object \(O_{K}\) = Temp in read mode only. The details can be shown as a tuple \(Tu = (Doctor, PatientTemp, Read)\). Based on the temperature, the critical actions can be written in the patient health record as \(Au =(Doctor, PatientTemp, Write)\), where explicit identifiers (name of patient, address) are hidden. This is appended with previous data, encrypted, and sent to the fog node for analysis. The offloading strategy of the Raspberry Pi is decided based on its computational power and storage capacity. When storage is low, data can be offloaded to the ESP32 (fog node) or Blynk Cloud for task computation by establishing a communication link directly with the fog-cloud nodes; a serial-bridge sketch of how the roof node ingests dew-layer data is given after this list.

  • ESP32 chip (Fog Node)-The ESP32 acts as the fog node in our framework and contains a Tensilica Xtensa 32-bit processor with 1-2 cores. The operating clock frequency is 240 MHz, allowing wireless connectivity based on the IEEE 802.11 b/g/n/i protocol. It has a ROM of 448 KB, fast RAM of 520 KB, and 8 KB for sleep modes. It can support external Static Random Access Memory (SRAM) with flash encryption. Thus, with sufficient memory and networking support, it can execute lightweight analytics and support communication with the cloud layer CL via the uplink \(U_{fc}\), with data rate \(D_{U_{fc}}\). It is connected with the DL (Arduino) using the TX2 and RX2 pins of the ESP32.

  • Blynk Cloud (Cloud Node)-The healthcare data received from the Arduino is sent to the Blynk Cloud, where monitoring and analysis are performed. The cloud node uses Blynk Cloud, an open-source cloud-based IoT solution. It can connect to any smartphone with Android version 4.2 or higher or iOS version 9 or higher, with built-in networking support.
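
To illustrate the UART/serial hand-off between the Arduino dew node and the Raspberry Pi roof node referenced above, the following Python sketch reads newline-terminated readings over the serial port and buffers them for encryption and forwarding. It assumes the pyserial package, the device path /dev/ttyACM0, and a comma-separated frame format, all of which are deployment-specific assumptions rather than details reported in the setup.

```python
import serial  # pyserial, assumed installed on the roof node

# Assumed serial device path; 9600 bps matches the UART rate quoted in the text
ser = serial.Serial("/dev/ttyACM0", baudrate=9600, timeout=2)

buffer = []  # readings awaiting encryption and forwarding to the fog node

while True:
    line = ser.readline().decode("utf-8", errors="ignore").strip()
    if not line:
        continue  # timeout with no data; the dew node may be offline
    # Assumed frame format from the dew sketch: "pulse,temperature,pressure"
    try:
        pulse, temperature, pressure = map(float, line.split(","))
    except ValueError:
        continue  # discard malformed frames
    buffer.append({"pulse": pulse, "temp": temperature, "bp": pressure})
    # In the full framework, the buffered data would now be AES-encrypted,
    # access-controlled, and forwarded (or offloaded) to the ESP32 fog node.
```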

Testbed setup

The testbed setup in Fig. 3 shows the connection of the pulse sensor (PS) with Arduino.

Fig. 3

Testbed setup of the proposed scheme (drawn using draw.io desktop software version v26.2.2 https://github.com/jgraph/drawio-desktop/releases/tag/v26.2.2).

  • The dew node is an Arduino Uno. The PS’s data pin is connected to the Arduino’s A0 pin, and the Arduino is connected to the Raspberry Pi via USB.

  • The roof node is a Raspberry Pi 3 Model B+. The DHT11 sensor’s data pin is connected to GPIO17 (pin 11), and the LED’s anode is connected to GPIO27 (pin 13) and its cathode to GND (pin 6). The Raspberry Pi communicates with the fog node via WiFi.

  • The fog node is an ESP32 chip. Its RX2 and TX2 pins are connected to GPIO15 and GPIO4 of the Raspberry Pi, respectively. The ESP32 is also connected to the BME280 sensor via I2C, with SDA connected to GPIO21 (pin 40) and SCL connected to GPIO22 (pin 38).

  • The cloud layer is Blynk Cloud, which receives data from the fog node via WiFi.

The control flow of the proposed framework

This subsection discusses the operational flow of the DeW-IoMT framework. Fig. 4 presents the details.

Fig. 4

The operational flow of the framework (drawn using draw.io desktop software version v26.2.2 https://github.com/jgraph/drawio-desktop/releases/tag/v26.2.2).

The framework considers that sensors (pulse, blood pressure, and ECG) are connected to \(P_{n}\), and the data is sent to the Arduino UNO R3. Based on the proposed monitoring and alert algorithm, depicted in Algorithm 1, the dew node measures the indicators and compares them against the normal threshold values of the health indicators.

The normal indicators are 60–120 beats/min for the heart, 120–140 mmHg for pressure, and 98.6 F for temperature. If the measured indicator exceeds the threshold, it is displayed on the LCD, and the data is forwarded to ESP32 via the UART protocol at 9600 bps.

The operational flow considers that the system is initially in an idle state. Once data is received from sensor nodes \(X =\{Pu, Te, Pr\}\), the dew node generates alerts based on the threshold conditions, displayed as messages on the LCD. If network connectivity is available, the data is forwarded to the fog layer FL, and lightweight analysis is done to present basic prediction results. Further, the data is sent to CL for constant monitoring. The structural elements at the dew layer are as follows.

  • Dew Manager: The dew layer executes the basic checking function Validate(), which matches the sensor data to the threshold values via the Collect() function. The critical conditions are covered via Critical_Case() function, and alerts are forwarded to the smartphone.

  • Communication Interface: Plug-and-play networking elements are connected via the Wi-Fi interface, which sends the data to the fog nodes via the Request() function.

  • Device Handler: At the fog layer, we consider two functionalities, the basic fog node and the fog controller, which form a request-response protocol via an IoT message-passing protocol such as Message Queue Telemetry Transport (MQTT) and manage the connection between the dew server block and the PS using message passing.

  • Dew Storage: In case of disconnection, the data is stored in a dew database, where the critical findings can be analyzed based on dew intelligence. Once reconnection occurs, the data is forwarded to the higher fog-cloud layers.

  • Roof Layer: Upon reconnection, we consider that the data stored in the dew database is forwarded to the roof layer, where AES encryption and role-based access control with access control lists (ACLs) are applied. Further, computationally intensive tasks are divided and sent to the roof for offloading, which computes and sends the results back to the dew layer.

Algorithm 1

DeW-IoMT: The proposed monitoring and alert system
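
A minimal Python rendering of the threshold logic in Algorithm 1 is given below, using the normal ranges quoted earlier (60–120 beats/min for heart rate, 120–140 mmHg for pressure, 98.6 °F for temperature). The function names mirror the dew-layer functions Collect(), Validate(), and Critical_Case(), but the bodies are our own simplification for illustration, not the deployed Arduino code.

```python
# Normal ranges quoted in the text; readings outside them raise an alert
THRESHOLDS = {
    "pulse": (60, 120),        # beats/min
    "pressure": (120, 140),    # mmHg (systolic)
    "temperature": (0, 98.6),  # deg F; alert when the reading exceeds 98.6
}

def collect(reading: dict) -> dict:
    """Collect(): gather the latest sensor reading X = {Pu, Te, Pr}."""
    return reading

def validate(reading: dict) -> list:
    """Validate(): compare each indicator I_f against its threshold Th_I."""
    violations = []
    for name, value in reading.items():
        low, high = THRESHOLDS[name]
        if not (low <= value <= high):
            violations.append((name, value))
    return violations

def critical_case(violations: list) -> None:
    """Critical_Case(): raise local alerts; on the real dew node this drives the
    LCD message, hospital notification, and UART forwarding to the ESP32."""
    for name, value in violations:
        print(f"ALERT: {name} = {value} outside the normal range")

reading = collect({"pulse": 135, "pressure": 128, "temperature": 98.2})
violations = validate(reading)
if violations:
    critical_case(violations)
else:
    print("All indicators normal; forward data to the roof layer")
```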

Communication Protocol between ESP32 and Blynk Cloud: To facilitate efficient and secure data transfer between the ESP32-based fog node and the Blynk Cloud, our system employs the MQTT protocol in accordance with industry standards. The ESP32 is configured as an MQTT client operating over IEEE 802.11n/802.11ax wireless networks, ensuring robust connectivity with low latency. It uses a unique authentication token provided by Blynk to establish a secure session, conforming to IEEE 802.11i (WPA2/WPA3) security standards at the link layer. Sensor data collected by Arduino-based dew devices is first aggregated and pre-processed at the roof layer, then forwarded to the ESP32. Here, the data is published to designated MQTT topics on the Blynk Cloud with a predefined Quality of Service (QoS) level (typically QoS 1 to guarantee message delivery). Simultaneously, the ESP32 subscribes to control command topics to receive real-time configuration updates. To further enhance security, all MQTT messages are encapsulated within a TLS/SSL channel, adhering to the cryptographic standards outlined in IEEE P1363. This combination of secure, standards-based communication protocols minimizes latency, ensures data integrity and confidentiality, and supports reliable real-time monitoring even in challenging network conditions.
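
The publish/subscribe flow described above can be approximated with the following Python sketch based on the paho-mqtt client. It is written for a desktop host for illustration rather than as ESP32 firmware, and the broker hostname, topic names, and token are placeholders, not Blynk’s actual endpoints.

```python
import json
import ssl
import paho.mqtt.client as mqtt  # assumed installed (pip install paho-mqtt)

BROKER = "broker.example.com"       # placeholder, not the real Blynk endpoint
AUTH_TOKEN = "<device-auth-token>"  # placeholder authentication token

def on_command(client, userdata, msg):
    """Handle real-time configuration updates pushed from the cloud."""
    print("command received:", msg.topic, msg.payload.decode())

# paho-mqtt 1.x constructor; version 2.x additionally takes a CallbackAPIVersion argument
client = mqtt.Client(client_id="esp32-fog-node")
client.username_pw_set("device", AUTH_TOKEN)
client.tls_set(cert_reqs=ssl.CERT_REQUIRED)    # wrap MQTT in a TLS/SSL channel
client.on_message = on_command
client.connect(BROKER, 8883)
client.subscribe("dew-iomt/commands", qos=1)   # control-command topic (assumed name)

# Publish an aggregated reading with QoS 1 to guarantee message delivery
payload = json.dumps({"pulse": 78, "temp": 98.2, "bp": 122})
client.publish("dew-iomt/vitals", payload, qos=1)
client.loop_forever()
```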

Algorithm analysis and practical use-case

The computational overhead of the proposed algorithm is primarily determined by the number of sensor inputs and the operations performed at each layer. The core procedures, such as sensor data collection, threshold validation, and offloading decision-making, exhibit linear time complexity, i.e., O(n), where n represents the number of sensors per patient. This lightweight processing is distributed across the dew and roof layers, thereby reducing the load on the fog and cloud nodes. Consequently, the framework scales efficiently with an increasing number of patients, as each node operates independently without causing system-wide bottlenecks. The inherent modularity of the architecture further facilitates scalability, making it well-suited for large-scale IoMT deployments.

To illustrate the practical applicability of the proposed framework, consider a hospital scenario where multiple patients are monitored concurrently. Each patient is equipped with wearable sensors that record vital signs such as heart rate, temperature, and blood pressure. These sensors send data to an Arduino Uno (the dew layer) located at the patient’s bedside. The dew node rapidly validates the readings; if any measurement exceeds the safe threshold, an immediate alert is generated and displayed locally. Simultaneously, the data is transmitted to a nearby smartphone (the roof layer), which aggregates and encrypts the information before forwarding it to an ESP32-based fog node. The fog node performs further analysis to detect anomalies and, if required, instructs the system to notify medical staff. For more complex evaluations, the processed data is then relayed to the cloud for comprehensive analysis and long-term storage. This example demonstrates how the DeW-IoMT framework effectively reduces latency and enhances system reliability, ensuring timely interventions in critical healthcare scenarios.

Monitoring and analysis of data on cloud

This study references a publicly available dataset created by us (Remote data monitoring file, Zenodo - https://doi.org/10.5281/zenodo.14500257). The dataset is titled Remote Data Monitoring File, and it provides synthetic sensor readings for heart rate, body temperature, and blood pressure (systolic and diastolic). Based on the dataset, we have analyzed the data on Blynk Cloud and created a minimalistic design of the basic components through a user-friendly dashboard, which performs real-time analytics on the readings to ensure seamless operations. Fig. 5 shows the created dashboard.

Fig. 5

Monitoring of vital indicators on Cloud.

The data captured from the IoT devices at the dew layer is sent to the cloud to provide real-time feedback. This feedback is plotted through appropriate visualization with matching colors and fonts. We have selected the bar chart to display the pulse rate information, as it provides an intuitive and effective medium to visualize and measure changes in data over time. The fonts are chosen to maximize readability, and the web page is designed to be responsive and optimized for device width (different views for desktops, mobiles, and palmtops). The layout and size of the visualization ensure that the dashboard is easy to navigate and interact with on all devices. The IoT device is authenticated on the dashboard via a tuple T, which contains template_id, template_name, and authentication_code. This allows Blynk Cloud to identify the device and allow role-based access to the data. Once the pulse data is displayed on the dashboard, cloud nodes perform analytics using AI models to analyze the data further. These analytics can provide insights into trends and patterns in the data that may be difficult to discern with simple visualizations, allowing users to make more informed decisions about their health and wellness.

Security and access control

In the proposed framework, we implement security and access-control mechanisms at the RL. We first present the possible attack scenarios on the proposed framework and then the security mechanisms built to mitigate them. The details are presented as follows.

Security attacks

The possible security attacks are as follows.

  • Data interception: This attack involves an adversary intercepting and accessing the data being transmitted between the dew and roof layers. The adversary can use this data for malicious purposes such as identity theft, financial fraud, or blackmail.

  • Data tampering: This attack involves an adversary modifying the data being transmitted between the dew and roof layers. The adversary can use this to inject false data into the system, leading to incorrect decisions or actions based on that data.

  • Denial-of-service (DoS): This attack involves an adversary disrupting the normal functioning of the system by flooding it with a large number of requests or by overloading the resources of the system.

Attack countermeasures: encryption and role-based access control

To prevent these attacks, the integrated security mechanisms are presented as follows.

  • Data encryption: Let E represent the encryption function that encrypts the data collected from the sensors. This function takes the plaintext data D as input and produces ciphertext data C as output, such that \(C = E(D)\). We implement this function using the Advanced Encryption Standard (AES) mechanism.

  • Role-based access control: Let U be the set of all users who have access to the patient data. Each user is assigned a role Rl from a set of predefined roles \(\{Rl_1, Rl_2, \dots , Rl_m\}\). Let \(P_i\) be the patient whose data is being accessed. We define a role-based access control policy as a function that maps each user \(u \in U\) and patient \(P_i\) to a set of permissions \(P_{u, i} = \{p_{u, i,1}, p_{u, i,2}, \dots , p_{u, i,k}\}\), where \(p_{u, i,j}\) is a binary variable that indicates whether user u is allowed to access the \(j^{th}\) sensor data of patient \(P_i\). We enforce this policy by using access control lists (ACLs) or by implementing it in the form of a role-based access control (RBAC) system. A combined sketch of the two countermeasures follows this list.
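
The sketch below combines the two countermeasures in Python: AES-GCM from the pycryptodome package stands in for the encryption function E(D), and a plain dictionary stands in for the RBAC policy. The key handling, roles, and permission names are illustrative assumptions, not the deployed configuration.

```python
from Crypto.Cipher import AES           # pycryptodome, assumed installed
from Crypto.Random import get_random_bytes

# --- Data encryption: C = E(D) with a symmetric AES key K ---
key = get_random_bytes(16)              # 128-bit AES key, managed at the roof layer

def encrypt(plaintext: bytes) -> tuple:
    cipher = AES.new(key, AES.MODE_GCM)
    ciphertext, tag = cipher.encrypt_and_digest(plaintext)
    return cipher.nonce, ciphertext, tag   # nonce and tag are needed for decryption/integrity

def decrypt(nonce: bytes, ciphertext: bytes, tag: bytes) -> bytes:
    cipher = AES.new(key, AES.MODE_GCM, nonce=nonce)
    return cipher.decrypt_and_verify(ciphertext, tag)  # raises ValueError if tampered

# --- Role-based access control: (role, object) -> permitted operations ---
policy = {
    ("Doctor", "PatientTemp"): {"Read", "Write"},
    ("Nurse", "PatientTemp"): {"Read"},
}

def is_allowed(role: str, obj: str, operation: str) -> bool:
    return operation in policy.get((role, obj), set())

nonce, ct, tag = encrypt(b'{"patient": "P_1", "temp": 98.2}')
if is_allowed("Doctor", "PatientTemp", "Read"):
    print(decrypt(nonce, ct, tag))
```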

Resource offloading: dew to roof

In this subsection, we discuss the offloading mechanism of memory-intensive tasks at the DL to RL. Let T be the set of all tasks at the dew layer, and M be the set of all memory-intensive tasks at the dew that can be offloaded to the roof layer. We assume the roof layer has more resources, such as power, I/O, and bandwidth, than the dew layer. Let \(Res_d\) and \(Res_r\) denote the available resources at the dew and roof layers, respectively, where \(Res_d < Res_r\).

Let us assume that \(DL_t\) is the execution time of task t at dew, and \(RL_t\) is the execution time at roof layer. The overall latency of the dew-to-roof offloading for a given task t is presented as follows.

$$\begin{aligned} L_t = \max (E(DL_t), E(DL_t \rightarrow RL_t)) \end{aligned}$$
(1)

where \(E(DL_t)\) denotes full execution of the task at the dew layer, and \(E(DL_t \rightarrow RL_t)\) denotes partial transfer of the task from the dew to the roof layer. The latter also includes the task-preparation and transfer time at the available dew bandwidth \(B_d\). Thus, the decision to execute locally or offload depends on the estimated offloading time \(T_{o}\).

The offloading mechanism aims to minimize the overall latency of executing all tasks. We define the objective function as follows.

$$\begin{aligned} \min \sum _{t \in T} L_t \end{aligned}$$
(2)

where \(L_t\) is the latency of task t. To offload a task \(t \in M\) from the dew layer to the roof layer, we use the following equation to estimate the offloading time \(T_o\).

$$\begin{aligned} T_o = \frac{Si}{B_r} + \frac{Si}{B_d} + \frac{Cmp_t}{R_r} \end{aligned}$$
(3)

where Si is the task data size, \(B_r\) and \(B_d\) are the available bandwidths at the roof and dew layers, respectively, \(Cmp_t\) is the computation requirement of task t, and \(R_r\) is the processing rate at the roof layer.

If the estimated offloading time \(T_o\) is less than the execution time \(DL_t\) of task t at the dew layer, the task is offloaded to the roof layer. Otherwise, the task is executed locally at the dew layer.

$$\begin{aligned} {\left\{ \begin{array}{ll} \text {if } T_o < DL_t, & \text {Offload task to roof layer} \\ \text {otherwise}, & \text {Execute task locally at dew layer} \end{array}\right. } \end{aligned}$$
(4)

Once a task is offloaded to the roof layer, it is executed there, and the results are sent back to the dew layer. We assume that the communication between the dew and roof layers is reliable and secure.
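
The offloading decision in Eqs. (3)–(4) can be sketched in Python as follows; the numeric values are placeholders chosen only to exercise the formula, not measurements from the testbed.

```python
def offloading_time(task_size, b_roof, b_dew, comp_req, r_roof):
    """Eq. (3): T_o = Si/B_r + Si/B_d + Cmp_t/R_r."""
    return task_size / b_roof + task_size / b_dew + comp_req / r_roof

def decide(task_size, b_roof, b_dew, comp_req, r_roof, dew_exec_time):
    """Eq. (4): offload when the estimated T_o beats local execution time at the dew layer."""
    t_o = offloading_time(task_size, b_roof, b_dew, comp_req, r_roof)
    return "offload to roof layer" if t_o < dew_exec_time else "execute locally at dew layer"

# Placeholder task: 0.5 Mb of data, 100/50 Mbps roof/dew bandwidth,
# 20 M instructions on a 400 MIPS roof processor, 0.9 s local execution time at the dew layer
print(decide(task_size=0.5, b_roof=100, b_dew=50,
             comp_req=20e6, r_roof=400e6, dew_exec_time=0.9))
```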

DeW-IoMT: performance evaluation

The performance evaluation is based on simulation and experimental analysis. In the simulation, we have considered the iFogSim simulator56 to set up the roof and fog environments. Table 3 presents the simulation parameters considered in the iFogSim simulator.

Table 3 Simulation parameters (iFogSim).

Simulation analysis

We have divided the simulation analysis into two parts, the evaluation of the roof layer and the fog layer, as they extend support to the dew layer nearby. The details are presented as follows.

Roof layer analysis

In this subsection, we validate the benefits of including the roof layer in the dew-fog computing paradigm. As suggested, the roof layer solves the immediate problem of offloading memory-intensive tasks from the dew layer to itself, thereby reducing communication latency. We enable the offloading mechanism on the Raspberry Pi (roof node) in the proposed scheme. It allows the roof node to take memory- and bandwidth-intensive tasks from the dew layer and perform them. When additional resources are required, it requests services from the fog controller node. This results in improved performance.

Fig. 6

Analysis of task offloading at roof layer.

Fig. 7

Attack probability analysis.

Table 3 lists configuration parameters that primarily vary the uplink and downlink bandwidth; our simulation scenario in iFogSim is designed to isolate the impact of network bandwidth on overall performance while keeping other parameters constant. In our implementation, the roof layer is modeled as a dedicated intermediary node that emulates the characteristics of a Raspberry Pi device. Specifically, the roof node is configured with processing power, memory, and encryption overhead parameters based on real-world specifications (e.g., a 1.5 GHz quad-core ARM Cortex-A72 CPU and 4 GB of RAM). Its communication interface is simulated using the IEEE 802.11n/802.11ax standards, ensuring that both its wireless connectivity and security features (such as TLS encryption) are accurately represented. This modeling allows the roof layer to aggregate data from the dew devices and perform preliminary processing before forwarding tasks to the fog layer.

Figure 6 presents a comparative analysis of task completion time. When the dew-fog architecture does not include the roof layer, the average task completion time is \(\approx\) 10 seconds with a standard deviation of \(\approx\) 2 seconds. The dew-roof-fog architecture has an average task completion time of \(\approx\) 7 seconds with a standard deviation of \(\approx\) 1.5 seconds. Thus, the task completion time in the dew-roof-fog architecture improves by 27.89% over the dew-fog architecture. This is mainly because most of the tasks are completed near the dew layer (at the roof), which reduces turnaround time.

Next, we present the mitigation of security attacks in the framework. We measure the attack probability \(A_p\) of the dew-fog paradigm against the dew-roof-fog paradigm. \(A_p\) measures the likelihood that the system (or network) would be compromised. We vary the number of malicious nodes n in the system for both architectures and measure the impact as the number of malicious nodes increases. We consider an exponential-decay-based function \(\alpha (1- e^{-\beta n})\) to compute \(A_p\), where \(\alpha\) is a constant that represents the maximum attack probability threshold of the system, and \(\beta\) represents the attack deviation factor of the previous round, which is denoted as follows.

$$\begin{aligned} \beta = \frac{A_{p-1} - N_{p-1}}{N_{p-1}} \end{aligned}$$
(5)

where \(A_{p-1}\) and \(N_{p-1}\) are the attack and normal probabilities of the previous round.

Figure 7 shows the plot. From the plot, we can observe that the attack probability for both architectures increases as the number of malicious devices increases. However, the dew-roof-fog-cloud architecture has a lower \(A_p\) than the dew-fog-cloud architecture for the same number of malicious devices. This implies that including the roof layer in the architecture provides better security against malicious attacks.
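
The curves in Fig. 7 follow the saturating form \(\alpha (1- e^{-\beta n})\) defined above. The short Python sketch below reproduces that computation; the \(\alpha\) and \(\beta\) values are arbitrary placeholders used only to show how a hardened path yields a lower curve for the same n, not the simulated values.

```python
import math

def attack_probability(n_malicious: int, alpha: float, beta: float) -> float:
    """A_p = alpha * (1 - e^(-beta * n)); saturates at alpha as n grows."""
    return alpha * (1 - math.exp(-beta * n_malicious))

# Placeholder parameters: lower alpha/beta model the hardened dew-roof-fog-cloud path
for n in range(0, 11, 2):
    ap_dew_fog = attack_probability(n, alpha=0.9, beta=0.4)
    ap_dew_roof_fog = attack_probability(n, alpha=0.6, beta=0.3)
    print(f"n={n:2d}  dew-fog A_p={ap_dew_fog:.3f}  dew-roof-fog A_p={ap_dew_roof_fog:.3f}")
```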

Simulation scenarios for iFogSim

The proposed paradigm has been simulated in iFogSim, and the performance is analyzed in terms of network usage, delay, and energy consumption. Here, we have considered three scenarios depending on the uplink and downlink bandwidth. The uplink bandwidth considered in Scenario 1, Scenario 2, and Scenario 3 is 90 Mbps, 100 Mbps, and 110 Mbps, respectively. The downlink bandwidth considered in Scenario 1, Scenario 2, and Scenario 3 is 140 Mbps, 150 Mbps, and 160 Mbps, respectively. The other parameters’ values57 are provided in Table 3. The cloud server has more resources than the fog node, so the cloud VM’s RAM size and CPU length are higher than those of the fog node.

Figure 8 presents the network usage of the proposed paradigm for the three scenarios. Network usage refers to the utilization of the system resources in terms of data transmission and reception over the network interfaces, measured in kilobytes (kB). It is observed that the network usage of the proposed paradigm is 9450–9470 kB.

Figure 9 presents the delay of the proposed paradigm for the three scenarios. The delay refers to the execution delay of the application, measured in milliseconds (ms). It is observed that the delay of the proposed paradigm is 24–25.5 ms.

Figure 10 presents the energy consumption of the proposed paradigm for the three scenarios. The energy consumption is determined by the power consumption of all hosts in a particular time frame and is measured in Megajoules (MJ). In our approach, the cloud-fog communication requires \(\approx\) 2.18–2.2 MJ of energy.

Fig. 8

Network usage of proposed paradigm.

Fig. 9

Delay of proposed paradigm.

Fig. 10

Energy consumption of proposed paradigm.

Experimental analysis

This section presents the details of the performance metrics, the experimental setup, and the results of the scheme. At the dew layer, we have installed appropriate packages to receive real-time health indicators. Table 4 presents the parameters considered in the experiment.

Table 4 Experimental parameters table.

The parameter values are taken from Medhi et al.58. We consider that a maximum of three dew nodes are connected to three fog nodes, respectively; the experiment is conducted 20 times, and the readings are averaged. The uplink bandwidth of the dew, fog, and cloud nodes is kept at 50 Mbps, 100 Mbps, and 100 Mbps, respectively. Network parameters like response time, energy dissipation, and bandwidth are the performance metrics in the experiment.

Performance metrics

To analyze the performance of DeW-IoMT, response time, energy dissipation, and network bandwidth are considered. As per the simulation table, we have fixed the network bandwidth. Hence, we discuss the effects of response time and energy dissipation only.

At the fog layer, recommendation analytics \(L_{rem}\) are performed on the data d. We consider two links, one between the fog and cloud, with data transmission rate \(R_{fc}\), and one between the fog and dew through the roof, with transmission rate \(R_{fd}\). The failure rates are denoted as \(f_{fc}\) and \(f_{fd}\). Thus, the latency of analytics served from the cloud layer is denoted as follows.

$$\begin{aligned} L_{cf} = (d/R_{fc})(1+f_{fc}) + (d/R_{fd})(1+f_{fd}) \end{aligned}$$
(6)

From the fog layer, the latency would be as follows.

$$\begin{aligned} L_{fd} = (d/R_{fd})(1+f_{fd}) \end{aligned}$$
(7)

with the trivial condition \(L_{fd} < L_{cf}\). In the proposed scheme, the dew layer can store cache updates in case of connectivity failure. It is noted that with the dew layer, the latency reduces by \(\approx\) 17–25% in comparison with fog-cloud architectures.
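
The latency comparison in Eqs. (6)–(7) can be reproduced with the short sketch below; the data size, link rates, and failure rates are placeholders rather than the experimental values.

```python
def cloud_latency(d, r_fc, r_fd, f_fc, f_fd):
    """Eq. (6): analytics served from the cloud traverse the fog-cloud and fog-dew links."""
    return (d / r_fc) * (1 + f_fc) + (d / r_fd) * (1 + f_fd)

def fog_latency(d, r_fd, f_fd):
    """Eq. (7): analytics served from the fog only traverse the fog-dew link."""
    return (d / r_fd) * (1 + f_fd)

# Placeholder values: 10 Mb of data, 200 Mbps fog-cloud link, 50 Mbps fog-dew link,
# and 5% / 2% failure rates on the fog-cloud and fog-dew links
d, r_fc, r_fd, f_fc, f_fd = 10.0, 200.0, 50.0, 0.05, 0.02
l_cf = cloud_latency(d, r_fc, r_fd, f_fc, f_fd)
l_fd = fog_latency(d, r_fd, f_fd)
print(f"L_cf = {l_cf:.3f} s, L_fd = {l_fd:.3f} s, reduction = {100 * (1 - l_fd / l_cf):.1f}%")
```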

For power analysis, we consider a scenario in which the dew node receives data from k IoMT nodes, denoted as \(D_{n}\). The number of data packets received from the \(i^{th}\) device is \(N_{i}\), where \(1 \le i \le k\). The power is divided into data-packet accumulation, analysis, and transmission requirements. The power consumption of a dew node receiving (accumulating) packets from the connected nodes is

$$\begin{aligned} Pw_a = \sum _{l=1}^{k} Pw_l \end{aligned}$$
(8)

where \(Pw_{l}\) is the power dissipated in receiving data from the \(l^{th}\) node. Similarly, the power dissipation formula can be used for analysis and transmission (denoted as \(Pw_{ana}\) and \(Pw_{trans}\), respectively). The total power at the dew node is shown as follows.

$$\begin{aligned} Pw_{total} = Pw_{a} + Pw_{ana} + Pw_{trans} + Pw_{dis} \end{aligned}$$
(9)

where \(Pw_{dis}\) denotes the power consumption required in the event of disconnection. It is noted that the total power consumption decreases by \(\approx\) 35% compared with cloud-based solutions and \(\approx\) 15% compared with fog-edge-based solutions.

Experimental results

The experimental and analytical results are compared for the chosen parameters against cloud-based59 and fog-edge-based60 solutions.

Fig. 11

Response time analysis.

Fig. 12

Energy dissipation analysis.

Fig. 13

Task size offloaded by Fog.

Fig. 14

Storage Cost at Cloud.

Figure 11 shows the response time computation against patient IoT setups (we consider three sensors attached to the patient’s body) for each node. Based on \(R_{fc}\) and \(R_{fd}\), \(L_{cf}\) and \(L_{fd}\) are computed. For 5-node setups, the recommendation response takes \(\approx\) 1.95 seconds (s) from the cloud, 0.6 s from the fog, and 0.495 s from the dew layer. Thus, there is a significant improvement of \(\approx\) 74.61% over the cloud and \(\approx\) 17.5% over fog computation at the dew layer for 5 IoMT node setups.

Figure 12 compares the dew, fog, and cloud layers concerning energy dissipation. As evident, we observe the power requirements as depicted in equation (9). Compared to the fog and dew layers, we observe increased power consumption at the cloud layer. For 6-node setups, the energy dissipation at the cloud is 1470 kJ, compared to 670 kJ at the fog and 570 kJ at the dew, which shows an improvement of 38.78% over the cloud and 14.92% over the fog layer.

We further analyze the number of tasks offloaded to the fog node. A comparative analysis is drawn against Fan et al.32, which considers the tasks t offloaded to the fog. Fig. 13 depicts the results. As depicted in section “Resource offloading: dew to roof”, our approach considers Si as the task size offloaded to the roof, with the condition \(T_o < DL_t\). We consider the percentage of data sent to the cloud in both approaches, with the underlying condition that the data generated for each task is constant (Cons). Thus, the amount of data sent to the cloud \(D_{c}\) can be approximated as a linear function of the number of tasks N(t), presented as follows.

$$\begin{aligned} D_{c} = Cons \times N(t) \end{aligned}$$
(10)

As indicated, the percentage of data is computed as follows.

$$\begin{aligned} D_{c} = 100 - \frac{T_o}{T_{total}} \times 100 \end{aligned}$$
(11)

where \(\frac{T_{o}}{T_{total}}\) denotes the fraction of tasks offloaded to the roof-fog layers. In our approach, an improvement of \(\approx\) 29.43% is achieved over the approach presented in32 because, due to the roof layer, the number of tasks sent to the cloud reduces, and thus \(D_{c}\) is reduced.

Next, we analyze the storage cost of the proposed roof-fog scheme against Poonia et al.28. In our scheme, most of the tasks are resolved at the roof-fog layers; hence, only a small portion of the tasks is forwarded to the cloud. This reduces the volume of task data executed and stored in the cloud, lowering the cloud storage cost. Fig. 14 presents the results. With 351 tasks, the storage cost at the cloud is 7986 MB in28, compared to 3987 MB in our proposed approach. On average, a significant reduction of \(\approx\) 33.56% is obtained in the task data stored in the cloud node. Thus, our proposed scheme provides a more efficient, cost-effective, and scalable solution for offloading IoMT tasks, leveraging processing power closer to the application.

Figure 15 shows the network bandwidth consumption for 5 IoMT setups, numbered as \(\{D_1, D_2, D_3, D_4, D_5\}\). For three nodes, the cloud bandwidth is 3552 Mbps, compared with 1788 Mbps for the fog and 861 Mbps for the dew nodes.

Fig. 15: Network bandwidth analysis.

Statistical analysis of performance metrics

To ensure the robustness of our experimental results, we conducted statistical hypothesis testing on key performance metrics, namely response time, throughput, and energy consumption. Our simulation experiments were repeated 20 times to capture variability, and the mean values along with the standard deviations were computed for both the baseline (traditional dew-fog or cloud-centric architectures) and the proposed DeW-IoMT framework (dew-roof-fog-cloud architecture). A two-sample t-test was then applied to each metric to compare the means between the two setups, with a significance level of \(\alpha =0.05\). Additionally, a one-way ANOVA was performed to evaluate the effect of varying network conditions on these metrics. The resulting p-values and F-values confirm that the improvements observed with the proposed framework are statistically significant.

Table 5 summarizes the statistical analysis. For instance, the average response time in the baseline architecture was found to be 10.00 seconds (± 2.00 s), while the proposed approach achieved an average response time of 7.00 seconds (± 1.50 s) with a p-value of 0.012 and an ANOVA F-value of 5.32. Similarly, throughput improved from 50.0 Mbps (± 5.0 Mbps) to 65.0 Mbps (± 4.0 Mbps) (p-value 0.008, F-value 6.45), and energy consumption decreased from 1470 kJ (± 100 kJ) to 1020 kJ (± 80 kJ) (p-value 0.005, F-value 7.10). These results demonstrate that the enhancements in the proposed DeW-IoMT framework are not due to chance, but are statistically significant.

Table 5 Statistical analysis of key performance metrics.

These statistical results affirm that the proposed DeW-IoMT framework significantly reduces response time and energy consumption while improving throughput, thereby substantiating its effectiveness and scalability for IoMT applications.
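As an illustrative sketch of how such tests can be run, the snippet below applies a two-sample t-test and a one-way ANOVA to synthetic samples whose means and standard deviations mirror Table 5; the drawn values are simulated and are not the authors’ raw measurements.

```python
# Illustrative hypothesis testing on synthetic samples (not the raw data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
runs = 20  # each experiment repeated 20 times

# Response time (s): baseline 10.00 +/- 2.00 vs proposed 7.00 +/- 1.50
baseline = rng.normal(10.0, 2.0, runs)
proposed = rng.normal(7.0, 1.5, runs)
t_stat, p_val = stats.ttest_ind(baseline, proposed)
print(f"t-test: t={t_stat:.2f}, p={p_val:.4f}")  # p < 0.05 -> significant

# One-way ANOVA across three hypothetical network-load conditions
low_load  = rng.normal(6.5, 1.2, runs)
med_load  = rng.normal(7.0, 1.5, runs)
high_load = rng.normal(8.0, 1.8, runs)
f_val, p_anova = stats.f_oneway(low_load, med_load, high_load)
print(f"ANOVA: F={f_val:.2f}, p={p_anova:.4f}")
```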

Formal security analysis

In section “Security and access control”, we highlighted the security attacks possible in the framework, namely data interception, data tampering, and DoS attacks. We counter these attacks at the RC layer, where the encryption function E encrypts data sent by sensor nodes via the AES algorithm, and role-based access control maps only authorized users to predefined roles61. This section presents formal proofs against these attacks to validate the claims.

Theorem 1

The proposed dew-roof-fog-cloud framework is secure against data interception attacks.

Proof

In this attack, an adversary intercepts and accesses the data transmitted over the dew and roof layers. Let D be the data collected from the sensors at the dew layer, and E(D) be the encrypted data under the AES encryption scheme with key K. Let M be the set of all possible messages an adversary can intercept, and C be the set of all possible ciphertexts an adversary can obtain. By the definition of AES encryption, we have \(E(D)=AES(D,K)\); that is, the ciphertext c is a deterministic function of the message m and the key K, so \(c= E(m,K)\). Thus, the probability of recovering a message can be written as \(Pr[M=m|C=c] = Pr[D=d|E(D)=c] = Pr[AES^{-1}(c,K)=d]\). Since AES is a secure encryption algorithm, the probability that an adversary without the key K can invert the ciphertext to recover the plaintext is negligible, and hence the interception attack is ineffective. \(\square\)

Theorem 2

The proposed dew-roof-fog-cloud framework is secure against data tampering attacks.

Proof

In this attack, let D and E(D) be the original and encrypted data, respectively. An adversary tampers with the data, represented as \(D'\), where \(E(D')\) denotes the encryption of the tampered data. To do so, the adversary first performs an interception attack on E(D) and modifies it to obtain \(E(D')\). As AES is a secure encryption algorithm, it is computationally infeasible to produce a valid \(E(D')\) without the encryption key K. Even if the adversary holds the encryption key K, it cannot modify the encrypted data E(D) to \(E(D')\) without being detected by the integrity checks in place. Thus, the system is secure against data tampering attacks. \(\square\)
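The confidentiality and tamper-detection properties claimed in Theorems 1 and 2 can be illustrated with a short sketch. The paper specifies AES with integrity checks; the sketch below assumes an authenticated mode (AES-GCM) and a pre-shared 128-bit key between the dew and roof nodes, which are implementation assumptions rather than the authors’ exact design.

```python
# Illustrative sketch only: AES-GCM at the dew/roof boundary, assuming a
# pre-shared 128-bit key K. GCM provides confidentiality (Theorem 1) and an
# authentication tag so that tampering with E(D) is detected (Theorem 2).
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.exceptions import InvalidTag

key = AESGCM.generate_key(bit_length=128)   # shared key K (assumed provisioning)
aesgcm = AESGCM(key)

def encrypt_reading(plain: bytes) -> tuple[bytes, bytes]:
    """Encrypt sensor data D; returns (nonce, E(D))."""
    nonce = os.urandom(12)                  # 96-bit nonce, unique per message
    return nonce, aesgcm.encrypt(nonce, plain, None)

def decrypt_reading(nonce: bytes, cipher: bytes) -> bytes:
    """Decrypt and verify integrity; raises InvalidTag if E(D) was tampered."""
    return aesgcm.decrypt(nonce, cipher, None)

nonce, c = encrypt_reading(b"pulse=72bpm")
tampered = c[:-1] + bytes([c[-1] ^ 0x01])   # adversary flips one ciphertext bit
try:
    decrypt_reading(nonce, tampered)
except InvalidTag:
    print("tampering detected, reading rejected")
```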

Theorem 3

The proposed dew-roof-fog-cloud framework is secure against DoS attacks.

Proof

In this case, let \(R_r\) be the available resources at the roof layer and \(T_r\) be the set of requested tasks. Let f(T) be the resource requirement of a task T, and let Prc(T) be the processing time of a task T. Suppose an attacker tries to launch a DoS attack by flooding the system with a large number of resource requests. However, since the attacker does not have the necessary privileges (owing to role-based access control), the requests are denied, and the attacker cannot consume any resources. Formally, let \(U_a\) be the set of users the attacker can impersonate, and let \(Rl_a\) be the set of roles the attacker can assume. Let \(M_a\) be the access matrix for the attacker, where \(M_{a, i,j}\) represents the set of privileges the attacker has for role j when assuming the identity of user i. Since the attacker does not have the necessary privileges, \(M_{a, i,j}\) is empty for all \(i \in U_a\) and \(j \in Rl_a\). Therefore, the attacker cannot execute tasks at the roof layer, and its resource requests are denied. Thus, the system is secure against DoS attacks. \(\square\)

Theorem 4

The proposed dew-roof-fog-cloud framework is secure against unauthorized access attacks.

Proof

In this attack, we assume a malicious entity attempts to steal credentials by forging (impersonating) another user to gain unauthorized access and perform unwanted actions. Our approach employs RBAC to manage user privileges. Let \(U_a\) be the set of unauthorized users and \(R_a\) be the set of unauthorized roles that the attacker can assume. Let \(M_a\) be the attacker’s access matrix, where \(M_{a,i,j}\) represents the set of privileges the attacker has for role j when assuming the identity of user i.

As RBAC is implemented, locks L on updating the matrix are granted only to authorized users. Thus, any attempt by an unauthorized user to update the access matrix to \(M'_a\) returns empty, and no data or resource can be manipulated in our approach. Formally, for any unauthorized user \(u \in U_a\) and unauthorized role \(r \in R_a\), \(M_{a, u, r} = \emptyset\). Therefore, the attacker cannot perform unauthorized actions within the system, ensuring the framework’s security against unauthorized access attacks. \(\square\)
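The deny-by-default access-matrix check used in Theorems 3 and 4 can be sketched as follows; the role names, privileges, and in-memory matrix M are illustrative placeholders, not the authors’ implementation.

```python
# Minimal sketch of the role-based access check assumed at the roof layer.
from collections import defaultdict

# Access matrix M[user][role] -> set of privileges; empty for unauthorized users.
M: dict[str, dict[str, set[str]]] = defaultdict(lambda: defaultdict(set))
M["nurse_01"]["caregiver"] = {"submit_task", "read_alert"}   # hypothetical entry

def authorize(user: str, role: str, privilege: str) -> bool:
    """Deny by default: a request consumes roof resources only if the
    (user, role) entry of the access matrix grants the privilege."""
    return privilege in M[user][role]

# An attacker flooding the roof under a forged identity hits an empty matrix
# entry, so every request is rejected before any resource f(T) is allocated.
print(authorize("nurse_01", "caregiver", "submit_task"))  # True
print(authorize("attacker", "caregiver", "submit_task"))  # False
```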

Security cost analysis

In this section, we analyze the proposed framework’s communication and security computation costs. Our framework considers that the roof layer performs AES encryption, which involves a symmetric encryption/decryption process. As indicated in Srinivas et al.62, the cost of a symmetric encryption/decryption operation is 0.0056 seconds. In our framework, the task t is sent from the dew to the roof layer, involving one round of encryption and the corresponding decryption; a second round occurs between the roof and fog layers, followed by a final round between the fog and cloud layers. Denoting the encryption operation as \(E_{op}\) and the decryption operation as \(E_{dop}\), the security computation cost is \(3E_{op} + 3 E_{dop}\), which is 0.0336 seconds. We compare the proposed scheme against Mukherjee et al.29, which uses asymmetric encryption for node communication. There, \(E_{op}\) requires an additional modular exponentiation \(M_{exp}\), which costs 0.0192 seconds, and a modular inverse \(M_{inv}\) is required at the receiver, which takes \(\approx\) 0.00264 seconds. Thus, the overall cost is \(E_{op} + M_{exp} + E_{dop} + M_{inv}\), which is 0.05544 seconds, and an improvement of 65% is obtained in security cost.

For the security communication cost, we consider the message exchanges between the different entities depicted in Fig. 2. The IoMT sensor nodes share health data with a 32-bit timestamp. The data is then passed to the dew node (Arduino Uno), which forwards it to the roof layer with a 32-bit random nonce. Finally, the data is passed over the IEEE 802.11 network, where each participating node attaches a digital signature that is verified at the receiver end; for real-time response, we use a 160-bit MAC value. Thus, the overall communication cost is \(C_{sn} + C_{dew} + C_{roof} + C_{fog} + C_{cloud}\), which is 32 + 32 + (32 + 160) + (32 + 160) + (32 + 160) bits, i.e., 640 bits. We compare the proposed communication cost with the schemes in25,28 and obtain an improvement of 128 bits, as those schemes include both a random nonce and a timestamp in each message, amounting to 64 bits at each link; this results in an additional overhead of 96 bits between the dew-fog and fog-cloud systems. Thus, our framework demonstrates an improvement of \(15\%\) in terms of communication cost.
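The per-layer bit tally above can be reproduced with the short sketch below; the field widths (32-bit timestamp, 32-bit nonce, 160-bit MAC) follow the text, while attributing the third 192-bit term to the fog layer is our reading of the formula.

```python
# Communication-cost tally in bits, following the field widths stated in the text.
TIMESTAMP, NONCE, MAC = 32, 32, 160

contributions = {
    "C_sn (sensor, timestamp)": TIMESTAMP,
    "C_dew (nonce)":            NONCE,
    "C_roof (nonce + MAC)":     NONCE + MAC,
    "C_fog (nonce + MAC)":      NONCE + MAC,   # assumed reading of the formula
    "C_cloud (nonce + MAC)":    NONCE + MAC,
}
print(sum(contributions.values()))  # 32 + 32 + 3*(32 + 160) = 640 bits
```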

Table 6 shows a comparative analysis of the security functions performed and validated in the proposed scheme against existing schemes.

Table 6 A comparative analysis of security functions against related schemes.

Limitations of the proposed framework

The proposed DeW-IoMT framework outlines the integration of the dew layer in IoMT setups for critical responses and actions. The roof layer supports the dew layer, and the roof-fog pairing provides resource management for the dew nodes. Only highly compute-intensive tasks (e.g., bulky models trained on large amounts of data) are sent to the cloud for analysis. However, there are inherent open challenges in terms of networking, resource management, and security and privacy, which limit the scheme’s effectiveness in real-world setups.

Firstly, the proposed scheme requires substantial changes to existing centralized fog/edge and cloud infrastructures, in terms of daily operations and control, to integrate the dew component. This conversion may be costly and time-consuming, increasing the overall deployment cost in highly scalable and dynamic environments and potentially requiring extensive training. Moreover, the current evaluation is based on simulation, and additional real-world experiments are necessary to capture the full complexity of operational environments. To address this, further research is focused on designing loosely coupled fog/edge and cloud computing paradigms, where services are handled close to the user level. With the rise of robust wireless communication and networking protocols, such networks can better handle intermittent connectivity or frequent disconnections. Another direction is a shift towards decentralized peer-to-peer networking, which allows more effective data transmission and processing between the layers.

Secondly, we present a resource offloading approach at the roof layer to support dew operations. Such a model might not be feasible for all types of tasks: modern systems handle heterogeneous medical data, which requires differentiated treatment, and viewing the entire workload as a homogeneous task component might not scale well to diverse applications. It is therefore imperative to design task evaluation and classification strategies that capture task semantics and decide which tasks to offload to the roof layer. In future work, we plan to develop dynamic task classification algorithms, possibly using explainable AI techniques, to refine offloading decisions based on real-time contextual data. The future scope also includes addressing resource constraints through efficient task and resource allocation algorithms, where explainable AI models interpret resource usage patterns and feed a machine learning model that dynamically allocates resources based on current and projected demand.

Finally, the scheme’s security and privacy mechanism is heavily centered on symmetric encryption and role-based access control. However, owing to the breadth of zero-day attacks, the vulnerability space has increased, and it is therefore important to design more resilient security and privacy mechanisms. Future research should consider integrating advanced cryptographic techniques such as homomorphic encryption and secure multi-party computation, which allow computations on encrypted data without decryption and preserve the privacy of sensitive patient attributes during data sharing among multiple nodes.

Conclusions

This paper presents the DeW-IoMT framework, which leverages a dew layer to quickly detect critical health issues by monitoring the patient’s pulse using an Arduino Uno and an LCD, even during disconnections. A connected roof layer enhances resilience by securely offloading data and reducing reliance on fog or cloud processing for urgent alerts, while continuous cloud monitoring provides long-term analytics. Simulation and experimental evaluations show significant gains in response time, energy efficiency, and network utilization compared to cloud-centric models, with the roof layer particularly reducing latency and bolstering security.

Future work aims to integrate additional sensors (e.g., temperature and ECG) for on-dew analysis, refine task offloading strategies between fog and cloud layers, and explore improved resource management and advanced security mechanisms to enhance framework adaptability in diverse, real-world scenarios.