Abstract
Fog computing extends cloud services to the edge of the network, enabling low-latency processing and improved resource utilization, which are crucial for real-time Internet of Things (IoT) applications. However, efficient task allocation remains a significant challenge due to the dynamic and heterogeneous nature of fog environments. Traditional task scheduling methods often fail to manage uncertainty in task requirements and resource availability, leading to suboptimal performance. In this paper, we propose DTA-FLE (Dynamic Task Allocation in Fog computing using a Fuzzy Logic Enhanced approach), which leverages fuzzy logic to handle the inherent uncertainty in task scheduling. Unlike conventional approaches, DTA-FLE introduces a hierarchical scheduling mechanism that dynamically adapts to real-time network conditions, optimizing task allocation to improve efficiency, reduce latency, and enhance overall system responsiveness. Through simulations using the iFogSim framework, we demonstrate that DTA-FLE outperforms conventional techniques in terms of execution time, resource utilization, and responsiveness, making it particularly suitable for real-time IoT applications within hierarchical fog-cloud architectures.
Introduction
The rapid proliferation of Internet of Things (IoT) devices has led to substantial growth in both the volume of data generated and the computational resources required to process it. Traditional cloud computing paradigms struggle to meet the low-latency and high-bandwidth requirements of real-time IoT applications due to the inherent distance between cloud data centers and end users1. As the number of IoT devices continues to grow, centralized cloud computing faces significant challenges in handling the sheer volume of data and providing timely responses. To address these challenges, fog computing has emerged as a promising solution by extending cloud services to the edge of the network, closer to the data sources2. This decentralized approach reduces latency and improves resource utilization, making it ideal for latency-sensitive applications. Fog computing enables data to be processed at or near its source, reducing the burden on centralized cloud servers and enhancing overall system efficiency.
One of the critical challenges in fog computing is task scheduling, which involves the allocation of computational tasks to the available resources in a way that optimizes specific objectives such as minimizing latency, maximizing resource utilization, or ensuring load balancing. Formally, task scheduling can be defined as the process of assigning a set of tasks \(T=\{t_1,t_2,\dots,t_n\}\) to a set of resources \(R=\{r_1,r_2,\dots,r_m\}\) such that the overall system performance is optimized according to predefined criteria3,4. Effective task scheduling is essential for enhancing the performance and efficiency of fog computing systems, especially in dynamic and heterogeneous environments where resource availability and task requirements can vary significantly5,6.
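As a toy illustration of this formal definition, the sketch below scores one candidate assignment of tasks \(T\) to resources \(R\); the task lengths, resource speeds, and the makespan criterion are illustrative assumptions, not the paper's model:

```python
# Toy formalization of task scheduling: a schedule maps each task in T to a
# resource in R, and candidate schedules are compared by a predefined
# criterion (here, makespan). All values are illustrative.

T = {"t1": 3000, "t2": 1500}   # task lengths in Million Instructions (MI)
R = {"r1": 1000, "r2": 1500}   # resource speeds in MIPS

def makespan(schedule):
    """Completion time of the busiest resource under a task->resource map."""
    load = {r: 0.0 for r in R}
    for task, resource in schedule.items():
        load[resource] += T[task] / R[resource]
    return max(load.values())

# Assigning the long task to the fast resource yields a makespan of 2.0 s.
print(makespan({"t1": "r2", "t2": "r1"}))  # 2.0
```

Different predefined criteria (energy, cost, deadline misses) would simply swap out the scoring function while keeping the same assignment structure.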
Task scheduling is a critical bottleneck in fog-cloud environments, where tasks must be completed within predefined deadlines. Traditional task scheduling algorithms often fall short of optimizing resource utilization and meeting real-time requirements. To address this challenge, novel approaches tailored to fog-cloud environments are needed3,7. The integration of fuzzy logic into task allocation mechanisms shows promise for enhancing system performance and responsiveness. Fuzzy logic can classify tasks based on multiple parameters, enabling more informed and adaptive task scheduling decisions in dynamic environments8,9. In summary, the integration of fog computing into IoT architectures offers solutions to the challenges posed by traditional cloud-centric approaches. However, efficient resource management and task allocation remain critical concerns6,10. Novel approaches, such as integrating fuzzy logic into task allocation mechanisms, show promise for addressing these challenges and improving the performance and responsiveness of real-time IoT applications in fog-cloud environments8,11.
Traditional task scheduling methods often fall short in managing the variability and uncertainty associated with resource availability and task requirements in fog computing. To overcome these limitations, we propose a novel approach called Dynamic Task Allocation in Fog computing using a Fuzzy Logic Enhanced approach (DTA-FLE). DTA-FLE leverages fuzzy logic to handle the imprecise and uncertain information inherent in fog environments. By incorporating fuzzy logic, our approach can dynamically adapt to changes in the network, providing a more robust and flexible solution for task scheduling. This is particularly crucial for real-time IoT applications that demand quick and efficient processing of data across hierarchical fog-cloud layers. In this paper, we explore the design and implementation of DTA-FLE, highlighting its benefits over traditional task scheduling techniques. We present comprehensive simulation results demonstrating that DTA-FLE not only improves task scheduling efficiency but also significantly reduces latency and enhances overall system performance. Our findings indicate that DTA-FLE is particularly suitable for real-time IoT applications, offering a scalable, reliable, and effective solution for dynamic task management in fog computing environments.
This paper makes several key contributions:
- Introducing a hierarchical fog-cloud IoT model tailored to heterogeneous environments.
- A DTA-FLE approach that uses fuzzy logic for task classification and optimized scheduling within each layer.
- Implementing the proposed approach using the iFogSim simulator and evaluating its performance based on makespan, execution cost, delay, and guarantee ratio.
The remainder of this paper is structured as follows: Section 2 provides an overview of relevant literature and research. Section 3 describes the hierarchical fog-cloud IoT framework. Section 4 discusses the proposed method. Section 5 reports the results of the experiments. Finally, Section 6 concludes the paper.
Related works
Task allocation in fog-cloud environments is a well-researched challenge due to the dynamic nature of resources and the need for real-time processing. Several studies have proposed solutions that incorporate optimization techniques, heuristic approaches, and artificial intelligence-based methods to improve task scheduling and offloading. Table 1 summarizes the reviewed works discussed in this section. Abdulazeez and Askar12 developed a three-layer fog computing architecture incorporating a fuzzy logic-based task scheduler for intelligent offloading decisions. Their approach leveraged Deep Q-Network (DQN) reinforcement learning to distribute workloads efficiently among IoT devices, fog nodes, and the cloud. This method significantly improved task execution by considering resource constraints and dynamic network conditions. To address mobility challenges in fog computing, Saif et al.13 proposed algorithms for assigning mobile IoT devices to edge gateways based on distance and bandwidth constraints. They also introduced a fuzzy logic-based task scheduling technique that optimizes security-aware task allocation, ensuring efficient processing within the fog layer. Their results showed that considering both bandwidth and proximity improves processing time compared to single-parameter approaches.
Husnain et al.14 focused on task scheduling for Vehicular Ad-hoc Networks (VANETs), using fuzzy logic to enhance response times while offloading tasks within the Internet of Vehicles. Their approach considered multiple metrics such as execution time, deadline, storage, central processing unit (CPU), and bandwidth utilization, achieving better performance in terms of task completion ratio and reduced network latency. Premalatha and Prakasam15 introduced an energy-efficient resource allocation technique (OEeRA) combining Minimal Cost Resource Allocation (MCRA) and Fault Identification and Rectification (FIR) algorithms. Their approach ensured optimal allocation of fog nodes and resource blocks while maintaining fault tolerance, leading to improved processing efficiency and reduced failures in IoT-fog systems. Javanmardi et al.16 proposed F-DTA, a fitness function designed for metaheuristic algorithms like Krill Herd Algorithm (KHA) and Particle Swarm Optimization (PSO). Using iFogSim2 simulations, their approach demonstrated superior execution time optimization compared to traditional metaheuristics. Kopras et al.17 tackled energy-efficient task allocation in fog computing by modelling it as a mixed-integer nonlinear programming problem. Their framework optimized the allocation of fog nodes at access points and compute nodes while minimizing energy consumption. Their results highlighted the advantages of distributing tasks across multiple fog nodes to balance computational loads and reduce rejection rates.
Matrouk et al.18 proposed MISSION, a mobility-aware scheduling and offloading framework. Their model integrated history-aware handovers, multi-criteria task classification, energy-aware allocation, and virtual fog nodes to handle dynamic workloads efficiently. The use of First Fitness-based Animal Migration Optimization (FFAMO) improved throughput and energy efficiency. Mokni et al.19 introduced a multi-objective optimization framework integrating scheduling, sequencing, and partitioning strategies using fuzzy inference and Multi-Agent Systems (MAS). Their model balanced user and provider needs while managing uncertainty in task scheduling decisions. Yadav and Vidyarthi20 developed a Fuzzy-Based task Offloading Technique (FBOT) for scheduling compute-intensive and data-intensive tasks across fog-cloud environments. Their method prioritized data-intensive tasks while ensuring reduced waiting times and optimal compute node selection, resulting in improved task execution efficiency. Hosseini et al.21 proposed PQFAHP, a priority-based scheduling algorithm integrating Analytical Hierarchy Process (AHP) and fuzzy logic. Their dynamic scheduling model optimized task execution by considering completion time, energy usage, RAM (Random Access Memory) availability, and deadlines, achieving superior performance in service level and response time. Shukla et al.22 combined two Multi-Criteria Decision-Making (MCDM) methods, TOPSIS (Technique for Order Preference by Similarity to Ideal Solution) and AHP, to rank fog, cloud, and end-device resources based on five key performance metrics: execution time, energy consumption, cost, resource availability, and processing speed. Their rank-based computation offloading algorithm significantly improved task distribution efficiency across hierarchical fog-cloud environments.
Several studies have explored key aspects such as resource management, optimization, and real-time decision-making. For instance, intelligent sensing and resource allocation strategies have been investigated to enhance distributed computing efficiency23. Distributed incentive mechanisms have been proposed to balance demand and communication overhead, which aligns with our approach to dynamic task placement24. Optimization techniques, including entropy-based feature selection, have been utilized for improved decision-making models25, while fuzzy-based methodologies have been employed to handle uncertainties in various applications26. Additionally, high-dimensional optimization methods have been introduced to enhance performance in complex computing environments27, and real-time decision-making frameworks have been developed for intelligent scheduling systems28. These studies highlight the growing importance of intelligent optimization techniques, reinforcing the relevance of our fuzzy logic-based task scheduling strategy in fog-cloud computing. In summary, these studies highlight the importance of adaptive and intelligent task allocation techniques in fog computing. Various approaches, including fuzzy logic, reinforcement learning, metaheuristics, and MCDM methods, have been explored to enhance efficiency, reduce latency, and optimize resource utilization. However, existing solutions often struggle with the real-time variability of fog environments. Our work builds upon these efforts by integrating fuzzy logic into a hierarchical fog-cloud model to improve dynamic task allocation and system performance in real-time IoT applications.
The fog-cloud computing framework
The four-layer fog-cloud framework is depicted in Fig. 1.
1. Sensing Layer: The sensing layer constitutes the foundational tier of the fog-cloud model, comprising a vast array of interconnected IoT devices equipped with diverse sensors and actuators. These devices are strategically deployed across various environments to capture real-time data from the physical world, including environmental parameters, human activities, and machine operations. Sensors embedded within IoT devices continuously monitor their surroundings, detecting changes in temperature, humidity, motion, light, and other relevant variables. The primary function of the sensing layer is to collect raw data from diverse sources and transmit it to higher layers for further processing and analysis29,30.
2. Low-Capacity Fog Layer (LCFL): Serving as an intermediary between the sensing layer and higher-level fog or cloud resources, the LCFL plays a pivotal role in processing and filtering incoming data streams. LCFL nodes are strategically positioned in close proximity to IoT devices, enabling rapid data processing and response while minimizing latency and bandwidth usage. These nodes are equipped with modest computational resources and storage capacity, sufficient for executing lightweight data preprocessing tasks such as data aggregation, filtering, and compression. By offloading computational burdens from resource-constrained IoT devices to LCFL nodes, the overall efficiency of the IoT ecosystem is enhanced, and network congestion is alleviated. The LCFL facilitates localized decision-making and real-time analytics, enabling timely responses to critical events and minimizing reliance on distant cloud resources for routine data processing tasks31,32.
3. High-Capacity Fog Layer (HCFL): Above the low-capacity fog layer, the HCFL comprises more robust computing nodes capable of handling intensive data processing and analysis tasks. HCFL nodes are strategically deployed in locations with high data traffic or where more sophisticated analytics are required, serving as an intermediate tier between the LCFL and cloud layers. These nodes possess greater computational power, memory, and storage capacity compared to LCFL nodes, enabling the execution of complex algorithms, machine learning models, and predictive analytics. The HCFL facilitates distributed data processing and edge intelligence, allowing for localized decision-making and real-time insight generation without the need for constant communication with distant cloud data centers. By distributing computational tasks across the LCFL and HCFL layers, the workload on cloud resources is reduced, resulting in lower latency, improved scalability, and enhanced reliability for IoT applications33,34.
4. Cloud Layer (CL): At the apex of the fog-cloud framework lies the CL, encompassing traditional cloud data centers with vast computational resources and storage capacity. Cloud data centers host virtualized infrastructure, including servers, storage systems, and networking equipment, accessible via the internet. The CL serves as the ultimate destination for processing, storing, and analyzing data generated by IoT devices and the LCFL and HCFL layers. Cloud resources offer unparalleled scalability, reliability, and accessibility, enabling the execution of resource-intensive tasks, large-scale data analytics, and long-term data storage. However, cloud-based processing introduces inherent latency due to data transmission over long distances, making it less suitable for real-time applications requiring immediate responses. Despite these challenges, cloud data centers play a critical role in supporting IoT deployments by providing centralized data management, scalable computing resources, and advanced analytics capabilities essential for deriving actionable insights and optimizing operational efficiency35,36.
Additionally, the system consists of four pivotal modules:
- Task Request Collector: Operating within the LCFL, the task request collector module serves as the primary conduit for aggregating task requests originating from a diverse array of interconnected smart devices. Its core function involves the seamless collection and transmission of these tasks to the subsequent processing stage, the task classifier module. Each task is accompanied by a unique identification number (ID) and crucial parameters such as Task Length (TLength) measured in Million Instructions (MI), Task data Size (TSize) in kilobytes (KB), and the maximum Response Time (maxRT) in seconds, ensuring efficient task management and allocation37.
- Task Classifier: Residing within the LCFL, the Task Classifier module assumes a pivotal role in orchestrating the distribution of tasks across multiple layers of the system architecture. Leveraging an advanced fuzzy inference system, it intelligently allocates tasks among the LCFL, HCFL, and cloud layers based on a comprehensive set of parameters38.
- Task Scheduler: With instances deployed across the LCFL, HCFL, and CL, the Task Scheduler module stands as the cornerstone of efficient task scheduling and allocation within the system. Task queues, received from the task classifier module, are processed and mapped to available VMs using the Dynamic Task Allocation (DTA) technique39.
- Resource Monitoring: The Resource Monitoring module assumes responsibility for the continuous surveillance and evaluation of available resources within the LCFL, HCFL, and CL. Operating on a periodic basis, it collects and analyzes resource statuses, providing invaluable insights to the task scheduler40.
Proposed method
This section proposes a fuzzy logic-based decision procedure to distribute tasks between the fog and cloud layers, together with a real-time task scheduling mechanism that selects the most suitable virtual machine for task execution. The proposed method, Dynamic Task Allocation in Fog Computing Using Fuzzy Logic Enhanced (DTA-FLE), operates in two stages. First, a fuzzy logic-based task categorization technique assigns tasks to the LCFL, HCFL, and CL. Second, the DTA technique schedules the tasks within each layer. Algorithm 1 provides pseudocode for the proposed method, and Table 2 summarizes the notations used in this study.
Fuzzy logic-based task scheduling mechanism
Based on the resource requirements of each task, time constraints, the maximum available resources in the fog layer, and the latency between the fog and cloud layers, the fuzzy decision-making algorithm determines whether a given task should be processed in the fog or offloaded to the cloud. If task \(t_i\) is selected to be executed in the fog layer, the algorithm computes the normalized minimum resource requirements for CPU, storage, and bandwidth. These normalized values help evaluate the feasibility of executing the task locally in the fog. Specifically, if task \(t_i\) is assigned to the fog, the minimum values of the task's resource consumption needs, in terms of CPU rate \(c_{\min}(t_i)\), storage \(s_{\min}(t_i)\), and bandwidth \(b_{\min}(t_i)\), are as follows:

$$c_{\min}(t_i)=\frac{ct(t_i)}{cc_{\max}},\qquad s_{\min}(t_i)=\frac{st(t_i)}{sc_{\max}},\qquad b_{\min}(t_i)=\frac{bt(t_i)}{bc_{\max}}$$ (1)
where \(ct(t_i)\) represents the required CPU cycles (processing capacity) for task \(t_i\), \(st(t_i)\) represents the required storage space for task \(t_i\), and \(bt(t_i)\) represents the required bandwidth for data transmission associated with task \(t_i\). Also, \(cc_{\max}=\max_{v_j\in V} cc(v_j)\) represents the maximum CPU rate that the fog layer offers. Similarly, the maximum bandwidth and storage capacity in the fog layer are denoted by \(bc_{\max}=\max_{v_j\in V} bc(v_j)\) and \(sc_{\max}=\max_{v_j\in V} sc(v_j)\), respectively.
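A minimal Python sketch of this normalization step, assuming hypothetical dictionary records for tasks and fog VMs (the field names are illustrative, not from the paper):

```python
# Normalized minimum resource requirements of a task (CPU, storage,
# bandwidth), each divided by the maximum capacity offered by any fog VM.
# Record layouts are illustrative assumptions.

def normalized_requirements(task, fog_vms):
    cc_max = max(vm["cpu"] for vm in fog_vms)        # max CPU rate in the fog
    sc_max = max(vm["storage"] for vm in fog_vms)    # max storage capacity
    bc_max = max(vm["bandwidth"] for vm in fog_vms)  # max bandwidth
    return {
        "c_min": task["cpu"] / cc_max,
        "s_min": task["storage"] / sc_max,
        "b_min": task["bandwidth"] / bc_max,
    }

fog_vms = [{"cpu": 200, "storage": 500, "bandwidth": 100},
           {"cpu": 150, "storage": 400, "bandwidth": 80}]
task = {"cpu": 50, "storage": 100, "bandwidth": 20}
print(normalized_requirements(task, fog_vms))  # {'c_min': 0.25, 's_min': 0.2, 'b_min': 0.2}
```

Values near 1 indicate a task that strains even the largest fog VM, which pushes the fuzzy decision toward offloading to the cloud.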
Fuzzy sets for input and output parameters are used in fuzzy logic systems to represent uncertain or imprecise information. For input parameters, a fuzzy set assigns a degree of membership between 0 and 1 to each possible value, indicating how strongly it belongs to the set. Similarly, for output parameters, fuzzy sets help determine the degree to which a particular outcome is associated with the given input conditions. This approach allows for more flexible and human-like reasoning, especially in complex or ambiguous environments where traditional binary logic falls short. The fuzzy sets for the input and output parameters are described in Table 3. Meanwhile, Fig. 2 shows the membership functions of the input and output fuzzy set variables.
Using Eqs. (2–6), the membership values for the fuzzy input parameters are computed. These equations define triangular and trapezoidal membership functions, which are common in fuzzy logic systems for representing uncertainty and linguistic terms (e.g., low, medium, high). In all equations, \(\mu(x)\) denotes the membership value, a real number in the interval [0, 1] representing the degree to which the input \(x\) belongs to a particular fuzzy set. Here, \(x\) is the crisp input value, i.e., the actual numerical value of a parameter (such as latency, cost, etc.).
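For concreteness, the following sketch implements generic triangular and trapezoidal membership functions of the kind Eqs. (2–6) describe; the breakpoint values below are illustrative, not the paper's calibrated parameters:

```python
# Generic triangular and trapezoidal membership functions returning a
# membership degree in [0, 1]. Breakpoints (a, b, c, d) are illustrative.

def triangular(x, a, b, c):
    """Rises linearly from a to a peak at b, then falls to zero at c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def trapezoidal(x, a, b, c, d):
    """Rises from a to b, holds 1.0 on [b, c], then falls to zero at d."""
    if x <= a or x >= d:
        return 0.0
    if x < b:
        return (x - a) / (b - a)
    if x <= c:
        return 1.0
    return (d - x) / (d - c)

print(triangular(0.25, 0.0, 0.5, 1.0))       # 0.5
print(trapezoidal(0.2, 0.0, 0.1, 0.4, 0.6))  # 1.0
```

Trapezoids are typically used for the boundary terms ("low", "high") and triangles for interior terms ("medium"), so that every crisp input has nonzero membership in at least one set.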
Using experimental data, we have created a fuzzy logic rule base of 243 rules based on five input variables and one output variable. Table 4 presents a sample of the rule base41. Because of its ease of use, the Mamdani Fuzzy Inference System (MFIS) has been adopted42,43. The Center of Gravity (CoG) defuzzification method is used to convert the output fuzzy sets into a single crisp value once the fuzzy output has been produced in accordance with the rules.
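The following miniature Mamdani-style inference sketches this mechanism with only two illustrative rules instead of the paper's 243, and a centre-of-gravity defuzzification collapsed onto singleton output centres; the membership shapes and output centres are assumptions for illustration only:

```python
# Miniature Mamdani-style inference: min for AND, max for OR, and a
# centre-of-gravity defuzzification over singleton output centres.
# Two toy rules stand in for the paper's 243-rule base.

def low(x):   # membership of a normalized value x in "low"
    return max(0.0, 1.0 - 2.0 * x)

def high(x):  # membership of a normalized value x in "high"
    return max(0.0, 2.0 * x - 1.0)

def infer_layer(latency, cpu_util):
    """Crisp output in [0, 1]; values near 0.25 favour fog, near 0.75 cloud."""
    w_fog = min(low(latency), low(cpu_util))      # Rule 1: low AND low -> fog
    w_cloud = max(high(latency), high(cpu_util))  # Rule 2: high OR high -> cloud
    if w_fog + w_cloud == 0.0:
        return 0.5  # no rule fires: indifferent
    return (w_fog * 0.25 + w_cloud * 0.75) / (w_fog + w_cloud)

print(infer_layer(0.1, 0.2))  # low latency, low load: near 0.25 (fog)
print(infer_layer(0.9, 0.8))  # high latency, high load: near 0.75 (cloud)
```

A full MFIS would clip each rule's output fuzzy set by its firing strength, aggregate with max, and integrate for the centroid; the singleton form above is the common discrete approximation.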
Let’s consider a practical example where a task requires scheduling in a fog computing environment. Suppose we have a task \(t_i\) that needs to be executed, and it requires 50 CPU cycles, 100 MB of storage, and 20 Mbps of bandwidth. The fog layer provides VMs with varying available resources, and we need to decide whether this task should be processed in the fog or offloaded to the cloud. First, using the fuzzy logic-based scheduling mechanism, we compute the normalized minimum resource requirements for the task based on the available resources in the fog layer. Let’s assume the maximum available CPU rate in the fog layer is 200 CPU cycles per VM, the maximum storage is 500 MB, and the maximum bandwidth is 100 Mbps. Using the normalization formulas for CPU, storage, and bandwidth, we calculate \(c_{\min}(t_i)=50/200=0.25\), \(s_{\min}(t_i)=100/500=0.2\), and \(b_{\min}(t_i)=20/100=0.2\).
Next, we evaluate the fuzzy membership values for the task’s resource utilization (CPU, storage, bandwidth), deadline, and network latency using fuzzy sets defined in Table 3. For instance, if the task’s latency is low, the deadline is hard, and resource utilizations are low, we determine that the task belongs more strongly to the “Fog Layer” fuzzy set based on the membership functions and rule base provided. Finally, after applying the fuzzy decision-making algorithm, the task is assigned to the fog layer as it meets the resource requirements for local execution. The fuzzy inference system uses the computed membership values and rules to decide the most suitable layer (fog or cloud) for task execution based on real-time conditions, such as latency, resource utilization, and deadline constraints.
Task scheduling technique
Tasks that arrive at the fog broker are queued in \(Q\) and handled by the suggested Fuzzy Logic-based Real-time Task Scheduling Algorithm. A task is sent to the cloud if the fog layer cannot supply the resources it needs. Otherwise, min-max normalization is used to calculate and normalize the task's minimum resource consumption needs. The Fuzzy Logic-based Decision Algorithm then determines whether the task should be processed in the fog or forwarded to the cloud, taking into account the task's resource consumption, deadline, and network latency. If the task is assigned to the fog layer, it is added to a fog queue \(FQ\), in which tasks are sorted by deadline in ascending order. This ensures that urgent tasks with strict deadlines are processed first, so that more tasks finish by their deadlines. Next, by examining each virtual machine's eligibility, a set \(V'\) of eligible virtual machines (VMs) is generated for every task. Among the eligible VMs, the one with the least load, \(v'\), is chosen and assigned the task. To assess the performance and efficiency of task allocation in a fog computing environment, it is essential to evaluate how well a VM utilizes its allocated resources. The total resource utilization of a virtual machine \(v_j\) is calculated as the average of its individual resource utilizations, namely CPU, storage, and bandwidth:

$$ru(v_j)=\frac{cu(v_j)+su(v_j)+bu(v_j)}{3}$$ (7)
where \(ru(v_j)\) represents the total resource utilization of VM \(v_j\), \(cu(v_j)\) the CPU utilization, \(su(v_j)\) the storage utilization, and \(bu(v_j)\) the bandwidth utilization of VM \(v_j\). Meanwhile, \(cu(v_j)=\frac{\sum_{i=1}^{N} ct(t_i)}{cc(v_j)}\), \(su(v_j)=\frac{\sum_{i=1}^{N} st(t_i)}{sc(v_j)}\), and \(bu(v_j)=\frac{\sum_{i=1}^{N} bt(t_i)}{bc(v_j)}\). Here, \(cc(v_j)\), \(sc(v_j)\), and \(bc(v_j)\) represent the total available CPU capacity, total storage capacity, and total available bandwidth of VM \(v_j\), respectively. Also, \(N\) represents the total number of tasks.
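A brief sketch of this utilization measure, assuming illustrative task and VM records (field names are not from the paper):

```python
# Total resource utilization of a VM as the average of its CPU, storage,
# and bandwidth utilizations over the tasks assigned to it.
# Record layouts are illustrative assumptions.

def vm_utilization(assigned_tasks, vm):
    cu = sum(t["cpu"] for t in assigned_tasks) / vm["cpu"]              # CPU share
    su = sum(t["storage"] for t in assigned_tasks) / vm["storage"]      # storage share
    bu = sum(t["bandwidth"] for t in assigned_tasks) / vm["bandwidth"]  # bandwidth share
    return (cu + su + bu) / 3.0

vm = {"cpu": 200, "storage": 500, "bandwidth": 100}
assigned = [{"cpu": 50, "storage": 100, "bandwidth": 20},
            {"cpu": 30, "storage": 150, "bandwidth": 10}]
print(round(vm_utilization(assigned, vm), 3))  # 0.4
```

The scheduler can then compare these averages across eligible VMs and pick the least-loaded one, as described above.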
After applying the fuzzy logic-based task scheduling technique, we have three task queues, one for each layer (LCFL, HCFL, and CL). DTA is then carried out through the following steps, taking into account the queue of categorized tasks in the layer and the number of VMs in each processing server on that layer:
1. Calculate the Expected Processing Time (EPT) matrix for each task \(i\) on all VMs in each Processing Server (PS). As per Eq. (8), the size of the EPT matrix is determined by multiplying the number of tasks by the number of servers and the corresponding number of VMs. A task's EPT varies from one VM to the next depending on the VM's processing power. This equation calculates the expected time it would take each task to execute on each VM across all processing servers, allowing the task scheduler to evaluate all potential task-to-VM assignments and choose the most efficient option based on processing time.
$$EPT(i,k,j)=\frac{TLength(i)}{VM\_MIPS(j)}$$ (8)

where \(i\) denotes the current task index, \(k\) denotes the current PS index, and \(j\) denotes the VM index within the \(k\)-th PS. Also, \(EPT(i,k,j)\) denotes the EPT of \(t_i\) on \(v_j\) residing within \(p_k\).
2. Assign each task to the VM with the earliest availability and the lowest EPT. The task with the shortest maxRT is assigned first. Suppose task \(t_i\) requires scheduling. The objective is to assign it to a VM in a manner that minimizes its completion time. To achieve this, the Expected Time to Completion (ETC) is calculated for all possible combinations of processing sites and VMs. The combination with the lowest ETC value, indicating the earliest expected finish time, is then selected for task assignment. Equation (9) calculates the ETC for assigning task \(t_i\) to VM \(v_j\) in processing site \(p_k\).
$$ETC(k,j)=wt(k,j)+EPT(i,k,j)$$ (9)

where \(EPT(i,k,j)\) is the task's estimated processing time and \(wt(k,j)\) is the waiting time until VM \(v_j\) in \(p_k\) becomes available. Meanwhile, \(ETC(k,j)\) represents the total expected time to complete task \(t_i\) on VM \(v_j\) in processing site \(p_k\), considering both the waiting time for the VM and the estimated processing time for the task.
3. Using Eq. (10), calculate the Total Expected Time to Complete (TETC) for task \(t_i\) on VM \(v_j\) in \(p_k\):
In Eq. (10), the Total Expected Time to Complete (TETC) for a task \(t_i\) on VM \(v_j\) in \(p_k\) is calculated as the sum of the ETC for the task and the transmission time \(T_{rans}^{s}(i)\). The transmission time accounts for the time required to transmit the task data to the processing unit, and is computed in Eq. (11)7.
$$TETC(k,j)=ETC(k,j)+T_{rans}^{s}(i)$$ (10)

$$T_{rans}^{s}(i)=\frac{TSize(i)}{BW}$$ (11)

where \(BW\) is the link bandwidth between the task classifier and the processing server, and \(TSize(i)\) refers to the size of the task data for task \(t_i\).
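Putting steps (1)–(3) together, the sketch below selects, for one task, the (server, VM) pair with the lowest total expected completion time; the field names, waiting-time bookkeeping, and data layout are illustrative assumptions rather than the paper's implementation:

```python
# DTA scheduling step: for a given task, compute EPT (Eq. 8), add the VM's
# waiting time to get ETC (Eq. 9), then add transmission time (Eq. 11) to
# get TETC (Eq. 10), and pick the minimizing (server, VM) pair.

def best_assignment(task, servers, bw):
    """Return (server_index, vm_index, tetc) with the lowest TETC."""
    trans = task["size_kb"] / bw                  # Eq. (11): transmission time
    best = None
    for k, server in enumerate(servers):
        for j, vm in enumerate(server["vms"]):
            ept = task["length_mi"] / vm["mips"]  # Eq. (8): processing time
            etc = vm["wait"] + ept                # Eq. (9): add waiting time
            tetc = etc + trans                    # Eq. (10): add transmission
            if best is None or tetc < best[2]:
                best = (k, j, tetc)
    return best

servers = [{"vms": [{"mips": 1000, "wait": 0.5}, {"mips": 2000, "wait": 2.0}]},
           {"vms": [{"mips": 1500, "wait": 0.0}]}]
task = {"length_mi": 3000, "size_kb": 100}
print(best_assignment(task, servers, bw=200))  # (1, 0, 2.5)
```

In the full algorithm this selection runs per layer on the deadline-sorted queue, with each VM's `wait` updated after every assignment.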
Practical applications
In the proposed DTA-FLE method, the system classifies tasks based on fuzzy logic and then schedules them in the most suitable VM within the fog or cloud layers. For instance, consider a task \(t_i\) with certain resource requirements, such as CPU, storage, and bandwidth. First, the fuzzy logic-based task classification evaluates these requirements and assigns the task to one of three categories: LCFL, HCFL, or CL. After categorization, the task enters the appropriate queue. Next, the DTA algorithm computes the EPT for each VM in each layer based on the task's resource needs and the VM's processing capacity. The task is then assigned to the VM with the least expected time to complete the task, factoring in both waiting time and processing time. Additionally, the method accounts for transmission time, which is the time needed to send task data across the network, using the bandwidth between the task classifier and the processing server. This comprehensive scheduling approach optimizes resource utilization, minimizes latency, and ensures that tasks are completed within their deadlines, thereby improving overall system performance.
In practical applications, the proposed DTA-FLE method can be particularly useful in scenarios where IoT devices are deployed across a wide geographic area, such as in smart cities or industrial automation. For example, in a smart city scenario, traffic monitoring tasks can be categorized based on their urgency and resource needs. High-priority tasks, such as detecting accidents or traffic jams, may be classified into the HCFL for immediate processing at nearby fog nodes, ensuring low latency. On the other hand, tasks like real-time weather updates, which have less stringent time constraints, may be assigned to the CL for processing, utilizing more computational power. Additionally, tasks like video surveillance may require substantial bandwidth and storage, so they are first evaluated by fuzzy logic and assigned to the most suitable VM based on available resources. This approach allows the system to dynamically balance the load between fog and cloud layers, optimizing resource allocation, minimizing latency, and ensuring that critical tasks are processed on time. Such a method is also applicable in industrial IoT, where machines and sensors continuously send data for analysis, and real-time task scheduling ensures smooth operation without delays or resource bottlenecks.
Experimental evaluation
This section presents the performance metrics and the performance evaluation of the proposed DTA-FLE technique. We compare the performance of DTA-FLE against that of OEeRA15, FBOT20, and DTA. OEeRA, FBOT, and DTA were deployed in the identical environment configuration as DTA-FLE to allow a fair comparison.
Under OEeRA, tasks are executed on the fog-cloud servers in their order of arrival, with the first task to arrive served first; servers in the LCFL are used first, followed by servers in the HCFL, and finally servers in the cloud layer. FBOT executes tasks in ascending order of their lengths, assigning them to LCFL, HCFL, and cloud servers in the same order as OEeRA. DTA sorts tasks in ascending order of their maximum response times and schedules them on cloud servers only. To assess the proposed solution against OEeRA, FBOT, and DTA, the following metrics are calculated and explained: makespan, energy consumption, cost of processing, guarantee ratio, and delay.
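The three baseline orderings reduce to simple sort keys. A sketch, assuming each task carries arrival-time, length, and maximum-response-time attributes (the field names are illustrative):

```python
# Illustrative orderings used by the baselines: OEeRA serves tasks
# first-come-first-served, FBOT sorts by task length (shortest first),
# and DTA sorts by maximum response time (tightest deadline first).

def oeera_order(tasks):
    # First come, first served: order by arrival time.
    return sorted(tasks, key=lambda t: t["arrival"])

def fbot_order(tasks):
    # Ascending task length.
    return sorted(tasks, key=lambda t: t["length"])

def dta_order(tasks):
    # Ascending maximum response time (most urgent deadline first).
    return sorted(tasks, key=lambda t: t["max_rt"])
```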
The proposed approaches are implemented and examined using the iFogSim simulator30, an open-source, Java-based simulation toolkit that enables the evaluation of resource management and scheduling strategies across fog and cloud resources. The jFuzzyLogic open-source Java library is used to link the fuzzy logic platform with iFogSim30. The simulation parameters are as follows: task length is drawn uniformly at random from [1000 MI, 5000 MI]; the number of tasks ranges from 100 to 700; data size is drawn from [10 KB, 500 KB]; MaxRT is drawn from [3 s, 15 s]; and the processing cost per time unit in the cloud is $1. The simulation parameters for tasks, servers, and VMs are listed in Table 5. We assume that each fog server has four VMs and each cloud server has ten VMs with varying computational capacity.
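For reference, the stated parameter ranges can be collected into a single configuration. The dictionary keys and the uniform-draw helper below are illustrative, not part of iFogSim or the authors' code:

```python
# Simulation parameters as stated in the text; ranges are (min, max) for
# uniform random draws. Table 5 of the paper details the remaining settings.
import random

SIM_PARAMS = {
    "task_length_mi": (1000, 5000),    # task length in million instructions
    "num_tasks": (100, 700),
    "data_size_kb": (10, 500),
    "max_rt_s": (3, 15),               # maximum response time per task
    "cloud_cost_per_time_unit": 1.0,   # $ per time unit on the cloud
    "vms_per_fog_server": 4,
    "vms_per_cloud_server": 10,
}

def random_task(rng: random.Random) -> dict:
    """Draw one task from the stated uniform ranges (illustrative helper)."""
    return {
        "length": rng.uniform(*SIM_PARAMS["task_length_mi"]),
        "data_size": rng.uniform(*SIM_PARAMS["data_size_kb"]),
        "max_rt": rng.uniform(*SIM_PARAMS["max_rt_s"]),
    }
```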
We evaluated the impact of increasing task counts from 100 to 700 on performance metrics, averaging outcomes over ten simulation runs.
Makespan
As shown in Fig. 3, the makespan increases with the number of tasks. DTA-FLE consistently achieves a lower makespan than DTA, FBOT, and OEeRA. For instance, at 500 tasks, the makespan for DTA-FLE is 8.35 s, compared to 11.7 s for DTA, 44.41 s for FBOT, and 46.62 s for OEeRA; that is, DTA-FLE reduces makespan by roughly 81% relative to FBOT and OEeRA and by 29% relative to DTA. DTA-FLE achieves this by prioritizing urgent tasks with shorter maximum response times, allocating them to hierarchical fog levels, and directing less urgent tasks to the cloud layer. As a result, fewer tasks are assigned to fog devices than under FBOT and OEeRA, where more tasks are allocated to fog than to the cloud.
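The quoted percentages follow directly from the reported makespans at 500 tasks:

```python
# Relative makespan reduction of DTA-FLE over each baseline at 500 tasks,
# using the figures reported above (seconds).
makespan = {"DTA-FLE": 8.35, "DTA": 11.7, "FBOT": 44.41, "OEeRA": 46.62}

def improvement(baseline: str) -> float:
    """Percent reduction in makespan relative to the given baseline."""
    return 100.0 * (makespan[baseline] - makespan["DTA-FLE"]) / makespan[baseline]
```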
Energy consumption
Energy consumption is a crucial factor in fog computing environments, as resource-constrained edge devices must operate efficiently while processing tasks. To evaluate the effectiveness of our proposed approach, we compared its energy consumption against three existing methods: DTA, FBOT, and OEeRA. Energy consumption in our study is calculated based on the total power usage of fog nodes and cloud servers during task execution. The power model considers both computation energy (\(E_{comp}\)) and communication energy (\(E_{comm}\)), formulated as follows:
$$E_{total} = \sum_{i=1}^{n}\sum_{j=1}^{m}\left(P_{CPU}\,T_{exec} + \left(P_{tx}+P_{rx}\right)T_{comm}\right)$$
where \(P_{CPU}\) is the power consumption of processing units, \(T_{exec}\) is the task execution time, \(P_{tx}\) and \(P_{rx}\) are the transmission and reception power, respectively, and \(T_{comm}\) is the communication time. Also, \(n\) and \(m\) are the number of tasks and the number of computing resources, respectively.
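A direct transcription of this power model, summing computation and communication energy over the scheduled tasks (all input values below are illustrative, not measurements from the paper):

```python
# Energy model from the text: computation energy P_CPU * T_exec plus
# communication energy (P_tx + P_rx) * T_comm, summed over all scheduled
# task/resource pairs.

def task_energy(p_cpu, t_exec, p_tx, p_rx, t_comm):
    e_comp = p_cpu * t_exec              # computation energy (J)
    e_comm = (p_tx + p_rx) * t_comm      # communication energy (J)
    return e_comp + e_comm

def total_energy(assignments):
    """Sum energy over (p_cpu, t_exec, p_tx, p_rx, t_comm) tuples."""
    return sum(task_energy(*a) for a in assignments)
```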
As depicted in Fig. 4, the proposed DTA-FLE approach achieves lower energy consumption across all task loads compared to DTA, FBOT, and OEeRA. This improvement is attributed to the intelligent task allocation mechanism of DTA-FLE, which minimizes unnecessary task migrations and optimizes resource utilization within the fog layer.
Cost of processing
Figure 5 indicates that processing costs increase with the number of tasks. DTA has the highest costs due to more tasks being assigned to cloud servers. DTA-FLE also has higher costs than OEeRA and FBOT, as it allocates more tasks to cloud servers.
Delay
Figure 6 shows that delays increase for OEeRA and FBOT as task numbers grow. Using a logarithmic scale, we observed zero delays for DTA-FLE and DTA up to 500 tasks. Beyond this point, delays start to rise. At 700 tasks, delays are 24 s for DTA-FLE, approximately 132 s for DTA, 35 min for FBOT, and 50 min for OEeRA.
Runtime
In a separate experiment, we report execution time to characterize time complexity and GPU memory usage to characterize memory complexity, in comparison with the alternative methods. The results are displayed in Table 6. The DTA-FLE method demonstrates improved performance compared to OEeRA, FBOT, and DTA, with faster execution times and more efficient utilization of GPU memory, underscoring its effectiveness in real-world applications. Our findings indicate that while DTA-FLE introduces additional computational overhead due to fuzzy logic-based decision-making, it remains efficient in dynamic fog environments by reducing latency and improving resource utilization. The benefits of adaptive task scheduling therefore outweigh the computational cost, making DTA-FLE a viable solution for real-time IoT applications.
A summary of the results across various evaluation metrics is presented in Table 7. Results are reported based on averages for different numbers of tasks. This table compares the performance of the proposed DTA-FLE approach with recent state-of-the-art methods, highlighting key metrics such as makespan, energy consumption, cost of processing, guarantee ratio, delay, and execution time. The comparison demonstrates that DTA-FLE outperforms existing techniques, showcasing its superior performance in task allocation efficiency, resource utilization, and system responsiveness within fog computing environments. The superiority of the proposed DTA-FLE method can be attributed to its ability to dynamically adapt to the uncertainties and variability inherent in fog computing environments. By leveraging fuzzy logic, DTA-FLE effectively classifies tasks based on multiple parameters, allowing for more informed and adaptive scheduling decisions. Furthermore, the proposed method demonstrates a balance between minimizing energy consumption and reducing delays, making it particularly suitable for real-time IoT applications.
The observed performance improvements of DTA-FLE over other task allocation methods like OEeRA, FBOT, and DTA can be attributed to its advanced and intelligent task allocation strategy, which is designed to optimize how tasks are distributed between the fog and cloud layers. One key advantage of DTA-FLE is its ability to prioritize time-sensitive tasks by assigning them to the fog layer, which has lower latency compared to the cloud layer. This hierarchical scheduling mechanism not only ensures that urgent tasks are executed more quickly, but it also prevents delays caused by network congestion, which can occur when too many tasks are offloaded to the cloud. In contrast, OEeRA and FBOT allocate tasks based on simpler criteria such as the order of arrival or the length of the tasks, which do not take into account the urgency of the tasks or the available resource capabilities of each layer. This lack of intelligent scheduling in OEeRA and FBOT results in higher latency and less efficient use of resources, as the allocation strategy does not adapt to real-time needs. The significant reduction in makespan (the total time required to complete all tasks) in DTA-FLE is mainly due to the dynamic and strategic allocation of tasks. By carefully balancing the execution of tasks, DTA-FLE minimizes resource bottlenecks and task delays. For example, it ensures that low-priority tasks that do not require immediate attention are directed to the cloud, where there may be more processing power available but at the cost of higher latency. This careful distribution of tasks helps to prevent congestion at the fog nodes and reduces the need for task migrations, which ultimately contributes to lower energy consumption. This aspect of DTA-FLE is particularly important because unnecessary migrations can waste energy and increase execution time.
When compared to DTA, which only utilizes cloud servers, DTA-FLE demonstrates a significant advantage in execution time. DTA, by relying solely on cloud resources, experiences higher latency and costs because of the longer distances involved in data transmission between the fog and cloud layers. In contrast, DTA-FLE makes better use of the fog layer, where tasks can be processed with lower latency, reducing the overall time taken to complete tasks. However, it is important to note that DTA-FLE may incur slightly higher processing costs compared to OEeRA and FBOT due to its more complex scheduling system. This cost increase is a trade-off for faster execution, as DTA-FLE occasionally offloads tasks to the cloud when necessary to ensure that they are completed on time. Moreover, the delay improvements achieved by DTA-FLE are closely linked to its priority-aware scheduling system. This system prevents high-priority tasks from being delayed due to queuing at overloaded fog nodes, which can be a significant problem in systems like OEeRA and FBOT, where task allocation is not optimized. DTA-FLE’s ability to prioritize tasks ensures that critical tasks are processed first, with minimal delay, leading to a better overall performance, especially as the number of tasks increases. As a result, DTA-FLE is highly effective in environments where rapid task execution is crucial. Another area where DTA-FLE outperforms the other methods is in computational efficiency. The fuzzy logic-based prioritization and adaptive workload balancing used in DTA-FLE allow it to optimize the use of computational resources, which in turn reduces the GPU memory usage and execution time. This is a significant advantage over OEeRA and FBOT, which do not have the flexibility to adjust to changing workload demands, and DTA, which does not leverage the fog layer to its full potential.
A practical example of this study could be found in a smart city environment, where IoT devices, such as traffic sensors, surveillance cameras, and smart meters, generate massive amounts of data. These devices require real-time data processing to ensure optimal decision-making and smooth operations, such as managing traffic flow, monitoring public safety, or controlling energy consumption. In this scenario, fog computing can extend cloud services closer to the edge of the network, processing data locally on edge devices or fog nodes, thereby reducing latency and bandwidth usage. The proposed approach can help efficiently allocate computational tasks among various fog nodes (e.g., traffic lights, surveillance cameras) based on their criticality and resource availability. This dynamic allocation of tasks ensures that real-time operations, such as adjusting traffic light timing or detecting security breaches, can be executed without delay, improving overall system responsiveness and efficiency. Also, this method would be particularly useful for applications requiring minimal delays and high availability, such as autonomous vehicles, smart transportation systems, and environmental monitoring, where efficient task scheduling is crucial for real-time decision-making.
Recent advancements in intelligent decision-making and optimization have been significantly influenced by large-scale data-driven models. For instance, large language models (LLMs) have been employed to address cold-start problems in recommendation systems, enabling more efficient and adaptive resource allocation in dynamic environments like fog-cloud computing44. Additionally, anomaly detection techniques, such as those used for fraudulent taxi trip identification, highlight the importance of trajectory-based analysis and real-time decision-making, which can be adapted to enhance scheduling strategies in fog computing by detecting irregularities in task execution patterns45. These approaches provide valuable insights into improving the efficiency, adaptability, and security of distributed computing frameworks.
Conclusion
This paper presents a novel approach for task distribution and scheduling within a cloud-fog model. Our method, comprising two distinct phases, leverages fuzzy logic for task classification and an efficient scheduling technique for optimal resource allocation. In the first phase, tasks are classified and distributed among three layers, LCFL, HCFL, and CL, based on their criticality and specific requirements. The second phase schedules these tasks within each layer to ensure adherence to maximum response times and to minimize delays, thereby increasing the guarantee ratio. Our approach demonstrates superior performance compared to traditional techniques such as DTA, OEeRA, and FBOT. The results indicate that our method significantly enhances task scheduling efficiency, resulting in improved makespan, lower execution costs and energy consumption, and reduced delays.
Key findings of the study highlight the effectiveness of DTA-FLE in improving task scheduling efficiency, particularly for real-time IoT applications in fog-cloud architectures. However, there are limitations to our study that should be considered. One such limitation is the computational complexity of the fuzzy logic-based approach, which may present challenges when scaling to larger and more dynamic IoT environments. Additionally, while the approach is effective in simulation, real-world deployment may reveal additional practical challenges, such as varying network conditions and resource heterogeneity. Future research could explore adaptive, self-learning algorithms to further enhance the system’s responsiveness, as well as the integration of advanced machine learning techniques to improve task prediction and resource allocation efficiency. Moreover, expanding the model to support larger IoT networks, including edge computing environments, and addressing the security and privacy concerns of task offloading in fog-cloud systems will be crucial. Lastly, validating the proposed approach in real-world IoT scenarios would offer valuable insights for practical deployment and refinement of the model. By addressing these areas, we can further enhance the robustness and applicability of task scheduling solutions in fog-cloud IoT architectures.
Data availability
The data that support the findings of this study are available from the corresponding author upon reasonable request.
References
Ghadi, Y. Y. et al. Machine learning solutions for the security of wireless sensor networks: A review. IEEE Access. 12, 12699–12719 (2024).
Sun, G. et al. Cost-efficient service function chain orchestration for low-latency applications in NFV networks. IEEE Syst. J. 13 (4), 3877–3888 (2018).
Sun, G., Liao, D., Zhao, D., Xu, Z. & Yu, H. Live migration for multiple correlated virtual machines in cloud-based data centers. IEEE Trans. Serv. Comput. 11 (2), 279–291 (2015).
Hosseinzadeh, M. et al. Task scheduling mechanisms for fog computing: a systematic survey. IEEE Access. 11, 50994–51017 (2023).
Mazhar, T., Shahzad, T., Rehman, A. U. & Hamam, H. Integration of smart grid with industry 5.0: applications, challenges and solutions. Measurement: Energy. 5, 100031 (2024).
Vemireddy, S. & Rout, R. R. Fuzzy reinforcement learning for energy efficient task offloading in vehicular fog computing. Comput. Netw. 199, 108463 (2021).
Ashraf, M. et al. Microservice application scheduling in Multi-Tiered Fog-Computing-Enabled IoT. Sensors 23 (16), 7142 (2023).
Etemadi, M., Ghobaei-Arani, M. & Shahidinejad, A. A cost-efficient auto-scaling mechanism for IoT applications in fog computing environment: a deep learning-based approach. Cluster Comput. 24 (4), 3277–3292 (2021).
Liu, C., Wang, J., Zhou, L. & Rezaeipanah, A. Solving the multi-objective problem of IoT service placement in fog computing using cuckoo search algorithm. Neural Process. Lett. 54 (3), 1823–1854 (2022).
Tarahomi, M., Izadi, M. & Ghobaei-Arani, M. An efficient power-aware VM allocation mechanism in cloud data centers: a micro genetic-based approach. Cluster Comput. 24 (2), 919–934 (2021).
Mazhar, T. et al. Quality of service (QoS) performance analysis in a traffic engineering model for next-generation wireless sensor networks. Symmetry 15 (2), 513 (2023).
Abdulazeez, D. H. & Askar, S. K. A novel offloading mechanism leveraging fuzzy logic and deep reinforcement learning to improve IoT application performance in a three-layer architecture within the fog-cloud environment. IEEE Access (2024).
Saif, F. A., Latip, R., Hanapi, Z. M. & Shafinah, K. Multi-objective grey Wolf optimizer algorithm for task scheduling in cloud-fog computing. IEEE Access. 11, 20635–20646 (2023).
Husnain, G., Anwar, S. & Shahzad, F. An enhanced AI-enabled routing optimization algorithm for internet of vehicles (IoV). Wireless Pers. Commun. 130 (4), 2623–2643 (2023).
Premalatha, B. & Prakasam, P. Optimal Energy-efficient resource allocation and fault tolerance scheme for task offloading in IoT-FoG computing networks. Comput. Netw. 238, 110080 (2024).
Javanmardi, S., Sakellari, G., Shojafar, M. & Caruso, A. Why it does not work? Metaheuristic task allocation approaches in fog-enabled internet of drones. Simulation Modelling Pract. Theory, 102913. (2024).
Kopras, B., Idzikowski, F., Bossy, B., Kryszkiewicz, P. & Bogucka, H. Communication and computing task allocation for energy-efficient fog networks. Sensors 23 (2), 997 (2023).
Matrouk, K. M. & Matrouk, A. D. Mobility aware-task scheduling and virtual fog for offloading in IoT-fog-cloud environment. Wireless Pers. Commun. 130 (2), 801–836 (2023).
Mokni, M., Yassa, S., Hajlaoui, J. E., Omri, M. N. & Chelouah, R. Multi-objective fuzzy approach to scheduling and offloading workflow tasks in Fog–Cloud computing. Simul. Model. Pract. Theory. 123, 102687 (2023).
Yadav, P. & Vidyarthi, D. P. An efficient fuzzy-based task offloading in edge‐fog‐cloud architecture. Concurrency Computation: Pract. Experience, 35(26), e7843. (2023).
Hosseini, E., Nickray, M. & Ghanbari, S. Optimized task scheduling for cost-latency trade-off in mobile fog computing using fuzzy analytical hierarchy process. Comput. Netw. 206, 108752 (2022).
Shukla, P., Pandey, S., Hatwar, P. & Pant, A. FAT-ETO: Fuzzy-AHP-TOPSIS-Based efficient task offloading algorithm for scientific workflows in heterogeneous fog–cloud environment. Proc. Natl. Acad. Sci. India Sect. A: Phys. Sci. 93 (2), 339–353 (2023).
Gong, Y., Yao, H. & Nallanathan, A. Intelligent sensing, communication, computation and caching for satellite-ground integrated networks. IEEE Netw. 38 (4), 9–16 (2024).
Fu, Y. et al. A distributed incentive mechanism to balance demand and communication overhead for multiple federated learning tasks in IoV. IEEE Internet Things J. https://doi.org/10.1109/JIOT.2024.3510561 (2024).
Lin, L., Ma, X., Chen, C., Xu, J. & Huang, N. Imbalanced industrial load identification based on optimized catboost with entropy features. J. Electr. Eng. Technol. 19 (8), 4817–4832 (2024).
Dai, N. et al. Blind super-resolution network based on local fuzzy discriminative loss for fabric data augmentation. J. Eng. Fibers Fabr. 20, 15589250241313158 (2025).
Zhao, H. et al. Supervised kernel principal component analysis-polynomial chaos-Kriging for high-dimensional surrogate modelling and optimization. Knowl. Based Syst. 305, 112617 (2024).
Rong, Y. et al. Du-bus: A realtime bus waiting time estimation system based on multi-source data. IEEE Trans. Intell. Transp. Syst. 23 (12), 24524–24539 (2022).
Sun, G., Xu, Z., Yu, H. & Chang, V. Dynamic network function provisioning to enable network in box for industrial applications. IEEE Trans. Industr. Inf. 17 (10), 7155–7164 (2020).
Peng, X., Song, S., Zhang, X., Dong, M. & Ota, K. Task offloading for IoAV under extreme weather conditions using dynamic price driven double broad reinforcement learning. IEEE Internet Things J. 11 (10), 17021–17033 (2024).
Zhong, Y., Chen, L., Dan, C. & Rezaeipanah, A. A systematic survey of data mining and big data analysis in internet of things. J. Supercomputing. 78 (17), 18405–18453 (2022).
Li, Q. et al. Cloud-edge collaboration for industrial internet of things: scalable neurocomputing and rolling-horizon optimization. IEEE Internet Things J. https://doi.org/10.1109/JIOT.2025.3542428 (2025).
Zhang, B. et al. Integrated heterogeneous graph and reinforcement learning enabled efficient scheduling for surface Mount technology workshop. Inf. Sci. 708, 122023 (2025).
Zhang, Y., Zhang, F., Tong, S. & Rezaeipanah, A. A dynamic planning model for deploying service functions chain in fog-cloud computing. J. King Saud University-Computer Inform. Sci. 34 (10), 7948–7960 (2022).
Huang, W. et al. Safe-NORA: Safe reinforcement learning-based mobile network resource allocation for diverse user demands. In Proceedings of the 32nd ACM International Conference on Information and Knowledge Management 885–894 (2023).
Wei, M., Yang, S., Wu, W. & Sun, B. A multi-objective fuzzy optimization model for multi-type aircraft flight scheduling problem. Transport 39 (4), 313–322 (2024).
Zavieh, H., Javadpour, A., Ja’fari, F., Sangaiah, A. K. & Słowik, A. Enhanced efficiency in fog computing: A fuzzy data-driven machine selection strategy. Int. J. Fuzzy Syst. 26 (1), 368–389 (2024).
Zhang, X. et al. EALLR: Energy-aware low-latency routing data driven model in mobile edge computing. IEEE Trans. Consum. Electron. https://doi.org/10.1109/TCE.2024.3507158 (2024).
Zhou, M. T., Ren, T. F., Dai, Z. M. & Feng, X. Y. Task scheduling and resource balancing of fog computing in smart factory. Mob. Networks Appl. 28 (1), 19–30 (2023).
Ma, X., Zong, K. & Rezaeipanah, A. Auto-scaling and computation offloading in edge/cloud computing: a fuzzy Q-learning-based approach. Wireless Netw. 30 (2), 637–648 (2024).
Ali, H. S., Rout, R. R., Parimi, P. & Das, S. K. Real-time task scheduling in fog-cloud computing framework for iot applications: a fuzzy logic based approach. In 2021 International Conference on COMmunication Systems & NETworkS (COMSNETS) (pp. 556–564). IEEE. (2021), January.
Guo, S., Peters, L. & Surmann, H. Design and application of an analog fuzzy logic controller. IEEE Trans. Fuzzy Syst. 4 (4), 429–438 (1996).
Sakti, I. Methodology of fuzzy logic with mamdani fuzzy models applied to the microcontroller. In 2014 The 1st International Conference on Information Technology, Computer, and Electrical Engineering (pp. 93–98). IEEE. (2014), November.
Huang, F. et al. Large language model simulator for cold-start recommendation. In Proceedings of the Eighteenth ACM International Conference on Web Search and Data Mining (WSDM ’25) 261–270 (2024). https://doi.org/10.1145/3701551.3703546.
Ding, Y. et al. FraudTrip: taxi fraudulent trip detection from corresponding trajectories. IEEE Internet Things J. 8 (16), 12505–12517 (2020).
Author information
Contributions
Wanying Jin provided critical insights into challenges and applications, contributed to the discussion section, and drafted significant portions of the revised manuscript; Amin Rezaeipanah conceptualized the study, designed the survey framework, analyzed existing techniques, supervised the research process, and reviewed the final manuscript for clarity and coherence.
Ethics declarations
Competing interests
The authors declare no competing interests.
Additional information
Publisher’s note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.
About this article
Cite this article
Jin, W., Rezaeipanah, A. Dynamic task allocation in fog computing using enhanced fuzzy logic approaches. Sci Rep 15, 18513 (2025). https://doi.org/10.1038/s41598-025-03621-4