Introduction

Mosquito-borne diseases are a major public health concern in both developed and developing nations. The European Centre for Disease Prevention and Control documented a peak in mosquito-borne disease incidence in 20201. In response to this rise, nations are intensifying efforts to control mosquito activity in public settings. Several conventional techniques are employed to regulate mosquito breeding, including environmental manipulation2, chemical-based interventions3, and biological strategies4. Mosquito lure-and-catch products are also widely available5,6,7, using ultraviolet light, pheromones, and electrical zappers or sticky traps. Most of these methods are labour-intensive, repetitive, and cause ecological contamination. A promising way to move beyond these monotonous strategies is to mechanize the process in an ecologically benign manner, and recent developments in robotics have made such mechanization attainable.

Mosquito detection in current state-of-the-art technologies is limited. Imaging-based detection methods are growing rapidly8,9 but are constrained by image quality. Motion blur, background clutter, and lighting conditions strongly affect inference, and these methods often require mosquitoes to be immobilized or trapped. At present, it is extremely challenging to capture flying mosquitoes without a high-speed camera and prior knowledge of their location. Another interesting detection method, developed by Yin et al., relies on wingbeat sounds10. This approach is also difficult to deploy in human-dense areas because of noise generated by people and the background. The presence of mosquitoes is well known to be predictable from environmental factors such as the presence of humans11 or natural habitat, humidity12, temperature13, and time of day. Deploying mosquito control robots based on known risk factors is therefore significantly more efficient and effective than relying on mosquito-seeking methods.

In the prior work14, a mobile robot was developed for mosquito trapping; however, it lacked an optimized, human-aware path planning strategy during mosquito capturing. Equipped with bait modules to attract mosquitoes and a trapping mechanism to ensnare them, the robot aimed to classify trapped mosquitoes, particularly targeting the Aedes aegypti species responsible for dengue transmission. The study centred on the robot’s navigation strategy, employing manually designated waypoints to ensure comprehensive coverage of the area. Despite this meticulous approach, the robot captured few mosquitoes at these waypoints because mosquito activity there was minimal. As highlighted in a WHO article15, mosquitoes tend to exhibit heightened activity in regions with high human density, as these areas offer abundant feeding opportunities. Consequently, the robot’s coverage path should align with areas of elevated human activity to optimize mosquito capture rates and mitigate disease spread. Given the dynamic nature of human density across different locales, the robot must autonomously track and incorporate human density information into its coverage path planning, obviating the need for manual intervention. Crafting such a path is a multifaceted challenge that requires balancing overall area coverage, re-covering previously visited areas, and maximizing mosquito capture rates relative to human activity levels. One promising avenue to address this challenge is a complete coverage path planning (CCPP) technique capable of dynamically adjusting the robot’s path to accommodate fluctuations in human density across the environment.

The present study introduces a pioneering approach termed the Human-First Approach Complete Coverage Path Planning (HFA-CCPP) scheme, designed specifically for a mosquito-capturing robot operating in environments with high human activity levels. The proposed methodology involves striking a balance between prioritizing areas of human activity and ensuring comprehensive coverage, thereby optimizing the robot’s ability to capture mosquitoes while effectively completing the coverage task. This paper covers the system architecture of the mosquito-capturing robot, delineates the translation of theoretical formulations into practical implementation of the HFA-CCPP scheme, presents simulation results, and validates the approach through real-world experiments. Furthermore, the following points outline the primary contributions of this research article.

  • A novel human activity-based complete coverage path planning scheme for a mosquito-capturing robot, the human-first approach CCPP (HFA-CCPP), is proposed.

  • A modified Glasius bio-inspired neural network has been developed to perform human-dense area coverage for improved mosquito capturing efficiency.

  • Development of an autonomous mobile robot with the mosquito bait and catching mechanism.

  • Validation of the proposed CCPP scheme on the developed physical robotic platform in a real-world scenario.

Therefore, this article presents a novel mosquito-capturing robot to achieve an autonomous maintenance system for the pest care domain. Note that this paper does not aim to provide a mosquito-seeking algorithm; rather, it deploys the robot so that, using the lure-and-catch mechanism onboard, it can protect people from mosquito harassment if mosquitoes are present.

The structure of the paper is as follows. Section 2 reviews the recent robot development carried out on pest care and vector control, then discusses the recent research on CCPP based on GBNN. Section 3 describes the mosquito-catching robot and its mechanisms. Section 4 introduces the robotic system and describes the proposed CCPP scheme. Section 5 provides the simulated results of the proposed scheme in a controlled setting, and Section 6 describes the experimental design, setup, results, and related discussions on the performance of the proposed solution on the robot in various real-world scenarios. Section 7 summarises the paper and discusses future works.

Related work

There are several approaches used for path planning. Path planners are typically designed for the shortest path16,17, area coverage18,19, path smoothness20, path safety21,22, or rescue tasks. These path planning problems can be modeled with several methods, such as artificial potential fields23,24, bio-inspired25,26 and nature-inspired27,28 models, or classical search algorithms29. Each type of path planner suits a different contextual application, and each model has specific disadvantages.

Amongst these, coverage planners are the most appropriate for our purpose, as they grant the robot the ability to perform full coverage in most cases. The most typical approaches are spiral30 or zig-zag31 patterns, reinforcement learning32,33, evolutionary34, genetic35, and vision-based36 algorithms. Artificial potential fields offer a simple model but suffer from known deadlocks, as highlighted by several developers37,38. With the rise of neural networks, bio-inspiration has been used for path planning. Neural network solutions often come with training requirements before usage, which can be expensive to deploy in businesses or where technical support is lacking. The Glasius bio-inspired neural network (GBNN)39 and its modified version40 demonstrate a rule-based neural network that does not require training but, like most path planners, requires a map to operate. There are similar state-of-the-art solutions, such as the \(\varepsilon ^{*}\)41 algorithm and the predator-prey coverage path planner (PPCPP)42. These path planners require additional models to complement the primary model when it encounters a natural deadlock. GBNN has no natural deadlock, and its computational complexity is on par with the state of the art; with a suitable optimization model, the algorithm can be further enhanced for practical use. A complete coverage path planner (CCPP) is not new to the robotics community. Numerous CCPP techniques have been proposed in the literature for various robotic applications, including cleaning40, inspection43, agriculture44, and so on. There are also studies that considered tradeoff models in CCPP. For instance, Katharin et al. proposed an energy cost-based Voronoi-based path generation (VPG) algorithm for a multipurpose UAV robot45. In another work, a CCPP-based autonomous radiation mapping scheme was proposed for a mobile robot that trades the total area covered against its coverage cost46 and energy efficiency47. Van-Thach and Quang-Cuong proposed a new approach to establish CCPP with minimal overlapping on 3D surfaces using mesh models48. Even though numerous novel techniques have been proposed in the literature, none of the existing approaches focuses on performing CCPP based on human activity. To the best of our knowledge, the previous approaches have minimal connection to pest care robots and no relation to autonomous mosquito-capturing robots.

A limited number of path planning approaches have been used for pest control49,50, mainly in crop management; to the best of our knowledge, there are no path-planning robots dedicated to mosquito control. Path planning is essential for bringing the catching mechanism closer to humans, since lures such as carbon dioxide51, ultraviolet lamps52, and pheromones53 perform best at close proximity, as shown in early research works. Maintaining a sustained position may also increase efficacy by allowing pheromones to draw more mosquitoes54. The proposed lure-and-catch mechanism thus serves as a distraction, drawing mosquitoes away from people in a targeted area.

Hence, there is a gap in the current development of mosquito-catching automation. An automated mosquito-catching solution plays a critical role because it does not require environmental installations of ultraviolet lights and scent attractants. More importantly, it provides semantic control for the adequate use of such equipment in a human environment.

Mosquito-capturing Dragonfly robot system architecture

Mechanical design and system architecture

This section focuses on the mechanical and electrical aspects of the developed mosquito-capturing robot, Dragonfly. Dragonfly employs a differential drive mechanism designed to navigate challenging semi-outdoor environments, propelled by two robust 24V BLDC motors, as seen in Fig. 1. The mechanical structure is built for resilience, integrating aluminum and 3D-printed parts that draw inspiration from the iconic character Wall-E. Notably, the robot’s dimensions are \(600~mm\times 573~mm \times 800~mm\) (length \(\times\) width \(\times\) height), ensuring a compact yet sturdy form factor. The locomotion system is enhanced with a front caster wheel that provides a three-point configuration, bolstering stability during navigation.

The robot’s electrical architecture encompasses an array of processing components and sensing peripherals. At the heart of this system is a formidable industrial PC equipped with an i7 processor, 512 GB SSD memory, and 16 GB RAM. This processing powerhouse serves as the central hub for collecting and processing data from a multitude of sensors, driving crucial decision-making processes. Microcontroller units play pivotal roles in data collection, communication, and safety. The intricate dance of data orchestration is supported by a mix of sensors, including a 2D lidar (Sick Tim 581) that facilitates navigation through SLAM and AMCL algorithms, a depth camera (Intel RealSense D415i) for rich 3D environmental perception, and an inertial sensor (VectorNav IMU) that contributes to global orientation awareness, coupled with encoder information for dead reckoning. Additional sensors such as cliff detection, ultrasonic, and bump switch sensors furnish the robot with a comprehensive understanding of its surroundings.

Powering this intricate symphony is a robust 24V 10Ah battery, supplemented by strategic step-down voltage regulators. With a full charge, the robot can operate for 8 to 10 hours in patrol mode. It is also reconfigurable with a hat attachment that allows it to operate in drizzling conditions. This comprehensive mechanical and electrical design enables the robot to autonomously navigate and fulfill its mission of mosquito capturing with precision and resilience.

Mosquito trapping design

A sophisticated mosquito-trapping unit is at the heart of the \(Dragonfly's\) mission to capture mosquitoes. This component resides within the robot’s neck and embodies two core operations: mosquito attraction and capturing. The intricately designed unit employs a combination of stimuli to lure mosquitoes. Ultraviolet (UV) light, with a wavelength of 368 nm for public safety, and olfactory cues derived from a blend of octanol and lactic acid, simulating human sweat, create a compelling allure for mosquitoes, drawing them toward the robot.

Fig. 1
figure 1

Dragonfly is made up of three key components: the head, the mosquito-catching mechanism, and the mobile base. The catching mechanism emits ultraviolet light and pheromones from the upper and lower chambers, respectively. The lower chamber also contains the sticky pad as the final stage of the mosquito-catching process. Aedes and Culex mosquito breeds are identified in the sticky pad sample shown.

The trapping process is a carefully orchestrated interplay of components, as seen in Fig. 1. Three lures are used for attraction: ultraviolet light, formulated pheromones, and strip patterns on the catching mechanism. Attracted mosquitoes enter the upper chamber, which contains the UV light source. In the upper chamber, a high-speed suction fan rotating at 1000 RPM generates a controlled airflow that pulls mosquitoes downward into the lower chamber of the catching mechanism. Within this lower chamber, a yellow sticky tape serves as an effective adhesive surface, completing the capturing process. The hat provides adequate shelter if the robot is expected to operate in the rain. This captured mosquito collection method not only ensures efficacy but also provides a tangible resource for future studies and analysis.

To facilitate the functioning of this unit, a microcontroller serves as the conductor, orchestrating the operation of the UV light, olfactory cues, and suction fan. The unit draws power from the same robust 24V battery that fuels the robot’s various functions. This meticulously engineered mosquito trapping unit positions the Dragonfly as a powerful tool in the pursuit of mosquito population management.

Human-first complete coverage path planning scheme

This section describes the proposed solution for deploying a mosquito-catching robot. It first introduces the proposed method with an overview in 4.1, then details the operating concepts that enable complete coverage in 4.3 and the added social behavior in 4.4. Finally, 4.5 describes the combined effect and integration of the two layers.

Preliminaries

In Fig. 2, the robot requires three inputs for the solution to be executed: a map of the working environment (M), the robot location (L), and the identified area where people interact the most, the “Human-Dense Area” (HDA). Mapping55,56, localization57,58, and human identification59,60 software solutions are widely established in numerous developments. Alternatively, since classification training and mapping can be operationally expensive, manual production of this information may be recommended. As in most path planners, the localized position of the robot from a typical navigation stack is used as the starting point of the algorithm. Unlike point-to-point path planners, goal poses are not required.

The proposed work assumes that users deploying the robot with the proposed solution supply known information about human-dense areas. While this can be done with a human in the loop, the identification can also be automated by cameras or other technology; it can be determined by users’ visual observations or by machine learning-trained policies. The purpose of this work is to enable robots to roam areas with more people; it does not claim that more people necessarily means more mosquitoes. Rather, the solution aims to protect human-populated areas from mosquitoes, if any are present. The scope and aim of this paper is not to predict or hunt for mosquito-rich areas but to attend to areas where humans frequent. The key assumptions of our paper are as follows:

  1. Human-in-the-loop deployment or AI-assisted technologies determine the risk boundaries. The work assumes that the expert or technology-informed input is reliable and accurate.

  2. Mosquito-harassment risk arises only in areas where both mosquitoes and humans are present. The risk is reduced in areas that people frequent less.

Fig. 2
figure 2

(a) Required inputs: map of operating space (M), labeled human-dense area (HDA), and robot location, (b) downsampling the digital map into a pair of neural networks, (c) pair of neural networks, one for social attraction and the other for complete coverage, (d) produced array containing the list of step positions for complete coverage. Note: All parts only display examples for visualization of the solution concept. The photo of Dragonfly is shown on the right.

Map resolution

The path computation proceeds using the provided information. The map is downsampled and translated into two versions of neural networks: the coverage layer and the social layer. The downsampling reduces the resolution of the map, since coverage for mosquito catching does not require the robot to cover every inch of the area; moreover, spatially precise coverage is computationally expensive. For all the grid cells merged during the reduction, if any of them is occupied, the resulting coarse cell is considered occupied. The optimal threshold of resolution sensitivity to occupancy is outside the scope of this paper. The HDA is translated into bounds for simulating human activities moving around the same space. Areas outside the indicated bound can also include simulated human activity, but with a significantly smaller number (i.e., inside HDA: 10, outside HDA: 2). These numbers can be set as parameters for area bias that models the actual scenario at deployment.
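
As an illustration, the occupancy-preserving downsampling described above can be sketched as follows; the block size and the map encoding (0 = free, 1 = occupied) are assumptions made for illustration, not the exact implementation used on Dragonfly.

```python
import numpy as np

def downsample_occupancy(grid, block=4):
    """Coarsen an occupancy grid; a coarse cell is occupied if ANY
    fine cell inside it is occupied (0 = free, 1 = occupied)."""
    h, w = grid.shape
    # Pad so the map divides evenly into block x block tiles.
    ph, pw = (-h) % block, (-w) % block
    padded = np.pad(grid, ((0, ph), (0, pw)), constant_values=0)
    tiles = padded.reshape(padded.shape[0] // block, block,
                           padded.shape[1] // block, block)
    # Any occupied fine cell marks the whole coarse cell as occupied.
    return tiles.max(axis=(1, 3))

# Example: a single occupied fine cell makes its coarse cell occupied.
fine = np.zeros((10, 12), dtype=int)
fine[2, 3] = 1
print(downsample_occupancy(fine, block=4))
```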

Coverage layer

The coverage layer aims to provide complete coverage of the intended operating space. The algorithm is modeled on the Glasius bio-inspired neural network (GBNN) found in several developmental studies61,62. GBNN remains a useful algorithm for guaranteed complete area coverage. The main equation of the algorithm is as follows:

$$\begin{aligned} N_{i(t+1)} = f \left( \sum \limits _{j=1}^{m} w_{ij} N_{j}^{+} + \alpha _{i} \right) \end{aligned}$$
(1)

where \(N_{i}\) is the ith neuron on the neural network, j is the jth neighbor of the indexed neuron, \(w_{ij}\) is the weight of that specific neighbor, and, lastly, \(\alpha\) is the neural activity level associated with the state of the corresponding point in the map, as seen in (2).

The neural network models the state of each discrete point on the map. The states of the points are numerically labeled with neural activity levels ranging from -1 to 1. The equation is as follows:

$$\begin{aligned} \alpha _{i} = {\left\{ \begin{array}{ll} +E, & i = \text {Uncovered} \\ -E, & i = \text {Obstacle} \\ 0, & i = \text {Covered} \\ \end{array}\right. } \end{aligned}$$
(2)

where \(\alpha\) is the neural activity level associated with the state of the corresponding point in the map, i is the ith neuron of the neural network, and E is a large positive constant so that occupied areas maintain a negative value during path planning.

The weight of each neuron determines the network’s sensitivity in (1) and is computed from (3) as follows:

$$\begin{aligned} w_{ij} = {\left\{ \begin{array}{ll} \exp \left( -\beta |i-j|^{2} \right) , & 0 < |i - j| \le R \\ 0, & |i - j| > R \\ \end{array}\right. } \end{aligned}$$
(3)

where \(w_{ij}\) is the weight of the jth neighbor of the ith neuron on the neural network, \(|i - j|\) is the distance between neurons i and j, and R bounds the neighborhood considered. The variable \(\beta\) is a positive constant.

Finally, after computing (1), the value of each neuron of the neural network is redefined using (4). The equation is presented as follows:

$$\begin{aligned} f(x) = {\left\{ \begin{array}{ll} -1, & x < 0 \\ \epsilon x, & 0 \le x < 1, \text { where } \epsilon > 0 \\ 1, & x \ge 1 \\ \end{array}\right. } \end{aligned}$$
(4)

where x is the computed value from (1), and \(\epsilon\) is a small positive constant.

This process iterates in each step update. Hence, the neural activity map in this network changes as the robot plans its move on the corresponding map.
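
For concreteness, a minimal sketch of one neural-activity update over a grid of neurons, following (1)-(4), is given below; the 4-neighborhood, the string-based state encoding, and the parameter values are illustrative assumptions rather than the exact implementation.

```python
import numpy as np

E, BETA, EPS = 100.0, 2.0, 0.7   # illustrative constants (see Eqs. 2-4)

def activation(x):
    """Transfer function f(x) from Eq. (4)."""
    return np.where(x < 0, -1.0, np.where(x < 1, EPS * x, 1.0))

def alpha(state):
    """External input from Eq. (2): 'U' uncovered, 'O' obstacle, 'C' covered."""
    return {"U": +E, "O": -E, "C": 0.0}[state]

def gbnn_step(N, states):
    """One synchronous update of all neurons following Eq. (1),
    using a 4-neighborhood with weight exp(-beta * d^2) for d = 1."""
    w = np.exp(-BETA * 1.0)                       # Eq. (3) for unit-distance neighbors
    Npos = np.maximum(N, 0.0)                     # N^+ term in Eq. (1)
    padded = np.pad(Npos, 1)
    neigh_sum = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                 padded[1:-1, :-2] + padded[1:-1, 2:])
    ext = np.vectorize(alpha)(states)
    return activation(w * neigh_sum + ext)

# Tiny example: a 3x4 map with one obstacle and one covered cell.
states = np.array([["U", "U", "U", "U"],
                   ["U", "O", "C", "U"],
                   ["U", "U", "U", "U"]])
N = np.zeros(states.shape)
for _ in range(5):
    N = gbnn_step(N, states)
print(np.round(N, 2))
```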

Social layer

The social layer operates in a similar manner, but its interest is different. Instead of labeling covered and uncovered spaces, it labels the neighbors of simulated people walking inside and outside the HDA.

Human activity is simulated with a random walk algorithm63. Each simulated human is bound to move either inside or outside the HDA. A human location is denoted by the value \(-2\) in this layer. The space around a human location is a mosquito risk zone, which is the coverage interest of the robot and is denoted with 1. The model of the state of the corresponding point on the map differs from the description in 4.3, (2). The remodelled Eqs. (2) and (4) are as follows:

$$\begin{aligned} \alpha _{i} = {\left\{ \begin{array}{ll} +E, & i = \text {Potential risk of mosquito-bite space} \\ -E, & i = \text {Prohibited} \\ 0, & i = \text {Mitigated risk space} \\ \end{array}\right. } \end{aligned}$$
(5)

where \(\alpha\) is the neural activity level associated with the state of the corresponding point in the map, i is the ith neuron of the neural network, and E is a large positive constant so that prohibited areas maintain a negative value during path planning.

$$\begin{aligned} f(x) = {\left\{ \begin{array}{ll} -2, & x = -2 \\ -1, & -2< x < 0 \\ \epsilon x, & 0 \le x < 1, \text { where } \epsilon > 0 \\ 1, & x \ge 1 \\ \end{array}\right. } \end{aligned}$$
(6)

where x is the computed value from (1), and \(\epsilon\) is a small positive constant.

As the robot approaches a human position, as seen in Fig. 3, its area of effect (AOE) affects the neural activity around that position. The closer the robot is to the simulated human position, the lower the risk of mosquito attacks. When the robot is on a node neighboring a simulated human, the risk potential of that simulated human is suppressed to 0.

Fig. 3
figure 3

1 - The area of effect (AOE) is described by the radial length, \(\phi\), of the grid with respect to the robot position (computing waypoint). 2 - As the robot approaches the simulated human, the risk potential (residual risk, \(r^{*}_r\)) within the AOE boundary reduces. 3 - If the robot is next to the human pose, the risk potential (carrying risk, \(r^{*}_c\)) surrounding that specific human pose reduces to 0. Interest targets refer to the human surroundings (neighboring neurons around a simulated human position).

This effect, seen in Fig. 3, is modelled using two equations, namely the AOE and risk mitigation \(r^{*}\) as follows:

$$\begin{aligned} r^{*}_r = {\left\{ \begin{array}{ll} \max (d \times f(x), 0) / \phi , & d < \phi \\ 1, & d \ge \phi \\ \end{array}\right. } \end{aligned}$$
(7)
$$\begin{aligned} r^{*}_c = {\left\{ \begin{array}{ll} \max (d - 1, 0) / \phi , & d < \phi \\ 1, & d \ge \phi \\ \end{array}\right. } \end{aligned}$$
(8)

where d refers to the Manhattan distance of the spatial position or human position relative to the robot, \(\phi\) refers to the radius of the area of effect, and f(x) refers to the function in (4). \(r^{*}\) is a generalized risk variable that can be further divided into two risk parameters, namely the residual risk (\(r^{*}_r\)) and the carrying risk (\(r^{*}_c\)).

The residual risk is position-specific, and a simulated human position can influence it. The risk does not decrease unless it is acted upon by the robot’s AOE as described in (8). The carrying risk is the maximum risk potential that a simulated human can influence per neighbor position.

When a simulated human position shifts to another position, the risk potential updates according to the new set of neighbors from (6). When a previously neighboring node is no longer a neighbor, its risk potential reduces, as modeled by (7) and (8) combined in (9):

$$\begin{aligned} N_{i(t+1)}= f(x)\times r^{*}_r\times r^{*}_c \end{aligned}$$
(9)

where \(N_{i}\) is the ith neuron on the neural network, and \(r^{*}_c\) and \(r^{*}_r\) are the carrying risk and residual risk, respectively.
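
As an illustrative sketch of (7)-(9), the snippet below scales a neuron’s activity by the residual and carrying risks given Manhattan distances to the robot; the AOE radius and the helper names are assumptions made for illustration only.

```python
PHI = 3.0  # assumed area-of-effect radius (phi)

def residual_risk(d, fx, phi=PHI):
    """Eq. (7): residual risk at a cell a Manhattan distance d from the robot."""
    return max(d * fx, 0.0) / phi if d < phi else 1.0

def carrying_risk(d, phi=PHI):
    """Eq. (8): carrying risk for a simulated human a distance d from the robot."""
    return max(d - 1.0, 0.0) / phi if d < phi else 1.0

def social_activity(fx, d_cell, d_human):
    """Eq. (9): activity of a social-layer neuron scaled by both risks.
    fx is the value produced by the transfer function in Eq. (6)."""
    return fx * residual_risk(d_cell, fx) * carrying_risk(d_human)

# Example: a risk cell (fx = 1) two cells from the robot whose associated
# simulated human is only one cell away -> the risk is suppressed to 0.
print(social_activity(1.0, d_cell=2, d_human=1))
```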

Waypoint generation via combined layer

With both neural network models, the networks can be used simultaneously to determine the waypoints of the path plan. In each iteration of the algorithm, a new waypoint is determined as follows:

$$\begin{aligned} wp_{i+1} = max(N_{k\mu }+N_{k\nu }) \end{aligned}$$
(10)

where \(wp_{i+1}\) denotes the next waypoint and max(x) returns the neighbor position with the highest neural activity from its corresponding neuron in both networks. Let the social graph be denoted as \(\mu\) and the coverage graph as \(\nu\).

Fig. 4
figure 4

1 - The social layer (upper layer, \(\mu\)) and the coverage layer (lower layer, \(\nu\)) can be stacked to represent the same space on the actual map (M). 2 - Both corresponding network neurons react to the path plan position as described in (1)-(10). 3 - The social layer results in completely open-space neurons once there are no interest targets. For the social layer, interest targets refer to the human surroundings (neighboring neurons around a simulated human position). For the coverage layer, the open space refers to covered areas, while interest targets refer to uncovered areas.

At each point in time, the robot is assigned to the neuron that corresponds to its spatial position at its latest waypoint. Both \(\mu\) and \(\nu\) are computed according to the equations described in 4.3 and 4.4, as seen in Fig. 4. The algorithm flow is shown in Algorithm 1.

Algorithm 1
figure a

HFA-CCPP Algorithm.

From Algorithm 1, E, \(\alpha\), \(\beta\), R, and \(\epsilon\) are coefficients used in GBNN. The variables d, \(\phi\), \(r^{*}_r\), and \(r^{*}_c\) are used in the social layer. \(N_k\) is a list containing the neighbors for the selection of the next waypoint, which is computed using (10), as seen in line 8 of the algorithm. The map, M, represented by the \(\mu\) and \(\nu\) neuron networks, is updated along with \(r^{*}_r\) and \(r^{*}_c\). The algorithm iterates continuously until all open grids are covered (\(\nu \notin R^{+}\)). Open grids refer to the neurons with neutral or positive values.
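
To make the flow of Algorithm 1 concrete, a simplified sketch of the waypoint-selection loop is shown below; the neighbor function, the layer-update callables, and the termination test are placeholders assumed for illustration and stand in for the layer equations (1)-(9) described above.

```python
def neighbors(pos, shape):
    """4-connected neighbors of a grid position (illustrative choice)."""
    r, c = pos
    cand = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
    return [(i, j) for i, j in cand if 0 <= i < shape[0] and 0 <= j < shape[1]]

def hfa_ccpp(mu, nu, start, update_social, update_coverage):
    """Skeleton of the HFA-CCPP loop: at each step pick the neighbor with the
    highest combined activity of the social (mu) and coverage (nu) layers,
    Eq. (10), until no uncovered (positive) neuron remains in nu.
    mu and nu are 2D numpy arrays of neural activity; the two update
    callables stand in for the social- and coverage-layer equations."""
    path, pos = [start], start
    while (nu > 0).any():                       # open grids still uncovered
        mu = update_social(mu, pos)             # Eqs. (5)-(9)
        nu = update_coverage(nu, pos)           # Eqs. (1)-(4)
        # Eq. (10): the next waypoint maximizes the summed neural activity.
        pos = max(neighbors(pos, nu.shape), key=lambda p: mu[p] + nu[p])
        path.append(pos)
    return path
```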

Simulation

The proposed method is validated through simulations of path plan generation. The simulation setup and results are discussed in this section. The aim is to show evidence of the human-first approach and provide relative insights into its applications.

System set-up

The simulation is run as a Python script in Spyder 4.1.5, a scientific Python development environment, on a MacBook Pro (M1 chip, 16 GB RAM). The simulation experiment is two-fold: the operations are run twice under the same initial conditions (same simulated environment and initial positions of the robot and simulated humans). This process is repeated 1000 times for quantification and repeatability. In each iteration, we record 1 - the running time taken to achieve human-first behavior, 2 - the running time for complete coverage, and 3 - the number of waypoints generated in the path plan.

The running time for human-first behavior is qualified at the instant when the risk potential surrounding all simulated humans is reduced to 0 (\(\alpha _i = +E \notin \mu\)). Complete coverage is qualified at the instant when all uncovered areas are converted to covered areas (where \(N_{i} = 1 \notin \nu\)). The path plan generates waypoints from one node to another, and the total number of waypoints is logged during the simulations.

Fig. 5
figure 5

Various simulated environments used. Environments (A), (B), and (C) are provided to demonstrate the repeatability of the HFA-CCPP performance against GBNN.

In Fig. 5, environments A, B, and C are used to evaluate the performance of the path planner. Each simulation uses 3 simulated humans in the top-left quadrant of the graph and 1 simulated human in each of the other quadrants. The neural network variables E, R, \(\beta\), and \(\epsilon\) are set to 100, 2, 2, and 0.7, respectively.

Simulation results

Fig. 6
figure 6

Simulation results for human-first behaviors and complete coverage performances shown in time, and human-first to complete coverage proportionality by percent. The comparison is between the HFA-CCPP and GBNN algorithm.

In Fig. 6, GBNN and HFA-CCPP have taken 3.22 and 1.15 seconds for completing human-first behavior, and 4.85 and 5.52 seconds for complete coverage on average, respectively. The average percentage of time taken for human-first compared to complete coverage is 66.38% and 20.68%.

Fig. 7
figure 7

Simulation results for human-first behaviors and complete coverage performances shown in time, and human-first to complete coverage proportionality by percent. The comparison is between the HFA-CCPP and GBNN algorithms.

In Fig. 7, GBNN and HFA-CCPP have taken 4.00 and 1.88 seconds for completing human-first behavior, and 6.43 and 7.00 seconds for complete coverage on average, respectively. The average percentage of time taken for human-first behavior compared to complete coverage is 62.77% and 26.44%, respectively.

Fig. 8
figure 8

Simulation results for human-first behaviors and complete coverage performances shown in time, and human-first to complete coverage proportionality by percent. The comparison is between the HFA-CCPP and GBNN algorithms.

In Fig. 8, GBNN and HFA-CCPP have taken 5.20 and 1.17 seconds for completing human-first behavior, and 8.15 and 8.64 seconds for complete coverage on average, respectively. The average percentage of time taken for human-first behavior compared to complete coverage is 63.92% and 13.62%, respectively.

Discussion

In Figs. 6, 7 and 8, HFA-CCPP is, in most cases, quicker than GBNN at achieving human-first behavior but mostly slower in terms of complete coverage. In all simulations, both GBNN and HFA-CCPP successfully provided complete coverage.

Table 1 Summary Statistics.
Table 2 Summary Statistics.
Table 3 Summary Statistics.

Statistically, via a Z-test, the human-first and complete coverage performances exhibited by HFA-CCPP differ from those of GBNN at 99% confidence in all simulations. There are clear observations in the standard deviations of the human-first approach compared against GBNN across all simulated environments in Tables 1, 2 and 3. HFA-CCPP has a smaller standard deviation in human-first performance, while GBNN has a smaller standard deviation in complete coverage. One exception to this observation is the second test environment, where the data set experienced several spikes in the human-first times with HFA-CCPP. In this exception, HFA-CCPP spends a high percentage of the complete coverage time approaching the intended human-dense zones.
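
For reference, the two-sample Z-test used for this comparison can be sketched as follows; the standard deviations and sample sizes shown are placeholders for illustration, not the values reported in Tables 1, 2 and 3.

```python
from math import sqrt

def two_sample_z(mean1, std1, n1, mean2, std2, n2):
    """Two-sample Z statistic for the difference of two sample means."""
    return (mean1 - mean2) / sqrt(std1**2 / n1 + std2**2 / n2)

# Placeholder inputs: human-first times (s) for HFA-CCPP vs GBNN over 1000 runs.
z = two_sample_z(1.15, 0.20, 1000, 3.22, 0.60, 1000)
# Two-sided test at 99% confidence: reject equality of means if |z| > 2.576.
print(abs(z) > 2.576, round(z, 2))
```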

Fig. 9
figure 9

A 3-dimensional plot of the neural network of the combined layer in (a) and its illustrated numerical gradient in (b).

The numerical gradient of the combined neural network layer determines the behavior of the path plan. Fig. 9 shows the gradient of neural network values that enables the complete coverage behavior and the human-first approach. Each waypoint is determined by the previous position and its surrounding neuron values, which are derived solely from Eq. (1). Between iterations, the network always selects the neighboring neuron with the highest neural activity. The gradient of values across the neural network is attributed to the covered, uncovered, and obstacle representations on each neuron. In a case where all possible neighbours share the same magnitude, the next waypoint is selected based on the search-based bias. The emergent behavior yields complete coverage with priority given to human-dense areas.

It is important to note that GBNN is mostly deterministic in its step performance compared to HFA-CCPP. HFA-CCPP is largely probabilistic due to the randomly walking human simulation. In some cases, a simulated human also acts as an obstacle for the planner on the \(\mu\) layer and affects the neural activity propagation. Both planners’ performances vary across different environments, as the obstacles in the \(\nu\) layer also affect the propagation of neural activity. The elongated path from the top-left-hand corner of simulated environment B may have influenced the neural activity such that the path planner completed slower than expected on the occasions that HFA-CCPP performed badly in the human-first approach. HFA-CCPP has the smallest standard deviation for the human-first approach in an open space, which vividly demonstrates the effects of the environment.

Recommendations

In practice, a shorter computation time is preferred; however, the path plan must be biased towards human-activity areas to ensure a sensible implementation of the use case. Complete coverage would be purposeless if this bias were not ensured. The time lost thus becomes insignificant, as it is approximately 1 to 2 seconds on average. This can be improved by reducing the resolution of the coverage map and by avoiding deployments in long and narrow environments, as discussed.

From the simulations, GBNN takes up to 8.15 seconds for complete coverage based on its intended deployment method, while the proposed human-first approach takes at most 1.88 seconds. The approach can be computationally divided into a human-first coverage phase and a remaining coverage phase to maintain quick responsiveness within 1 to 2 seconds. The remaining coverage is not iteratively updated until the human-first coverage is completed, so the onboard computational resources required for real-time deployment can be reduced.

Hence, for optimal application, the proposed algorithm should be deployed in an area without isolated passageways. In most scenarios, the proposed method performs better than a typical GBNN despite its dependency on the applied environment. The proportion of time spent achieving the human-first bias, relative to the duration for complete coverage, is reduced by approximately a factor of 3.

Real-world experiment

The proposed method is demonstrated through experimental studies of the path planner’s capabilities in real life, and the implications of integrating the proposed path planner are discussed. The aim of this section is to validate its purpose in a real-life implementation of complete coverage with the human-first approach.

System set-up

The system is implemented on the mosquito-catching robot platform, Dragonfly, introduced in Sects. 3.1-3.2. The platform operates on the Robot Operating System (ROS Melodic) navigation stack with Oriental Motor control.

Fig. 10
figure 10

(a) Virtual representation of the map space and neuron states, (b) corresponding experiment space, (c) Example of Dragonfly operating in the experiment space.

The experimental test is conducted 3 times per algorithm, using two algorithms: HFA-CCPP and GBNN. The test environment is shown in the map in Fig. 10. The map is generated by the GMapping algorithm available in the Robot Operating System (ROS) and post-processed to reduce its resolution while preserving its occupancy; the post-processing simply aggregates correspondingly larger areas (i.e., the resolution is changed to 0.5 meters per pixel). The robot system operates with a 3-level navigation stack: task planning, global path planning, and local path planning. The proposed algorithm operates at the task planning level; conventional global and local path planners from ROS are used in the navigation stack, namely the A* algorithm and the Dynamic Window Approach, respectively.
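
As an illustration of how task-level waypoints from HFA-CCPP might be dispatched to the lower navigation layers, a minimal rospy sketch using the standard move_base action interface is shown below; the waypoint list, the map frame, and the fixed orientation are assumptions, as the paper does not detail this interface.

```python
import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

def send_waypoints(waypoints):
    """Send each (x, y) task-level waypoint as a move_base goal in the map frame."""
    client = actionlib.SimpleActionClient('move_base', MoveBaseAction)
    client.wait_for_server()
    for x, y in waypoints:
        goal = MoveBaseGoal()
        goal.target_pose.header.frame_id = 'map'
        goal.target_pose.header.stamp = rospy.Time.now()
        goal.target_pose.pose.position.x = x
        goal.target_pose.pose.position.y = y
        goal.target_pose.pose.orientation.w = 1.0  # assumed fixed heading
        client.send_goal(goal)
        client.wait_for_result()                   # block until the waypoint is reached

if __name__ == '__main__':
    rospy.init_node('hfa_ccpp_waypoint_dispatcher')
    # Hypothetical waypoints (meters, map frame) produced by the HFA-CCPP planner.
    send_waypoints([(1.0, 0.5), (1.5, 0.5), (1.5, 1.0)])
```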

In every execution of each test, the robot is initialized at a fixed position for fairness. The HDA is manually determined and provided to the robot. The experiments are conducted during the evening period in Singapore, 1700 hrs to 2000 hrs SGT, when most mosquitoes are found at this facility. Additionally, a fresh sticky pad is inserted into the lower chamber of the lure-and-catch mechanism at the start of each method’s experimentation and removed when the experiment iteration cycle ends.

The metric used to evaluate the performances of the state-of-the-art and the proposed algorithm is defined in Table 4.

Table 4 Performance Metric Definition Table.

Results

Fig. 11
figure 11

Scatter plot of 3 rounds of execution with the poses of the open spaces as reference, showing HFA-CCPP waypoint-keeping and docking performance. The orange circle represents the waypoint-keeping tolerance, while the red circle represents the docking tolerance at the specified location.

The results of complete coverage are shown in Fig. 11 and the corresponding accuracy in Fig. 12. The path-keeping accuracy is set to be within 0.4 meters and the docking accuracy within 0.2 meters; the robot kept its path and docking within these satisfactory levels. The path plan is deterministic, as the environment and the initial position are fixed. The running times for complete coverage in the 3 executions of HFA-CCPP are 11 minutes 59.735 seconds, 17 minutes 58.124 seconds, and 15 minutes 52.978 seconds, with respective average times per step of 18.455 seconds/step, 17.969 seconds/step, and 19.060 seconds/step. The path planner took 1 minute 29.089 seconds (5 steps), 1 minute 20.083 seconds (6 steps), and 1 minute 29.089 seconds (5 steps) to clear out the mosquito bite risk potential in its social layer.

Fig. 12
figure 12

Error plot of 3 rounds of execution with the poses of the open spaces as reference, showing HFA-CCPP waypoint-keeping and docking performance. The upper band limit represents the waypoint-keeping tolerance, while the lower band limit represents the docking tolerance.

Discussion

The experimental results show that both path planners are capable of complete coverage, and that the proposed planner achieves the human-first approach, exhibiting highly deterministic behavior in reducing the risk potential in human-dense areas by moving towards those specific areas. The area of the experiment zone was reduced because complete coverage takes a long time; the reduced area is sufficient to provide clarity and high repeatability for the proof of concept.

In the experiments, the two algorithms showed little difference in complete coverage performance, while the human-first performance of HFA-CCPP remained significantly better, comparing 1 minute 29 seconds (worst case for HFA-CCPP) to 1 minute 36 seconds (best case for GBNN); this is approximately 32.2% quicker than traditional GBNN. The variations in the robot’s running time show that the algorithm’s computation time plays a less significant role than the physical demand. The human-first approach benefits the human-dense area, which should remain the key priority in most practical cases.

In a large space, the robot might not have sufficient battery power to fulfil its role of complete coverage. The impact of such a robot would be reduced if it did not provide coverage in human-dense areas. Applicable scenarios include public parks or complicated facilities with the risk of mosquito threats. HFA-CCPP therefore benefits the application more than a pure complete coverage algorithm would, and this benefit applies to any existing complete coverage algorithm.

Regarding the integration of the proposed algorithm with the navigation stack, the lost-recovery behavior of the ROS navigation stack is triggered if localization is poor. No estimation of human-dense areas or human activity is performed by the proposed work: the human-dense areas are provided manually by the user, while the moving humans are simulated during path planning. This implementation provides a natural weaving path plan in human-dense areas without relying on live human detection, which is disruptive to motion execution, computationally intensive, and reduces the actual quality of localization. Hence, live human tracking is avoided, which simplifies implementation and adoption while preserving the proposed algorithm’s purpose of human-first path planning.

While a strategic location improves the capture rate, other factors such as weather and time of day also affect capture rates, and the mosquito-catching success rate depends on the presence of mosquitoes. However, the aim of the proposed work is to validate that the algorithm does not hinder or limit the lure-and-catch mechanism’s ability to capture mosquitoes. The mosquitoes captured reflect the protection provided by the robot during our experiment. The proposed framework does not adopt mosquito-seeking capabilities but lures and catches mosquitoes if they are present. Despite the availability of a lure-and-catch mechanism, the study lacks proper entomological controls for conclusive biological effects on the mosquito capture rate, and its scope does not extend to the factors behind capture success.

Lastly, in the deployment spaces, the experiment does not have to consider the actual presence of humans. This allows the proposed path planner to be deployed without complex visual classification of humans and pose prediction being active; without these functions constantly activated, computational resources and energy consumption are reduced, allowing the robot service to be deployed sensibly. Live human presence can be abstracted by simulating human activities in the designated HDA specified by the deployer. The aim of the algorithm is to reduce the mosquito population in human-populated areas rather than to chase or tail actual human movement.

Dragonfly has demonstrated a deterministic ability to prioritize the human-dense area first, in contrast to the original state-of-the-art GBNN, which clears the human-dense area probabilistically. For other practical considerations, such as deployment in a large area to maximize the return on investment for robot owners, the human-dense area must be covered first, before the robot’s energy is fully drained in the large compound.

Conclusion

This paper presents a novel complete coverage path planning scheme (HFA-CCPP), leveraging a modified Glasius bio-inspired neural network (GBNN) algorithm, for a mosquito-capturing robot named Dragonfly that can generate a coverage path closer to crowded areas. The proposed method adds an extra social layer on top of the existing GBNN to simulate social behaviours between the human activity zone and the robot. This paper also introduced the mechanical and system architecture of the developed mosquito-capturing robotic platform. The proposed algorithm has been systematically benchmarked against the traditional GBNN algorithm in both simulated and real-world environments. The proposed algorithm performs about 3 times better in terms of bringing the robot closer to humans as a priority, which is critical since mosquitoes tend to be closer to human-dense areas. Even though the proposed method is slower by about 1 second in performing the complete coverage, it can cover the crowded area 2.5 times faster. Such behaviour will help capture more mosquitoes in real-world operation, which is also validated in our real-time experiments. The method can be extended to other applications, such as drink-serving waiter robots and event or facility management, where humans are the key interest of the specified tasks. In both the simulated algorithm running times and the experimental robot running times, HFA-CCPP has demonstrated its priority for the human-dense environment, with complete coverage as its second priority.

Future work

There are several potential directions for future development of HFA-CCPP. First, it can be validated across a variety of application settings beyond mosquito control. Second, the human-first approach could be enhanced by incorporating a dynamic model in which the human-dense area evolves over time, supported by live human tracking for real-time local planning. Finally, lightweight behavioral proxies, such as ambient CO\(_2\) concentration levels, could be integrated to align the coverage strategy more closely with entomological modelling, thereby enhancing ecological realism and bridging the sim2real gap from the current standpoint.