Introduction

The Red Palm Weevil (RPW), Rhynchophorus ferrugineus, is a highly destructive insect that silently infests palm trees by tunnelling into their trunks, inflicting severe internal damage that frequently goes undetected until it is too late. Conventional detection methods such as visual observation, pheromone traps, and acoustic sensors are either delayed or insufficiently accurate, making early intervention challenging. This work is based on the hypothesis that thermal image anomalies, specifically the subtle heat variations produced by internal RPW metabolic activity, can be used to accurately detect early-stage infestations before external symptoms appear. To counter these limitations, this project proposes an automated detection system that combines thermal imaging with a Convolutional Neural Network (CNN) and sophisticated image feature extraction techniques[1].

Fig. 1

a RPW adult, b palm tree vulnerable to RPW infestation, c severe internal damage and decay caused by RPW larval activity.

Compared with existing RPW detection approaches that rely on acoustic sensors, pheromone traps, RGB images, or standalone machine learning models, the proposed framework introduces a distinct contribution by integrating thermal imaging with Convolutional Neural Network (CNN) based analysis in an IoT-enabled environment. Figure 1a, b and c show a coconut palm infested by the Red Palm Weevil. The thermal images are preprocessed and examined to obtain features such as entropy, edge density, contrast, and correlation; texture descriptors such as the Gray-Level Co-occurrence Matrix (GLCM) and Local Binary Pattern (LBP); Hu and Haralick moments; Histogram of Oriented Gradients (HOG) features; and thermal indicators such as hotspot patterns and mean intensity. These features are combined with a CNN to classify palm trees as RPW-infested or not. A custom CNN was chosen over lightweight models such as MobileNet and EfficientNet because thermal images contain low-texture patterns, and the shallower CNN demonstrated higher accuracy and lower computational cost, making it more suitable for Raspberry Pi based IoT deployment. The system is embedded in a Flask-based web platform, through which users can upload thermal images, view visualized processing outcomes, and receive real-time predictions. This early detection method not only improves the accuracy of pest diagnosis but also facilitates environmentally friendly agriculture by minimizing unnecessary pesticide application and allowing for targeted intervention[2]. The framework provides a non-invasive and automated tool for proactive RPW monitoring, enhancing efficiency in agricultural pest management and supporting environmentally friendly intervention strategies.

Traditional Red Palm Weevil (RPW) detection methods rely mostly on manual inspection, visual surveys, and pheromone traps, which tend to cause delayed and incorrect identification. They are exhausting, time-consuming, and highly dependent on the skill of field officers. They also have limited utility in the early stages of infestation, when the presence of the weevil is barely detectable[3]. Conventional methods also fail in large-scale monitoring situations because of their limited capacity to cover large areas and because RPW damage is complex in nature and often goes unnoticed until extensive damage has occurred. Conversely, contemporary methods, including the machine learning model used in this project, provide faster, more precise, and automated RPW detection from thermal images and extract significant features such as entropy, edge density, and shape complexity, which are more efficient and trustworthy for initial detection and large-scale monitoring. Several studies have proposed different techniques for detecting Red Palm Weevil (RPW) infestation in palm trees, highlighting the importance of early diagnosis[4]. The proposed detection mechanisms are either image processing-based[5] or acoustic sensing and processing-based[6],[7]. One study[7] proposed a machine learning-based approach for detecting the most prevalent palm tree diseases, such as RPW pests, leaf spots, and blight spots, based on a thermal image database. Various image processing methods were used to retrieve images, with a comparative analysis later conducted on Support Vector Machine (SVM) and VGG-based Convolutional Neural Network (VGG-CNN) classifiers. Their findings showed that while the SVM achieved 92.8% accuracy, the VGG-CNN recorded a 97.9% detection accuracy. Alsanea et al.[8] proposed a real-time image-based RPW pest detection and localization model.
Utilising a Region-based CNN (R-CNN) for localization and a CNN for feature extraction and classification, they achieved a 100% detection rate on the tested dataset. Al-Saqer and Hassan applied two supervised learning models, Scaled Conjugate Gradient and Conjugate Gradient with Powell/Beale restarts, to detect RPW from an image database. Their three-layer ANN models achieved accuracies of 93% and 93.5%, respectively. The shortcomings of existing detection algorithms are summarised in Table 1.

Table 1 Comparative literature review: strengths and limitations of existing methods.

To provide stronger comparative results, Table 1 highlights the strengths and limitations of prior RPW detection methods. However, a broader performance interpretation is essential to contextualize the novelty of the proposed system. Earlier methods such as visual surveys and pheromone traps, while low-cost, failed to support early-stage detection. Machine learning and deep learning models such as SVM, Random Forest, and VGG reported moderate accuracy, but they often suffered from computational complexity, high deployment cost, and limited scalability for real-time field applications. The shortcomings of traditional RPW detection methods, such as delayed diagnosis, high labor dependency, and reduced reliability during early infestation, make them time-consuming, error-prone, and ineffective at identifying internal damage in its initial stages. To address this research gap, the present work proposes a smart IoT-based thermal imaging approach designed for early and accurate detection of RPW infestation.

The novelty and main contributions of the proposed work are highlighted as follows:

  • The system is implemented on a Raspberry Pi 4B, providing a cost-effective, real-time, and scalable solution suitable for direct field deployment, an advantage over many existing laboratory-based approaches.

  • The proposed thermal-CNN framework demonstrates strong potential for practical in-field pest monitoring and can be extended to other agricultural bio-sensing and early diagnosis applications, making it a versatile tool for precision agriculture.

System framework

The architecture of the proposed Red Palm Weevil (RPW) detection system begins with an image acquisition stage, where thermal images of palm trunks are captured using a USB-based long-wave infrared thermal camera module interfaced with a Raspberry Pi. The images are then organised into a dataset with labels such as RPW-infested or healthy (Non-RPW) to support supervised learning, as shown in Fig. 2.

Fig. 2

Proposed methodology for RPW detection using image acquisition and classification.

Thermal camera specification

Thermal images were captured using the compact USB-type infrared thermal camera shown in Fig. 3. The device operates in the long-wave infrared (LWIR) spectral range of 8–14 μm, suitable for detecting biological heat anomalies inside palm trunks. The module provides a thermal resolution of 320 × 240 pixels, a temperature detection range of −4 °C to 300 °C, and USB 2.0 plug-and-play connectivity. The camera was calibrated using two-point calibration with ambient and warm reference surfaces, followed by temperature normalization to ensure consistent, reproducible thermal readings across environmental conditions. This calibration ensured high reliability of thermal signatures during in-field RPW detection. Table 2 presents the detailed specifications of the thermal camera.

Fig. 3

Thermal imaging camera.

Table 2 Specifications of the thermal camera.

In the pre-processing stage, thermal images are enhanced and standardised through resizing, noise removal, contrast and brightness adjustment, and data augmentation through rotation, flipping, or scaling to increase dataset diversity.
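The pre-processing steps above can be sketched with simple NumPy stand-ins. This is a minimal illustration, not the authors' implementation: nearest-neighbour resizing stands in for a library resize, a min-max stretch stands in for contrast adjustment, and the augmentation set uses flips and a 90° rotation.

```python
import numpy as np

def resize_nearest(img, out_h, out_w):
    """Nearest-neighbour resize (stand-in for a library resize call)."""
    h, w = img.shape
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    return img[rows][:, cols]

def stretch_contrast(img):
    """Min-max contrast stretch to the 0-1 range."""
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo + 1e-8)

def augment(img):
    """Simple augmentation set: identity, flips, and a 90-degree rotation."""
    return [img, np.fliplr(img), np.flipud(img), np.rot90(img)]

frame = np.random.rand(240, 320)          # one 320x240 thermal frame
frame = resize_nearest(frame, 224, 224)   # network input size
frame = stretch_contrast(frame)
batch = augment(frame)                    # 4 variants from one frame
```

In practice a denoising filter (e.g., median or bilateral) would also be applied before contrast adjustment, as the text notes.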

Ambient Temperature Normalisation:

As field temperatures vary with the time of day and with environmental heat sources, each thermal frame undergoes normalisation using the formula:

$$T_{N}=\frac{T_{pixel}-T_{ambient}}{T_{ref}-T_{ambient}}$$
(1)

In Eq. (1), Tambient is estimated through neighborhood averaging, Tpixel is the raw pixel temperature, and Tref is a warm reference temperature. This normalisation highlights minute temperature deviations caused by RPW larval movement. Although thermal images are unaffected by visible light, environmental factors such as sun exposure, bark heating, and wind-induced temperature fluctuation introduce noise. The attribute selection step identifies and extracts the most relevant features, such as color, texture, and shape-based features, to improve classification accuracy. The dataset is then split into training and testing subsets, typically with 70% of the data for training and 30% for testing and validation. A CNN model is then trained on the training subset. The trained model performs RPW detection on unseen images by analyzing the extracted features and predicting the presence or absence of RPW, producing the final label of RPW or Non-RPW to indicate the status of the palm. The proposed detection system integrates image processing and deep learning for accurate identification of RPW activity. It consists of four main stages: data acquisition, feature extraction, model training, and model evaluation and classification, as shown in Fig. 2.
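Equation (1) can be applied per frame as below. This is a sketch under one stated assumption: the neighborhood averaging for Tambient is approximated here by the mean of the frame border, since the exact neighborhood used is not specified in the text.

```python
import numpy as np

def normalize_frame(T, T_ref):
    """Apply Eq. (1): TN = (T_pixel - T_ambient) / (T_ref - T_ambient).
    T_ambient is estimated by neighbourhood averaging; the frame-border
    mean is used here as a simple stand-in (assumption)."""
    border = np.concatenate([T[0, :], T[-1, :], T[1:-1, 0], T[1:-1, -1]])
    T_amb = border.mean()
    return (T - T_amb) / (T_ref - T_amb)

T = np.full((240, 320), 28.0)   # trunk surface at an ambient 28 degC
T[100:110, 150:160] = 30.5      # subtle hotspot from larval activity
TN = normalize_frame(T, T_ref=40.0)
# Background pixels map to 0; the hotspot stands out as a positive value.
```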

Data acquisition

The image data were recorded in the field. A locally maintained dataset comprising approximately 6000 RPW and Non-RPW thermal images is used for training and testing, enabling strong performance. RPW thermal image samples were extracted from test databases containing both infested and non-infested tree recordings. Thermal images were captured at a fixed sensor-to-trunk distance of 1.2–1.5 m during early morning (6:30–8:30 AM) to ensure stable ambient conditions. Images of Red Palm Weevil (RPW) and Non-RPW activity were collected from three different trunk locations of coconut trees showing potential signs of infestation, with the kind support of Mr. Thangadurai, on a 17-acre coconut grove in Orathanadu, Thanjavur District. Figures 4a, b and c and 5a, b and c show thermal images of an RPW-infested trunk in axial and bottom views.

Fig. 4

Placement of the thermal camera at three different locations. a On the tunnel opening of the trunk. b On the leaf axis. c On the crown of the tree trunk.

Fig. 5

Capture of thermal camera images at three different locations. a On the tunnel opening of the trunk. b On the leaf axis. c On the crown of the tree trunk.

Feature extraction

The raw thermal images were preprocessed and filtered to enhance their quality before feature analysis. This stage extracts important features such as entropy, edge density, contrast, correlation, energy, homogeneity, and shape complexity, along with texture features such as Haralick descriptors and HOG. These features are critical for detecting minute differences between healthy and infested palms.
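Two of the features named above, entropy and edge density, can be computed directly from a normalized grey-level image. The definitions below are common ones, offered as a sketch rather than the authors' exact formulation; texture descriptors such as GLCM and Haralick features would typically come from a library such as scikit-image.

```python
import numpy as np

def entropy(img, bins=64):
    """Shannon entropy of the grey-level histogram (in bits)."""
    hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def edge_density(img, thresh=0.1):
    """Fraction of pixels whose gradient magnitude exceeds a threshold."""
    gy, gx = np.gradient(img)
    mag = np.hypot(gx, gy)
    return float((mag > thresh).mean())

img = np.zeros((64, 64))
img[24:40, 24:40] = 1.0            # a warm square on a cool background
feats = {"entropy": entropy(img), "edge_density": edge_density(img)}
```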

Model training

The extracted features are used to train a machine learning model, a convolutional neural network (CNN), that can learn intricate patterns from the features[14]. The CNN was trained on a curated dataset of 6000 thermal images to identify characteristic temperature irregularities and texture deviations present in infested palms. The dataset was split into 70% training, 15% validation, and 15% test sets, using random sampling without stratification.

Model evaluation

The trained model is tested and validated against classical RPW detection techniques to ensure its accuracy and resilience across infestation stages. Model performance is measured with various metrics, including accuracy, precision, recall, and F1-score. Using the extracted features, the model discriminates efficiently between infested and healthy palms and provides rapid, accurate detection. This AI-based method facilitates early identification and can be used for bulk monitoring in agronomic settings.

RPW detection pipeline using CNN

The proposed CNN architecture for RPW detection from thermal images comprises several key components. The input layer accepts pre-processed thermal images of size 224 × 224 × 1. Convolutional layers then perform feature extraction by applying multiple learnable filters to detect local patterns such as temperature anomalies. The model consists of four convolutional blocks, each containing a Conv2D layer followed by ReLU activation and max-pooling. The first block uses 32 filters, the second 64, the third 128, and the fourth 256, all with a kernel size of 3 × 3. To reduce over-fitting, a dropout layer (rate = 0.3) is added after the third and fourth blocks, as shown in Figs. 6 and 7.
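The four-block layout can be sanity-checked with simple arithmetic. The sketch below assumes "same" padding and 2×2 max-pooling after each block (these details are not stated explicitly in the text); under those assumptions it counts the convolutional weights and tracks the spatial size from 224 down to the final feature map.

```python
def conv_params(k, c_in, c_out):
    """Weights plus biases of a Conv2D layer with a k x k kernel."""
    return k * k * c_in * c_out + c_out

filters = [32, 64, 128, 256]
c_in, size, total = 1, 224, 0
for c_out in filters:
    total += conv_params(3, c_in, c_out)
    size //= 2          # each 2x2 max-pool halves the spatial size
    c_in = c_out
# The final feature map is size x size x 256, i.e. 14 x 14 x 256.
```

Under these assumptions the convolutional blocks account for roughly 0.39 million parameters; the balance of the reported 1.32 million would sit in the fully connected head, whose exact size depends on how the 14 × 14 × 256 map is reduced before classification.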

Fig. 6

Residual learning architecture with strided convolutions and fully connected classifier.

A ReLU activation function introduces non-linearity after each convolution, enabling the network to learn complex patterns. Pooling layers reduce the spatial dimensions of the feature maps while retaining the most prominent features. The fully connected (FC) layer maps the high-level learned features into a lower-dimensional output. A softmax layer converts the raw logits into class probabilities for RPW or Non-RPW, and finally the classification layer outputs the predicted class, indicating whether the palm tree is infested or healthy. The total number of trainable parameters in the model is 1.32 million, making it lightweight and suitable for real-time deployment.

Fig. 7

CNN Architecture for RPW/Non-RPW Image Classification.

The following essential elements make up the CNN architecture for RPW detection from thermal images:

Input layer

Accepts preprocessed thermal images of size 224 × 224 × 1.

Convolutional layers

Apply several learnable filters to extract features such as temperature anomalies.

Activation function (ReLU)

After each convolution, non-linearity is introduced: f(x) = max(0, x).

Pooling layers (max pooling)

Reduce the spatial dimensions of the feature maps while keeping the most prominent features.

Fully connected (FC) layer

Converts the learned high-level features into a lower-dimensional output space.

Softmax layer

Transforms raw outputs into RPW or Non-RPW class probabilities.

Classification layer

Produces the final class prediction.

The pre-processed thermal images are fed into a specialized convolutional neural network (CNN) designed to learn hierarchical feature representations of the input data. The convolutional layers recognize regional spatial patterns, such as thermal anomalies, textural inconsistencies, and contour features indicative of pest infestation. Pooling layers, typically max pooling, reduce the spatial dimensions of feature maps while retaining dominant features[15]. The resulting feature maps are then flattened and passed through fully connected layers to learn high-level abstractions, with ReLU activation functions applied after each convolution to introduce nonlinearity. The final output layer, using either a sigmoid or softmax classifier depending on whether the task is binary or multi-class, provides the probability of RPW infestation in the image[16].

Fig. 8

Proposed CNN framework for Image Classification.

Figure 8 shows the proposed CNN flowchart. The CNN is trained on a curated dataset of 6000 thermal images (4500 RPW and 1500 Non-RPW) using binary cross-entropy loss, the Adam optimizer with a learning rate of 0.0001, a batch size of 32, and 50 epochs. To improve model interpretability, Grad-CAM (Gradient-weighted Class Activation Mapping) was employed to visualize the regions that contribute most to the CNN's decision.

Results and discussion

The proposed RPW detection model processes thermal image data through a spectrogram-based feature extraction pipeline and classifies the samples using a Convolutional Neural Network (CNN). The model is trained using categorical cross-entropy loss and optimized with the Adam algorithm. The deep learning framework effectively distinguishes RPW-infested samples from non-infested ones, thereby supporting early pest monitoring and control. Table 3 presents the dataset statistics, and the class distribution is illustrated in Fig. 9.

Fig. 9

Dataset Class Percentages.

Table 3 Dataset Statistics.

Data acquisition and filtering

Thermal images were captured under controlled environmental conditions to ensure consistency and reliability. The dataset includes thermal signatures from both RPW-infested and healthy palm trees[16]. These images form the foundation for subsequent pre-processing steps, including filtering and segmentation, as illustrated in Figs. 10 and 11.

Fig. 10

Thermal-based RPW classification result.

Fig. 11

Thermal image enhancement and segmentation result.

This work investigates the performance of an artificial intelligence-based detection system for recognizing images of the Red Palm Weevil (RPW) and classifying palm trees as healthy (Non-RPW) or infested (RPW). To present a balanced assessment, the RPW classification model was trained on a dataset consisting of both RPW and Non-RPW samples. Table 4 shows infested and non-infested thermal images of coconut palm. A brood of Red Palm Weevil larvae was discovered deep inside a coconut trunk, indicating an advanced stage of infestation, and thermal imaging successfully identified the brood within the infested palm region. RPW larvae feed continuously on the internal tissues, causing severe structural damage to the tree, so early detection of a brood helps prevent the spread of infestation to nearby palms. The proposed CNN-based model is assessed using standard performance metrics: accuracy, precision, recall, and F1-score. Complete details of the dataset and the evaluation outcomes of the proposed model are elaborated in the following sections[17].

Table 4 Temperature variation analysis across RPW life stages and stem conditions.

Evaluation results of deep learning method

In the current research work, the raw thermal images are subjected to a sequence of preprocessing operations, i.e., resizing, conversion into grayscale images, and normalization. The preprocessed thermal images are subsequently input into a Convolutional Neural Network (CNN) to ascertain the presence or absence of RPW infestation. The CNN structure is designed to automatically identify spatial features such as thermal intensity patterns and temperature anomalies for RPW infestation.

Table 5 Quantitative analysis of CNN model metrics.

Table 5 reports the model's classification report and accuracy scores. The CNN achieved 98.5% overall accuracy, proving exceedingly efficient at distinguishing RPW from Non-RPW. Visual inspection and classical image-based diagnostic techniques for RPW infestation have important limitations: late diagnosis, usually only after extensive internal damage has already been done; inability to provide real-time data in field conditions; and a high incidence of false positives due to visually similar diseases or abiotic effects such as sunlight-induced heating. These limitations make classical visual examination ineffective for the early intervention that is imperative to control RPW infestation before extensive, irreversible harm is inflicted on the tree's vascular system. The proposed CNN overcomes these limitations by learning temperature-based structural features. To further strengthen the evaluation, additional performance analyses were incorporated. A confusion matrix was generated to visualize the class-wise prediction distribution, clearly highlighting the true positives, true negatives, false positives, and false negatives for both RPW and Non-RPW categories. Furthermore, the model's discriminative ability was quantified using the Receiver Operating Characteristic (ROC) curve and the corresponding Area Under the Curve (AUC), achieving an AUC value of 0.99. A precision-recall curve was also included to provide deeper insight into model performance under class-imbalance conditions. To ensure robustness, 5-fold cross-validation was performed. Additionally, an error analysis was conducted to identify common false positives and false negatives. False negatives mainly occurred in images with weak thermal contrast, while false positives were mostly due to sun-exposed regions or environmental heat reflections. Representative misclassified samples were examined to guide future dataset improvements.
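The metrics reported above all derive from the four confusion-matrix counts. A minimal sketch, using illustrative counts rather than the paper's actual confusion matrix:

```python
def metrics(tp, fp, fn, tn):
    """Precision, recall, F1, and accuracy from confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall    = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return precision, recall, f1, accuracy

# Hypothetical counts chosen only to illustrate the computation:
p, r, f1, acc = metrics(tp=880, fp=7, fn=11, tn=902)
```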

This disparity between validation accuracy and validation loss arises because the two metrics assess different facets of model performance. Accuracy only shows the percentage of correctly classified samples, while loss measures the confidence of the model's predictions. In this research, a number of predictions were made with probability spread across classes, even though 98.5% of the samples were correctly classified; correct but low-confidence predictions incur a high loss, which explains the elevated validation loss despite the high accuracy. Thermal imaging has therefore emerged as a promising non-invasive approach: RPW activity produces localized internal metabolic heat that yields minute surface-level thermal abnormalities. These are generally invisible to the naked eye but are detectable with high-resolution infrared detectors. Much previous image-based work used hand-crafted features or generic machine learning classifiers, which could not generalize across a wide range of thermal environments[18]. Some more recent approaches used deep learning and transfer learning models but were over-fitted or not specifically optimized for thermal data. Table 6 shows the comparison of different classifiers.
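The accuracy-versus-loss disparity can be reproduced with a toy binary cross-entropy computation: two sets of predictions with identical (perfect) accuracy but very different confidence produce very different losses.

```python
import numpy as np

def bce(y, p, eps=1e-12):
    """Mean binary cross-entropy for labels y and predicted probabilities p."""
    p = np.clip(p, eps, 1 - eps)
    return float(-(y * np.log(p) + (1 - y) * np.log(1 - p)).mean())

y = np.array([1, 1, 1, 1])
confident = np.array([0.99, 0.99, 0.99, 0.99])   # correct and confident
hesitant  = np.array([0.55, 0.55, 0.55, 0.55])   # correct but barely

acc_conf = ((confident > 0.5) == y).mean()   # both accuracies are 1.0,
acc_hes  = ((hesitant  > 0.5) == y).mean()   # yet the losses differ sharply
```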

Table 6 Performance comparison of different classifiers.

In contrast to these approaches, our method employs a specifically tailored Convolutional Neural Network (CNN) trained end-to-end on thermal palm stem images to automatically learn the low-level and high-level thermal features typical of RPW infestation. Pre-processing techniques such as bilateral filtering, histogram equalization, and Otsu's thresholding were employed to enhance image quality and separate the region of interest[19]. During training, the precision (99.2%) was higher than the recall (87.6%), indicating a mild class imbalance in the dataset. Although the model achieved a validation accuracy of 98.5%, the validation loss (0.97) was relatively high. A 5-fold cross-validation procedure was used, in which the dataset is randomly shuffled, divided into five equal subsets, and iteratively trained on four folds while the remaining fold is used for validation. The process repeats five times, and the final performance is computed as the average of all evaluation metrics.

Overview of the process

Shuffle the dataset to ensure a random distribution of samples, then split it into five equal folds[20].

Iteration 1

Train on folds 1,2,3,5 and validate on fold 4.

Iteration 2

Train on folds 1, 3, 4, 5 and validate on fold 2.

Iteration 3

Train on folds 1,2,4,5 and validate on fold 3.

Iteration 4

Train on folds 1, 2, 3, 4 and validate on fold 5.

Iteration 5

Train on folds 2,3,4,5 and validate on fold 1.

Compute the performance metrics from all five iterations and average the results to obtain the final model performance. The proposed CNN model outperformed the baseline classifiers (Random Forest and SVM) and the popular pre-trained VGG16 model in precision, recall, and F1-score. Moreover, model interpretability was improved using the Grad-CAM technique, which indicated the particular thermal areas contributing to the classification decision[21]. This allowed visual validation and made the model's outputs more explainable to plant pathologists and agricultural experts. Figure 12 shows the confusion matrix of the classifier. The ROC curves for the RPW and Non-RPW classes are shown in Figs. 13 and 14. Figure 15 shows a bar chart comparing model performance for RPW detection.
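The shuffle-and-split procedure enumerated above maps directly onto a few lines of NumPy; each fold serves as the validation set exactly once.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, k = 6000, 5
idx = rng.permutation(n_samples)        # step 1: shuffle
folds = np.array_split(idx, k)          # step 2: five equal folds of 1200

for i in range(k):                      # five iterations
    val = folds[i]
    train = np.concatenate([folds[j] for j in range(k) if j != i])
    # train the model on `train`, evaluate on `val`, accumulate metrics
```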

Fig. 12

Confusion Matrix of CNN Classifier.

Fig. 13

ROC curve for RPW thermal image.

Fig. 14

ROC curve for Non RPW thermal image.

Raspberry Pi experimental setup

The Raspberry Pi is powered by the standard 5 V supply through its USB-C port[22]. This provides enough voltage and current to run the Raspberry Pi 4B along with all attached peripheral devices. The USB mouse and keyboard used to interact with the system connect to the Raspberry Pi's USB ports, as shown in Fig. 16. Model inference on the Raspberry Pi 4B required a compact model size (≈ 5–25 MB) with memory optimized using INT8 quantization (8-bit integer values), consumed about 4–8 W, and raised the device temperature by 10–30 °C.
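The memory saving from INT8 quantization can be illustrated with a NumPy simulation. This is a generic symmetric per-tensor scheme, offered as a sketch; a deployment toolchain such as TensorFlow Lite would handle the details (per-channel scales, zero points, calibration) differently.

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor INT8 quantization of float32 weights."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.random.default_rng(1).normal(size=10_000).astype(np.float32)
q, s = quantize_int8(w)
err = np.abs(dequantize(q, s) - w).max()
# INT8 storage is one quarter of float32: 10 KB here instead of 40 KB,
# which is what shrinks a float model toward the 5-25 MB range.
```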

Fig. 15

Performance benchmark of RPW detection algorithms.

The input devices enable the user to navigate the graphical user interface, enter commands, and manage the applications running on the Pi, the red palm weevil detection system among them[23]. A cell phone or a USB camera is also plugged into one of the USB ports. This device is an important component of the project: it either works as a camera to take photos of palm trees and possible weevil infestations or is used to transfer previously taken photos for processing[24]. The pictures are then processed by machine learning algorithms, such as Convolutional Neural Networks (CNNs) or Support Vector Machines (SVMs), that run directly on the Raspberry Pi 4B[25]. The Raspberry Pi's HDMI port is used for video output. Because the monitor is only VGA-capable, an HDMI-to-VGA adapter is used in the connection, allowing the Raspberry Pi's desktop environment to be displayed on the monitor, where image processing output, detection results, and system logs can be viewed in real time[26]. In total, this hardware setup converts the Raspberry Pi into a small but powerful embedded AI system that can identify red palm weevil infestations through image analysis, as shown in Figs. 17 and 18.

Fig. 16

Hardware integration and test setup using Raspberry Pi 4B.

Fig. 17

Physical implementation of the Raspberry Pi 4B-based detection system.

In addition to the physical setup, quantitative hardware performance metrics are essential for validating the practical feasibility of the Raspberry Pi based implementation. Important parameters such as inference latency (the time taken from image input to classification output), model size in megabytes, memory footprint during execution, and the energy consumption or heat generation of the Raspberry Pi during continuous inference should be evaluated and reported[27]. The inference latency, defined as the time interval between acquiring an input image and generating the predicted label, was recorded on the Raspberry Pi 4B. The non-quantized model required an average of ~412 ms per image, corresponding to an effective throughput of approximately 2.4 FPS. These measurements strengthen the claim that the system is suitable for real-time field deployment.
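A latency measurement of this kind is typically taken by timing repeated forward passes after a few warm-up runs. A minimal sketch (the `dummy_infer` stand-in below is hypothetical; the real measurement would wrap the CNN's forward pass):

```python
import time

def measure_latency(infer, image, warmup=3, runs=20):
    """Average per-image inference latency (ms) and effective FPS."""
    for _ in range(warmup):
        infer(image)                     # discard warm-up runs
    t0 = time.perf_counter()
    for _ in range(runs):
        infer(image)
    ms = (time.perf_counter() - t0) * 1000.0 / runs
    return ms, 1000.0 / ms

# Stand-in for the real CNN forward pass (assumption):
dummy_infer = lambda img: sum(img)
latency_ms, fps = measure_latency(dummy_infer, list(range(1000)))
```

With the real model, an average latency of ~412 ms per image corresponds to 1000/412 ≈ 2.4 FPS, matching the figure reported above.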

Fig. 18

Hardware and software testbed for RPW detection.

Figure 19a and b show the verification of Red Palm Weevil infestation through stem dissection. The infestation status of each palm was verified using a systematic and reliable ground-truth process. First, trained agricultural specialists examined the trees for visible signs of Red Palm Weevil (RPW) activity, including boreholes, frass deposits, crown collapse, and early tissue decay. When visual inspection suggested probable infestation, selective stem dissection was carried out to directly confirm the presence of larvae, pupae, or internal feeding tunnels. In cases where destructive sampling was not possible, non-invasive acoustic monitoring was used to detect the distinctive feeding and movement sounds produced by RPW larvae inside the trunk.

Fig. 19

Validation of infestation status. a Verification of Red Palm Weevil Infestation through stem dissection. b Dissected trunk containing emerging grub from hidden holes.

Real-time Red Palm Weevil (RPW) detection is integrated into the experimental setup. Thermal images are shown on the monitor, demonstrating the detection outcomes of the proposed deep learning based model, which blends image classification with thermal feature extraction[28]. The Raspberry Pi acts as the data acquisition module, sending real-time image data (taken by a connected camera) to the display via HDMI and USB[29]. It processes the data using the CNN model to classify images and display outcomes such as "Detected RPW". The Raspberry Pi is also connected to a smartphone, which acts as a portable input source for capturing RPW-related images in field conditions.

Conclusion

This study introduced a CNN-based deep learning framework for early detection of Red Palm Weevil infestations using thermal imaging. Unlike conventional image-based approaches that depend on visible symptoms, our model leverages raw thermal data, processed and enhanced through a dedicated pipeline, to identify temperature anomalies caused by RPW larval activity before severe damage occurs. Carefully curated thermal images, along with extensive pre-processing and augmentation, enabled the training of a robust CNN architecture that achieved 98.5% accuracy, outperforming both traditional machine learning and popular deep learning baselines. By eliminating reliance on post-infestation symptoms and enabling early-stage intervention, this thermal imaging-based approach offers a critical advantage in managing RPW outbreaks. It supports site-specific pest control strategies, reducing dependence on blanket pesticide applications and promoting environmentally sustainable agricultural practices. A key finding of this improved RPW detection framework is that it reduced false positives compared with acoustic algorithms. The proposed model achieved a high accuracy of 98.5%, demonstrating strong detection capability. However, the study has limitations. Model validation was conducted only on a restricted palm species and within a single geographic region, which may affect generalizability. Additionally, environmental factors such as humidity, surface moisture, and seasonal temperature variations may influence thermal patterns and detection consistency. These constraints highlight the need for broader validation and environmental adaptation.

Limitation and future scope

Although the proposed framework demonstrated promising results under controlled testing conditions, several limitations remain. The performance of thermal-based detection may vary with external environmental factors such as humidity, ambient temperature, direct sunlight exposure, and seasonal fluctuation, which can influence the thermal contrast between healthy and infested palm trees. Additionally, the dataset used in this study was limited to a specific palm species and geographic region; further validation across multiple palm varieties and diverse climatic conditions is therefore required to ensure generalizability. Future research may explore the integration of multimodal sensing, combining thermal imaging with acoustic signatures or biochemical markers, to enhance early detection sensitivity. Incorporating the model into UAV platforms or IoT-enabled smart farming systems may support scalable, automated, and continuous monitoring at the plantation level. Expanding the dataset, improving environmental compensation algorithms, and optimizing the model for low-power devices will further strengthen the applicability of the system for precision agriculture. The proposed RPW detection system aligns with key Sustainable Development Goals (SDGs), particularly SDG 2 (Zero Hunger), by protecting crops and preventing yield losses, and SDG 12 (Responsible Consumption and Production), by enabling targeted and sustainable pest management. Through early and non-destructive detection, the system contributes to sustainable agricultural practices and long-term environmental protection.