Table 2 Notations used in the proposed system.

From: AI-driven drone technology and computer vision for early detection of crop disease in large agricultural areas

\(I(x, y, \lambda)\): Input image, where \(x\) and \(y\) are the spatial dimensions and \(\lambda\) is the wavelength.

\(I_{filtered}(x, y)\): Pre-processed (filtered) image after noise reduction.

\(G(u, v)\): Gaussian kernel used for noise reduction.

\(k\): Kernel size of the Gaussian filter.

\(F_{i,j}^{l}\): Feature map value at position \((i, j)\) in the \(l\)-th CNN layer.

\(W_{m,n}^{l}\): Weights of the convolution kernel in the \(l\)-th CNN layer.

\(b^{l}\): Bias term in the \(l\)-th CNN layer.

\(\sigma\): Activation function (e.g., ReLU).

\(Q, K, V\): Query, key, and value matrices used in the Transformer attention mechanism.

\(d_{k}\): Dimension of the key vectors in the attention mechanism.

\(Attention(Q, K, V)\): Attention output combining the query, key, and value matrices.

\(z_{c}\): Logit (raw output) for class \(c\).

\(p(C = c \mid F)\): Probability of class \(c\) given the extracted features \(F\), obtained from the logits \(z_{c}\).

\(\widehat{C}\): Predicted class, i.e., the class with the highest probability.

\(S(t)\): Sensor data at time \(t\), consisting of the readings \(\{s_{1}(t), s_{2}(t), \dots, s_{n}(t)\}\).

\(s_{i}(t)\): Reading from the \(i\)-th sensor at time \(t\).

\(D(x, y)\): Disease severity at spatial location \((x, y)\).

\(d_{i}\): Disease intensity at the \(i\)-th data point.

\(w_{i}\): Interpolation weight for the \(i\)-th data point.

\(N\): Total number of data points used in spatial interpolation.
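The last four symbols (\(D(x, y)\), \(d_{i}\), \(w_{i}\), \(N\)) describe spatial interpolation of scattered disease measurements into a severity map. A minimal sketch of one common weighting scheme, inverse-distance weighting, is shown below; the function name and the power parameter `p = 2` are illustrative assumptions, not taken from the paper:

```python
import math

def idw_severity(x, y, points, p=2.0):
    """Estimate disease severity D(x, y) from N scattered samples.

    points: list of (x_i, y_i, d_i) tuples, where d_i is the disease
    intensity at the i-th data point.  The interpolation weight w_i is
    the inverse of the distance to the i-th point raised to the power p.
    """
    numerator = 0.0
    denominator = 0.0
    for xi, yi, di in points:
        dist = math.hypot(x - xi, y - yi)
        if dist == 0.0:
            return di            # query coincides with a sample point
        wi = 1.0 / dist ** p     # interpolation weight w_i
        numerator += wi * di
        denominator += wi
    # D(x, y) = sum_i(w_i * d_i) / sum_i(w_i)
    return numerator / denominator

# Example: two samples on a line; the midpoint is equidistant from both,
# so its estimate is the plain average of the two intensities.
samples = [(0.0, 0.0, 0.2), (2.0, 0.0, 0.8)]
print(idw_severity(1.0, 0.0, samples))  # → 0.5
```

At a sample location the function returns the measured intensity exactly, which keeps the interpolated map consistent with the ground-truth points.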