Table 1 Inferences from the literature review.

From: Deep atrous context convolution generative adversarial network with corner key point extracted feature for nuts classification

Table columns: Methodology | Inference and advantage | Pitfalls

ML Methods:

Methodology:

• MLP ANN [2] and SVM [7] for cashew kernel classification.

• PCA [3,11] and decision tree [11] for almond classification.

• SVM [12], PCA with ANN [16], and random forest [33] for nut classification.

• Gradient boosting [20], random forest [20], and ANN with discriminant analysis [21,24] for hazelnut and chestnut [29] classification; SVM [23] for pine nut classification.

Inference and advantage:

• Achieved up to 93–94% accuracy for various nut types.

• Lightweight and efficient for smaller datasets.

• Suitable for feature-based classification with structured data.

Pitfalls:

• Require labeled data and are sensitive to class imbalance.

• Limited adaptability to new or noisy data.

• Lack automatic feature extraction.
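As a rough illustration of the feature-based, fully supervised setting these classical methods share, a nearest-centroid classifier over handcrafted measurements can be sketched in pure Python. This is a simplified stand-in for the cited SVM/PCA pipelines, not any paper's exact method; the feature vectors and class names below are invented:

```python
from math import dist

def nearest_centroid(train, labels, query):
    """Classify `query` by the nearest per-class mean of `train` vectors.

    train: handcrafted feature vectors (e.g. [length_mm, width_mm, hue] -
    illustrative features only); labels: class name per vector. Like the
    classical methods in the table, this needs fully labeled data and a
    skewed class can pull its centroid and hurt minority-class accuracy.
    """
    # Group the training vectors by class label.
    groups = {}
    for vec, lab in zip(train, labels):
        groups.setdefault(lab, []).append(vec)
    # Pick the class whose mean vector is closest to the query.
    best_lab, best_d = None, float("inf")
    for lab, vecs in groups.items():
        mean = [sum(col) / len(vecs) for col in zip(*vecs)]
        d = dist(mean, query)
        if d < best_d:
            best_lab, best_d = lab, d
    return best_lab

# Toy usage with two invented classes:
train = [[1.0, 1.0], [2.0, 2.0], [9.0, 9.0], [10.0, 10.0]]
labels = ["cashew", "cashew", "almond", "almond"]
print(nearest_centroid(train, labels, [1.5, 1.5]))
```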

Image Processing Methods:

Methodology:

• Otsu thresholding [4] for almond classification.

• Improved KNN [13], Otsu method [19], and Slime Mould Algorithm [28] for pistachio nut classification.

• Big Transfer model [25] for hazelnut classification.

• KMeans++ [34] and SVM with HOG [39] for nut type and areca nut [36,38] classification [35].

Inference and advantage:

• Achieved 85–97% accuracy depending on the dataset.

• Enable extraction of texture, color, and shape features.

• Effective for preprocessing and visual feature isolation.

Pitfalls:

• Dependent on image quality and lighting.

• Sensitive to noise and orientation.

• Limited scalability for large datasets.
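Of the techniques above, Otsu's method is simple enough to sketch directly: it selects the grayscale threshold that maximizes the between-class variance of the intensity histogram, separating foreground (e.g. a nut) from background. A minimal pure-Python version on a flat list of pixel values (no image I/O; real pipelines would apply this per image), which also shows why the result is sensitive to lighting, since the histogram shifts with illumination:

```python
def otsu_threshold(pixels, levels=256):
    """Return the Otsu threshold for integer pixel values in [0, levels)."""
    # Build the intensity histogram.
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))

    sum_bg = 0.0          # weighted intensity sum of the background class
    w_bg = 0              # background pixel count
    best_t, best_var = 0, -1.0
    for t in range(levels):
        w_bg += hist[t]
        if w_bg == 0:
            continue
        w_fg = total - w_bg
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        mean_bg = sum_bg / w_bg
        mean_fg = (sum_all - sum_bg) / w_fg
        # Between-class variance; Otsu picks the t that maximizes it.
        var_between = w_bg * w_fg * (mean_bg - mean_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Bimodal toy "image": dark background pixels vs. bright object pixels.
print(otsu_threshold([10] * 50 + [200] * 50))
```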

CNN Methods:

Methodology:

• Inception-V3 [1], ResNet50 [1], VGG-16 [1], YOLOv5 [5], and sequential CNN [37] for cashew kernel classification.

• Sequential CNN [4,8], Conv2D CNN [6], optimized deep CNN with flower pollination algorithm [9], DenseNet [10], EfficientNetB0 [10], MobileNet [10], MobileNetV2 [10], and NASNetMobile [10] for almond classification.

• AlexNet [14,27,31,40], VGG16 [14,16,27,30,31,40], ResNet20 [15,16,18,27,32,40], InceptionV3 [30,31,32,40], DenseNet [15,40], and adaptive CNN [41] for pistachio nut and peanut classification.

• EfficientNet [17] and InceptionV3 [17], ResNet50 with feature reduction [18], DL4J feedforward network [20], sequential CNN [22], and AlexNet [26] for hazelnut kernel classification.

Inference and advantage:

• Achieved 94–99% accuracy across datasets.

• Capable of automatic feature learning and multi-level representation.

• Highly generalizable with augmentation and transfer learning.

Pitfalls:

• Require large datasets and high computation.

• Prone to overfitting with small data.

• Sensitive to hyperparameter settings.
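The automatic feature learning that all of the CNN architectures above share rests on one primitive: sliding a small kernel over the image and summing elementwise products (2D cross-correlation, conventionally called convolution in deep learning). A minimal pure-Python sketch with a hand-set vertical-edge kernel; in the cited networks such kernels are learned from data rather than fixed, and millions of them stacked in layers are what drive the large-data and compute requirements:

```python
def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation of 2D lists `image` and `kernel`."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    # Slide the kernel over every position where it fits entirely.
    for r in range(ih - kh + 1):
        row = []
        for c in range(iw - kw + 1):
            s = sum(image[r + i][c + j] * kernel[i][j]
                    for i in range(kh) for j in range(kw))
            row.append(s)
        out.append(row)
    return out

# A hand-set Sobel-style kernel that responds to vertical edges;
# a CNN would learn such filters automatically during training.
sobel_x = [[-1, 0, 1],
           [-2, 0, 2],
           [-1, 0, 1]]
# Tiny "image" with a dark-to-bright vertical edge between columns 1 and 2.
step = [[0, 0, 1, 1],
        [0, 0, 1, 1],
        [0, 0, 1, 1]]
print(conv2d(step, sobel_x))
```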