Table 4 Algorithm for SpinachXAI-Rec: AI-based spinach freshness classification and recommendation.
Stage | Algorithmic logic |
---|---|
Input Acquisition | Load raw image dataset D with 4005 images labeled across 6 spinach classes (Fresh/Non-Fresh × Malabar, Red, and Water Spinach) |
Data Augmentation | Apply augmentation techniques: RandomRotate90, Flip, BrightnessContrast, GaussNoise, HueSaturation, ElasticTransform to produce dataset D′ |
Preprocessing & Splitting | Resize all images to 224 × 224 pixels; convert color space from BGR to RGB; normalize intensity values; split D′ into Dtrain and Dtest in a 70:30 ratio
CNN Model Training | Train ResNet50, EfficientNetB0, and DenseNet121 using Dtrain; evaluate performance on Dtest using accuracy and loss metrics
Best CNN Selection | Select DenseNet121 as base CNN model MCNN based on superior classification accuracy and convergence stability |
Feature Extraction | Extract feature embeddings F from the bottleneck layer of MCNN for all samples in D′ |
Transformer Fusion | Train three models (XGBoost, Swin Transformer, and ViT-B/16) on features F; evaluate their performance for fine-grained spinach classification
Best Hybrid Model Selection | Select DenseNet121 + ViT-B/16 as final hybrid model Mhybrid based on comparative performance metrics |
Multiclass Classification | Train a Multiclass SVM classifier on the output embeddings of Mhybrid and use it for final class prediction across the six spinach categories
Explainability Integration | Apply GradCAM++ to visualize important regions from DenseNet121 layers; use LIME to generate local explanation maps for final predictions |
Clinical Recommender System | Define rule: IF class is ‘Non-Fresh’ or confidence ≤ 0.60 → Not Eatable; ELIF 0.60 < confidence ≤ 0.85 → Eatable with Caution; ELSE → Eatable |
Final Output | Return predicted class label, confidence score, GradCAM++ and LIME visualizations, and final eatability decision
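The stages in Table 4 can be made concrete with short code sketches. The transform names in the Data Augmentation row match the Albumentations API, so that stage is sketched first; this is a minimal illustration, and the augmentation probabilities and number of augmented copies per image are placeholders rather than settings reported in the table.

```python
# Sketch of the Data Augmentation stage, assuming the Albumentations library.
# Probabilities and the number of copies per image are illustrative placeholders.
import cv2
import albumentations as A

augment = A.Compose([
    A.RandomRotate90(p=0.5),
    A.Flip(p=0.5),                       # horizontal/vertical flip
    A.RandomBrightnessContrast(p=0.5),   # "BrightnessContrast" in Table 4
    A.GaussNoise(p=0.3),
    A.HueSaturationValue(p=0.3),         # "HueSaturation" in Table 4
    A.ElasticTransform(p=0.3),
])

def augment_image(path, n_copies=3):
    """Return n_copies augmented variants of one raw image from dataset D."""
    image = cv2.imread(path)             # OpenCV loads images in BGR order
    return [augment(image=image)["image"] for _ in range(n_copies)]
```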
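The Preprocessing & Splitting stage can be sketched with OpenCV and scikit-learn. The resize, BGR to RGB conversion, intensity normalization, and 70:30 ratio follow Table 4; the stratified split and fixed random seed are assumptions.

```python
# Sketch of the Preprocessing & Splitting stage. Stratification and the random
# seed are assumptions; the resize, color conversion, normalization, and 70:30
# ratio follow Table 4.
import cv2
import numpy as np
from sklearn.model_selection import train_test_split

def preprocess(image_bgr):
    image = cv2.resize(image_bgr, (224, 224))        # resize to 224 x 224
    image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)   # convert BGR -> RGB
    return image.astype(np.float32) / 255.0          # normalize intensities to [0, 1]

def make_splits(images_bgr, labels, seed=42):
    """Preprocess the augmented dataset D' and split it 70:30 into Dtrain and Dtest."""
    X = np.stack([preprocess(img) for img in images_bgr])
    y = np.asarray(labels)
    return train_test_split(X, y, test_size=0.30, stratify=y, random_state=seed)
```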
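A sketch of the CNN Model Training stage follows, assuming PyTorch and torchvision. Table 4 names only the three architectures and the two evaluation metrics, so the optimizer, learning rate, and epoch count are assumptions and only the per-epoch update is shown; each candidate backbone receives a 6-class head.

```python
# Sketch of the CNN Model Training stage, assuming PyTorch/torchvision.
# Optimizer, learning rate, and epoch count are assumptions.
import torch
import torch.nn as nn
from torchvision import models

def build_candidate(name, num_classes=6):
    """Instantiate one of the three candidate CNNs with a 6-class head."""
    if name == "resnet50":
        m = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
        m.fc = nn.Linear(m.fc.in_features, num_classes)
    elif name == "efficientnet_b0":
        m = models.efficientnet_b0(weights=models.EfficientNet_B0_Weights.IMAGENET1K_V1)
        m.classifier[1] = nn.Linear(m.classifier[1].in_features, num_classes)
    else:  # "densenet121"
        m = models.densenet121(weights=models.DenseNet121_Weights.IMAGENET1K_V1)
        m.classifier = nn.Linear(m.classifier.in_features, num_classes)
    return m

def train_one_epoch(model, train_loader, optimizer):
    """One pass over Dtrain with cross-entropy loss."""
    criterion = nn.CrossEntropyLoss()
    model.train()
    for images, targets in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), targets)
        loss.backward()
        optimizer.step()
```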
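For the Feature Extraction stage, the embeddings F can be read from DenseNet121's global-average-pooled bottleneck by removing the classification head of the selected model MCNN. The snippet below reuses `build_candidate` from the previous sketch and assumes torchvision 0.13 or later; the 1024-dimensional width is a property of DenseNet121.

```python
# Sketch of the Feature Extraction stage: embeddings F are read from the
# bottleneck (global-average-pooled) layer of the selected DenseNet121 model MCNN.
import copy
import torch
import torch.nn as nn

mcnn = build_candidate("densenet121")         # MCNN from the previous sketch, fine-tuned on Dtrain
feature_extractor = copy.deepcopy(mcnn)       # keep the 6-class head on MCNN itself
feature_extractor.classifier = nn.Identity()  # expose the 1024-d bottleneck embeddings
feature_extractor.eval()

@torch.no_grad()
def extract_features(batch_rgb):
    """batch_rgb: (N, 3, 224, 224) float tensor; returns the (N, 1024) embeddings F."""
    return feature_extractor(batch_rgb)
```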
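The Transformer Fusion, Hybrid Model, and Multiclass SVM stages are sketched together below. Table 4 does not specify how the DenseNet121 and ViT-B/16 representations are combined, so simple concatenation is assumed; the SVM kernel and probability calibration are likewise assumptions. For the comparison in the Transformer Fusion stage, XGBoost and Swin Transformer would be fitted on the same features F.

```python
# Sketch of the DenseNet121 + ViT-B/16 hybrid (Mhybrid) and the multiclass SVM.
# Concatenation fusion, the RBF kernel, and probability=True are assumptions.
import torch
import torch.nn as nn
from torchvision import models
from sklearn.svm import SVC

vit = models.vit_b_16(weights=models.ViT_B_16_Weights.IMAGENET1K_V1)
vit.heads = nn.Identity()                 # 768-d ViT-B/16 embeddings
vit.eval()

@torch.no_grad()
def hybrid_embeddings(batch_rgb):
    """Concatenate DenseNet121 (1024-d) and ViT-B/16 (768-d) features into 1792-d vectors."""
    f_cnn = extract_features(batch_rgb)   # DenseNet121 extractor from the previous sketch
    f_vit = vit(batch_rgb)
    return torch.cat([f_cnn, f_vit], dim=1).cpu().numpy()

# Multiclass SVM over the hybrid embeddings; probability=True yields the
# confidence scores used later by the recommender rule.
svm = SVC(kernel="rbf", probability=True)
# svm.fit(hybrid_embeddings(train_images), train_labels)   # train_images/labels from Dtrain
```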
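The Explainability Integration stage can be sketched with the third-party pytorch-grad-cam and lime packages; Table 4 does not name the tooling, so this choice is an assumption, as are the target layer and the LIME sampling settings.

```python
# Sketch of the Explainability Integration stage, assuming the pytorch-grad-cam
# and lime packages. Target layer and LIME settings are illustrative assumptions.
from pytorch_grad_cam import GradCAMPlusPlus
from pytorch_grad_cam.utils.model_targets import ClassifierOutputTarget
from lime import lime_image

def gradcampp_heatmap(input_tensor, class_idx):
    """Grad-CAM++ map over the last DenseNet121 feature block of MCNN for one class."""
    cam = GradCAMPlusPlus(model=mcnn, target_layers=[mcnn.features[-1]])
    return cam(input_tensor=input_tensor,
               targets=[ClassifierOutputTarget(class_idx)])   # (N, 224, 224) heatmaps

def lime_explanation(image_hwc, predict_proba_fn, num_samples=1000):
    """LIME local explanation for one image; predict_proba_fn maps a batch of
    images to class probabilities (here it would wrap the Mhybrid + SVM pipeline)."""
    explainer = lime_image.LimeImageExplainer()
    return explainer.explain_instance(image_hwc, predict_proba_fn,
                                      top_labels=1, num_samples=num_samples)
```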
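Finally, the Clinical Recommender System rule is a direct translation of the thresholds in Table 4; the function name and the substring check on the class label are illustrative.

```python
# Direct translation of the recommender rule in Table 4. Only the thresholds
# (0.60 and 0.85) come from the table; names are illustrative.
def eatability_decision(pred_class: str, confidence: float) -> str:
    """Map the predicted spinach class and SVM confidence to an eatability decision."""
    if "Non-Fresh" in pred_class or confidence <= 0.60:
        return "Not Eatable"
    elif confidence <= 0.85:          # 0.60 < confidence <= 0.85
        return "Eatable with Caution"
    else:                             # confidence > 0.85
        return "Eatable"

# Example: a Fresh Malabar Spinach prediction at 0.78 confidence
print(eatability_decision("Fresh Malabar Spinach", 0.78))   # -> "Eatable with Caution"
```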