Table 2 Summary of HAMF Pipeline Steps.

| Step | Title | Purpose | Key tools | Outputs |
|------|-------|---------|-----------|---------|
| 1 | Model & Asset Registration | Register models, datasets, and features | MLflow, DVC, ARX | Baseline model trigger, dataset versioning |
| 2 | Data Ingestion & Preprocessing | Acquire and clean phishing data | Spark, PostgreSQL, Elasticsearch | Cleaned dataset, monthly snapshot |
| 3 | Feature Engineering & Optimization | Transform raw data into predictive features | SHAP, Feature Store | Ranked, transformed feature set |
| 4 | Training & Evaluation | Train and validate models using engineered features | TensorFlow, Scikit-learn, MLflow | Trained model, SHAP plots, evaluation metrics |
| 5 | Model Registry & Versioning | Track model lineage and deployment readiness | MLflow, MinIO, DVC | Registered model with metadata and versioning |
| 6 | Model Serving | Deploy models as RESTful endpoints | FastAPI, BentoML, Kubernetes | Live endpoints with telemetry and API schema |
| 7 | Monitoring & Drift Detection | Detect drift and monitor performance metrics | Prometheus, Grafana, Alibi Detect | Drift alerts, SHAP deltas, dashboard logs |
| 8 | Performance Thresholds & Alerts | Trigger alerts for performance anomalies | Alertmanager, Grafana, Slack | Real-time alerts, retraining signals |
| 9 | Ethical Auditing | Evaluate fairness and compliance indicators | AI Fairness 360, SHAP | Fairness reports, bias flags, audit logs |
| 10 | Feedback Loop | Capture expert annotations and issue flags | Slack, Trello, DVC | Curated feedback, retraining candidates |
| 11 | Retraining Triggers | Initiate model updates from monitoring or feedback | Alibi Detect, custom logic | Triggered retraining workflows |
| 12 | Continuous Deployment (CI/CD) | Automate reproducible model rollout | GitLab CI/CD, Terraform, Helm | Updated models in production |
| 13 | Stakeholder Communication & Documentation | Notify stakeholders and maintain audit trails | Slack, Trello, BookStack | Logs, notifications, compliance documentation |
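
To make Steps 1 and 5 concrete, the sketch below logs a trained classifier to an MLflow tracking server and registers it in the Model Registry, which yields the versioned, metadata-rich model the table lists as the Step 5 output. This is a minimal illustration, not the paper's implementation: the tracking URI, experiment name, the `phishing-detector` model name, and the synthetic training data are assumptions made for the example.

```python
# Minimal sketch of Steps 1 and 5: logging and registering a model with MLflow.
# The tracking URI, experiment name, and "phishing-detector" model name are
# illustrative placeholders, not values prescribed by HAMF.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

mlflow.set_tracking_uri("http://localhost:5000")      # assumed local MLflow server
mlflow.set_experiment("hamf-phishing-detection")

# Synthetic stand-in for the engineered phishing feature set (Step 3).
X, y = make_classification(n_samples=500, n_features=20, random_state=42)
model = RandomForestClassifier(n_estimators=100, random_state=42)

with mlflow.start_run(run_name="baseline-training"):
    model.fit(X, y)
    mlflow.log_param("n_estimators", 100)
    mlflow.log_metric("train_accuracy", model.score(X, y))
    # Registering the model creates a new version in the MLflow Model Registry,
    # providing the lineage and versioning outputs listed for Step 5.
    mlflow.sklearn.log_model(
        sk_model=model,
        artifact_path="model",
        registered_model_name="phishing-detector",
    )
```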
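For Step 6 (Model Serving), a RESTful scoring endpoint built with FastAPI could look roughly like the sketch below. The feature schema, the `model.joblib` path, and the 0.5 decision threshold are assumptions for illustration; a real deployment would mirror the feature set produced in Step 3 and would typically be packaged with BentoML and run on Kubernetes, as the table indicates.

```python
# Minimal sketch of Step 6: exposing a trained model as a RESTful endpoint with FastAPI.
# The feature schema, "model.joblib" path, and 0.5 threshold are illustrative assumptions.
import joblib
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="Phishing detection service")
model = joblib.load("model.joblib")  # assumed serialized scikit-learn classifier

class URLFeatures(BaseModel):
    # Hypothetical engineered features for a single URL.
    url_length: float
    num_subdomains: int
    has_https: int
    num_special_chars: int

@app.post("/predict")
def predict(features: URLFeatures) -> dict:
    row = [[
        features.url_length,
        features.num_subdomains,
        features.has_https,
        features.num_special_chars,
    ]]
    prob = float(model.predict_proba(row)[0][1])
    return {"phishing_probability": prob, "is_phishing": prob >= 0.5}
```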
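Steps 7 and 11 (drift detection feeding retraining triggers) could be wired with Alibi Detect roughly as follows. The reference window, the 0.05 p-value, the simulated shifted production batch, and the `trigger_retraining` hook are all assumptions made for the sketch; in HAMF the trigger would hand off to the CI/CD retraining workflow of Step 12.

```python
# Minimal sketch of Steps 7 and 11: Kolmogorov-Smirnov drift detection with
# Alibi Detect feeding a retraining trigger. The reference window, p-value
# threshold, and trigger_retraining() hook are illustrative assumptions.
import numpy as np
from alibi_detect.cd import KSDrift

rng = np.random.default_rng(0)
x_ref = rng.normal(0.0, 1.0, size=(1000, 20))   # reference (training-time) features
x_live = rng.normal(0.3, 1.0, size=(500, 20))   # recent production features (shifted)

# Feature-wise two-sample K-S tests with multivariate correction.
detector = KSDrift(x_ref, p_val=0.05)

def trigger_retraining(reason: str) -> None:
    # Placeholder for Step 11: in HAMF this would start the automated
    # retraining and deployment workflow (Step 12); here it only logs the signal.
    print(f"Retraining triggered: {reason}")

result = detector.predict(x_live)
if result["data"]["is_drift"]:
    trigger_retraining("feature drift detected by KSDrift")
```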