Table 1 Optimizing experimental outcomes.

From: Cattle identification based on multiple feature decision layer fusion

System specifications

The operating system is Ubuntu 22.04.2; the CPU is an Intel(R) Core(TM) i7-7800X with a base frequency of 3.50 GHz; system memory is 64 GB; and the GPUs are two NVIDIA GeForce RTX 2080 Ti cards.

Experimental environment

CUDA version: 11.3

Python: 3.8.10

PyTorch: 1.12.0

Parameter tuning methods

Model hyperparameters were tuned using grid search combined with cross-validation.
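The tuning procedure can be sketched with scikit-learn's GridSearchCV, which exhaustively evaluates every combination in a parameter grid under k-fold cross-validation. This is a minimal sketch, assuming scikit-learn and a synthetic dataset in place of the cattle features; it uses the decision-tree grid listed in the table below.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the extracted cattle features.
X, y = make_classification(n_samples=200, n_features=10, random_state=0)

# Decision-tree grid from the table below.
param_grid = {
    "max_depth": [None, 10, 20, 30],
    "min_samples_split": [2, 5, 10],
    "min_samples_leaf": [1, 2, 4],
}

# 5-fold cross-validated grid search; best_params_ holds the winning combination.
search = GridSearchCV(DecisionTreeClassifier(random_state=0), param_grid, cv=5)
search.fit(X, y)
print(search.best_params_)
```

The same pattern applies to every model in the table: only the estimator and the `param_grid` dictionary change.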

Optimal model hyperparameters (the best value in each grid is highlighted in bold).

Decision Trees (DT)

'max_depth': [None, 10, 20, 30]

'min_samples_split': [2, 5, 10]

'min_samples_leaf': [1, 2, 4]

Bagging

'n_estimators': [2, 10, 15, 20]

'max_samples': [0.5, 1.0]

'bootstrap': [True, False]

Logistic regression (LR)

'penalty': ['l1', 'l2']

'C': [0.01, 0.1, 1, 10]

'solver': ['liblinear', 'lbfgs', 'saga']
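Note that in scikit-learn not every penalty–solver pair is valid: lbfgs supports only the l2 penalty, while liblinear and saga support both. A grid search over this table's logistic-regression grid therefore needs a list of compatible sub-grids; a minimal sketch, again assuming scikit-learn and synthetic data:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=200, n_features=10, random_state=0)

# Split the grid so that lbfgs is only paired with the l2 penalty it supports.
param_grid = [
    {"penalty": ["l1", "l2"], "C": [0.01, 0.1, 1, 10],
     "solver": ["liblinear", "saga"]},
    {"penalty": ["l2"], "C": [0.01, 0.1, 1, 10], "solver": ["lbfgs"]},
]

search = GridSearchCV(LogisticRegression(max_iter=5000), param_grid, cv=5)
search.fit(X, y)
print(search.best_params_)
```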

Gradient_Boosting_Classifier

'n_estimators': [50, 100, 150]

'learning_rate': [0.01, 0.1, 0.2]

'max_depth': [3, 5, 7]

Gaussian_NB (GS)

'var_smoothing': [1e-9, 1e-8, 1e-7, 1e-6]

LightGBM

'n_estimators': [50, 100, 150]

'learning_rate': [0.01, 0.1, 0.2]

'max_depth': [3, 5, 7]

Random_Forests (RF)

'n_estimators': [50, 100, 150, 200]

'criterion': ['gini', 'entropy']

'max_depth': [None, 10, 20, 30]

'min_samples_split': [2, 5, 10]

'min_samples_leaf': [1, 2, 4]

'max_features': [None, 'sqrt', 'log2']

XGBoost

'n_estimators': [50, 100, 150]

'learning_rate': [0.01, 0.1, 0.2]

'max_depth': [3, 5, 7]

Voting Classifier

'voting': ['hard', 'soft']

'weights': {'DT': 2, 'LG': 4, 'GS': 1, 'RF': 3}

Stacking

'cv': [5, 10]

'stack_method': 'auto'
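The voting and stacking settings above map directly onto scikit-learn's ensemble classes, where the per-model weights are passed as a list aligned with the estimator order. This is a minimal sketch under that assumption, using synthetic data and default base learners in place of the tuned models:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import (RandomForestClassifier, StackingClassifier,
                              VotingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, n_features=10, random_state=0)

# Base learners named as in the table's weights dictionary.
estimators = [
    ("DT", DecisionTreeClassifier(random_state=0)),
    ("LG", LogisticRegression(max_iter=1000)),
    ("GS", GaussianNB()),
    ("RF", RandomForestClassifier(n_estimators=100, random_state=0)),
]

# Soft voting with the per-model weights from the table (DT:2, LG:4, GS:1, RF:3).
voter = VotingClassifier(estimators, voting="soft", weights=[2, 4, 1, 3])
voter.fit(X, y)

# Stacking with 5-fold CV; stack_method='auto' picks each base learner's
# probability output when available.
stacker = StackingClassifier(estimators,
                             final_estimator=LogisticRegression(max_iter=1000),
                             cv=5, stack_method="auto")
stacker.fit(X, y)
print(voter.score(X, y), stacker.score(X, y))
```

Soft voting averages weighted class probabilities, so the higher weight on LG makes its probability estimates dominate the fused decision; hard voting would instead take a weighted majority over predicted labels.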