Table 2 Summary of notations.

From: Optimizing brain stroke detection with a weighted voting ensemble machine learning model

| Notation | Description |
| --- | --- |
| \(\widehat{y}_{i}\) | Final predicted output for the ith input sample |
| \(x_{i}\) | The ith input data sample |
| \(T_{n}(x_{i})\) | Class label predicted by the nth model or client for input \(x_{i}\) |
| \(\sum_{n=1}^{N}\) | Summation over all N models or clients |
| \(\frac{1}{N}\) | Averaging factor used to compute the mean prediction from all contributors |
| \(N\) | Total number of models or clients |
| \(\mathrm{mode}()\) | Statistical mode function, which returns the most frequent class label |
| \(L(\theta)\) | Total loss function with parameters \(\theta\) |
| \(n\) | Total number of data samples |
| \(l(u_{i},\widehat{u}_{i})\) | Loss between ground truth \(u_{i}\) and predicted output \(\widehat{u}_{i}\) |
| \(\sum_{i=1}^{n}\) | Summation over all n training samples |
| \(\sum_{k=1}^{K}\) | Summation over all K model components |
| \(\Omega(f_{k})\) | Regularization term for the kth model component |
| \(f_{k}\) | Model parameters of the kth component |
| \(\theta\) | Overall set of model parameters |
| \(\gamma T\) | Bias or constant term related to the number of iterations T |
| \(\lambda\) | Regularization coefficient |
| \(T\) | Total number of training iterations or time steps |
| \(w_{j}^{2}\) | Squared model weight at step j |
| \(\sum_{j=1}^{T}w_{j}^{2}\) | Sum of squared weights |
| \(\frac{1}{2}\lambda \sum_{j=1}^{T}w_{j}^{2}\) | L2 regularization term |
| \(f_{k}(x_{i})\) | Output of the kth model applied to input \(x_{i}\) |
| \(K\) | Total number of models contributing to the aggregation |
| \(\sum_{k=1}^{K}\) | Summation over all K models |
| \(\widehat{Y}_{k}\) | Estimated value at index k |
| \(\sum_{j=0}^{n}\) | Summation from j = 0 to j = n, i.e. over n + 1 terms |
| \(Y_{k}^{(j)}\) | Prediction of the jth classifier at index k |
| \(w_{j}\) | Weight assigned to the jth classifier |
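Read together, the loss-related rows assemble into a single regularized objective. A plausible reconstruction, consistent with the standard XGBoost-style form that these symbols match, is:

```latex
L(\theta) = \sum_{i=1}^{n} l\left(u_{i}, \widehat{u}_{i}\right)
          + \sum_{k=1}^{K} \Omega\left(f_{k}\right),
\qquad
\Omega\left(f_{k}\right) = \gamma T
          + \frac{1}{2}\lambda \sum_{j=1}^{T} w_{j}^{2}
```

The first sum accumulates the per-sample loss over all n training samples; the second adds one regularization term per model component, each consisting of the constant penalty \(\gamma T\) and the L2 penalty on the weights.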
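The first block of rows (\(\widehat{y}_{i}\), \(T_{n}(x_{i})\), \(N\), \(\mathrm{mode}()\)) describes a majority vote: the final label for sample \(x_{i}\) is the most frequent label among the N models' predictions. A minimal sketch of that rule, with illustrative function and variable names of my own choosing:

```python
from statistics import mode

def majority_vote(predictions):
    """Return mode(T_1(x_i), ..., T_N(x_i)) for every sample i.

    `predictions` is an N x S nested list: predictions[n][i] is the class
    label T_n(x_i) produced by the nth of N models for the ith sample.
    """
    n_models = len(predictions)
    n_samples = len(predictions[0])
    # For each sample i, gather the N votes and keep the most frequent
    # label (the statistical mode), giving the final output y_hat_i.
    return [mode(predictions[n][i] for n in range(n_models))
            for i in range(n_samples)]

# Three models/clients voting on four samples:
votes = [
    [0, 1, 1, 0],  # model 1
    [0, 1, 0, 0],  # model 2
    [1, 1, 1, 0],  # model 3
]
print(majority_vote(votes))  # -> [0, 1, 1, 0]
```

With an odd number of voters, as here, the mode is always unambiguous for binary labels; with an even N a tie-breaking policy would be needed.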
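The last block of rows describes the weighted vote \(\widehat{Y}_{k} = \sum_{j=0}^{n} w_{j} Y_{k}^{(j)}\). A sketch under the assumption that each classifier emits a per-class score (e.g. a predicted probability) rather than a hard label; the names `weighted_vote`, `class_scores`, and `weights` are illustrative:

```python
import numpy as np

def weighted_vote(class_scores, weights):
    """Compute Y_hat_k = sum_j w_j * Y_k^(j) for every class index k.

    `class_scores[j][k]` is classifier j's score for class k and
    `weights[j]` is the weight w_j assigned to that classifier.
    Returns the aggregated scores and the index of the winning class.
    """
    Y = np.asarray(class_scores, dtype=float)  # shape: (classifiers, K classes)
    w = np.asarray(weights, dtype=float)       # shape: (classifiers,)
    y_hat = w @ Y                              # Y_hat_k for each class k
    return y_hat, int(np.argmax(y_hat))

scores = [
    [0.7, 0.3],  # classifier 0
    [0.4, 0.6],  # classifier 1
    [0.2, 0.8],  # classifier 2
]
agg, winner = weighted_vote(scores, [0.5, 0.3, 0.2])
# agg == [0.51, 0.49], winner == 0
```

Note how the heavily weighted first classifier overrules the other two, which both favored class 1; with equal weights the vote would have gone the other way.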