Table 3 Mathematical equations for ML models.
Model | Equation |
---|---|
Logistic Regression | \(P(Y=1\mid X)=\dfrac{1}{1+e^{-\left(\beta_0+\sum_{i=1}^{n}\beta_i X_i\right)}}\) |
Random Forest | \(y=\dfrac{1}{T}\sum_{t=1}^{T} f_t(X)\) |
 | Gini: \(G=1-\sum_{i=1}^{c} p_i^{2}\) |
 | Entropy: \(H=-\sum_{i=1}^{c} p_i \log_2(p_i)\) |
Support Vector Machine (SVM) | \(\min_{w,b}\ \tfrac{1}{2}\lVert w\rVert^{2}\ \text{s.t.}\ y_i\left(w\cdot X_i+b\right)\ge 1,\ \forall i\) |
 | Kernel trick (RBF): \(K(X_i,X_j)=e^{-\gamma\,\lVert X_i-X_j\rVert^{2}}\) |
Gradient Boosting Machine (GBM) | \(F_m(X)=F_{m-1}(X)+\gamma_m h_m(X)\) |
 | \(\gamma_m=\arg\min_{\gamma}\sum_{i=1}^{n} L\!\left(y_i,\ F_{m-1}(X_i)+\gamma\, h_m(X_i)\right)\) |
Neural Network | \(Z=W_1 X+b_1\) |
 | \(A=\sigma(Z)=\dfrac{1}{1+e^{-Z}}\) |
 | \(\hat{y}=W_2 A+b_2\) |
 | Weight update: \(W\leftarrow W-\eta\,\dfrac{\partial L}{\partial W}\) |
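For readers who prefer code to notation, the NumPy sketch below transcribes the closed-form expressions in Table 3: the logistic regression probability, the Gini and entropy impurity measures used by the tree splits, and the RBF kernel. The function names and example values are illustrative assumptions, not part of the study's implementation.

```python
import numpy as np

def logistic_probability(x, beta0, beta):
    """P(Y=1 | X) = 1 / (1 + exp(-(beta0 + sum_i beta_i * X_i)))."""
    return 1.0 / (1.0 + np.exp(-(beta0 + np.dot(beta, x))))

def gini_impurity(p):
    """G = 1 - sum_i p_i^2 for class proportions p."""
    p = np.asarray(p, dtype=float)
    return 1.0 - np.sum(p ** 2)

def entropy(p):
    """H = -sum_i p_i * log2(p_i); zero-probability classes contribute 0."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def rbf_kernel(x_i, x_j, gamma=1.0):
    """K(X_i, X_j) = exp(-gamma * ||X_i - X_j||^2)."""
    diff = np.asarray(x_i, dtype=float) - np.asarray(x_j, dtype=float)
    return np.exp(-gamma * np.dot(diff, diff))

# Illustrative values only: class proportions [0.7, 0.3] give
# Gini = 0.42 and entropy ~0.881 bits.
print(gini_impurity([0.7, 0.3]), entropy([0.7, 0.3]))
print(logistic_probability([1.0, 2.0], beta0=-0.5, beta=[0.8, -0.3]))
print(rbf_kernel([0.0, 1.0], [1.0, 1.0], gamma=0.5))
```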
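The iterative formulas in Table 3 can be sketched the same way. The snippet below performs one GBM stage update, approximating the arg-min over \(\gamma\) with a coarse grid search under a squared-error loss (an assumption made here for concreteness), followed by a single-hidden-layer forward pass and the gradient-descent weight update. These functions illustrate the equations only and do not reproduce the pipeline used in the study.

```python
import numpy as np

def gbm_stage_update(F_prev, h_m, y, gamma_grid=np.linspace(-2.0, 2.0, 401)):
    """F_m(X) = F_{m-1}(X) + gamma_m * h_m(X), with gamma_m picked by a
    grid search over squared-error loss (standing in for the arg-min)."""
    losses = [np.mean((y - (F_prev + g * h_m)) ** 2) for g in gamma_grid]
    gamma_m = gamma_grid[int(np.argmin(losses))]
    return F_prev + gamma_m * h_m, gamma_m

def sigmoid(z):
    """sigma(Z) = 1 / (1 + exp(-Z))."""
    return 1.0 / (1.0 + np.exp(-z))

def nn_forward(x, W1, b1, W2, b2):
    """Single hidden layer: Z = W1 X + b1, A = sigma(Z), y_hat = W2 A + b2."""
    Z = W1 @ x + b1
    A = sigmoid(Z)
    return W2 @ A + b2

def gradient_step(W, dL_dW, eta=0.01):
    """Weight update: W <- W - eta * dL/dW."""
    return W - eta * dL_dW

# Illustrative forward pass with random weights (hypothetical shapes).
rng = np.random.default_rng(0)
x = rng.normal(size=3)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)
W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)
print(nn_forward(x, W1, b1, W2, b2))
```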