Fig. 3: Neural network geometry for every model \({\mathcal{M}}_{i}\).
From: Deep learning forecast of rainfall-induced shallow landslides

Grey circles are neurons arranged in four layers: blue, input layer 0; green, hidden layers 1 and 2; red, output layer 3. Neurons in the hidden (green) layers are activated by a tanh function; the output (red) layer is activated by a sigmoid function h. \({a}_{n}^{[k]}\) is neuron n in layer k, \({\beta }_{n}^{[k]}\) in \({\underline{\beta }}^{[k]}\) is the bias added to neuron n in layer k, and \({\underline{\theta }}_{n}^{[k]}\) in \({\theta }^{[k]}\) is the array of weights connecting the neurons of layer k − 1 to neuron n of layer k. τ stands for transpose.
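
The forward pass implied by this geometry can be sketched as follows. This is a minimal illustration, not the paper's implementation: the layer sizes, random weights, and input are hypothetical placeholders, and only the activation scheme (tanh in hidden layers 1 and 2, sigmoid h at output layer 3) follows the caption.

```python
import numpy as np

def sigmoid(z):
    """Output activation h of layer 3 (red)."""
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, weights, biases):
    """Forward pass through the four-layer geometry of Fig. 3.

    weights[k] has shape (n_k, n_{k-1}): row n is the weight array
    theta_n^[k] acting on the activations of layer k-1.
    biases[k] has shape (n_k,): entry n is beta_n^[k].
    """
    a = x  # layer 0 activations (blue input layer)
    # hidden layers 1 and 2 (green): a^[k] = tanh(theta^[k] a^[k-1] + beta^[k])
    for W, b in zip(weights[:-1], biases[:-1]):
        a = np.tanh(W @ a + b)
    # output layer 3 (red): sigmoid activation h
    return sigmoid(weights[-1] @ a + biases[-1])

# Hypothetical layer sizes for illustration only; the paper's
# actual input dimension and hidden widths are not reproduced here.
rng = np.random.default_rng(0)
sizes = [5, 8, 8, 1]  # neurons in layers 0..3
Ws = [rng.standard_normal((m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
bs = [rng.standard_normal(m) for m in sizes[1:]]
y = forward(rng.standard_normal(sizes[0]), Ws, bs)
print(y.shape)  # a single sigmoid output, strictly inside (0, 1)
```

Because the final activation is a sigmoid, the single output lies in (0, 1), which is the usual choice when the network score is read as a probability-like quantity.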