Figure 4

Architecture of the multilayer perceptron (MLP). This deep neural network consists of an input layer, three fully connected hidden layers of 32, 16, and 1 nodes, and an output layer. When training samples are presented to the network, a loss function measures the inaccuracy of the computed predictions; all parameters are then updated slightly in the direction that minimizes this error, a process known as back-propagation.
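The training loop described in the caption can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the input dimension (8), the ReLU activation, the mean-squared-error loss, the learning rate, and the treatment of the final 1-node layer as producing the scalar output are all assumptions, since the figure specifies only the layer widths.

```python
import numpy as np

# Hypothetical sketch of the Figure 4 MLP: fully connected layers of
# 32, 16, and 1 nodes. Input dimension, activation, loss, and learning
# rate are illustrative assumptions, not taken from the figure.
rng = np.random.default_rng(0)

def init_layer(n_in, n_out):
    # Small random weights, zero biases.
    return rng.normal(0.0, 0.1, (n_in, n_out)), np.zeros(n_out)

sizes = [8, 32, 16, 1]  # assumed input dim, then the listed layer widths
params = [init_layer(a, b) for a, b in zip(sizes[:-1], sizes[1:])]

def forward(x):
    # Keep every layer's activations; back-propagation needs them.
    acts = [x]
    for i, (W, b) in enumerate(params):
        z = acts[-1] @ W + b
        # ReLU on hidden layers, identity on the final layer.
        acts.append(np.maximum(z, 0.0) if i < len(params) - 1 else z)
    return acts

def train_step(x, y, lr=0.01):
    acts = forward(x)
    pred = acts[-1]
    loss = np.mean((pred - y) ** 2)   # loss measures prediction inaccuracy
    grad = 2.0 * (pred - y) / y.size  # gradient of loss w.r.t. prediction
    # Back-propagation: walk the layers in reverse, nudging each
    # parameter slightly in the direction that reduces the error.
    for i in range(len(params) - 1, -1, -1):
        W, b = params[i]
        if i < len(params) - 1:       # undo the ReLU on hidden outputs
            grad = grad * (acts[i + 1] > 0)
        gW = acts[i].T @ grad
        gb = grad.sum(axis=0)
        grad = grad @ W.T             # propagate to the previous layer
        params[i] = (W - lr * gW, b - lr * gb)
    return loss

x = rng.normal(size=(4, 8))
y = rng.normal(size=(4, 1))
losses = [train_step(x, y) for _ in range(200)]
```

After repeated steps the loss on the training samples decreases, which is the behaviour the caption describes.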