Figure 10
From: Sea level variability and modeling in the Gulf of Guinea using supervised machine learning

Architecture of a Multi-Layer Perceptron (MLP) with Two Hidden Layers. The input layer consists of three nodes, one per input feature. The network contains two hidden layers with 4 and 2 neurons, respectively. Each hidden neuron applies an activation function to the weighted sum of its inputs, introducing the non-linearity needed to learn complex patterns. The output layer has a single node that produces the predicted continuous value for the regression task. Input data are thus transformed layer by layer in a single feedforward pass to yield the prediction.
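The feedforward pass described in the caption can be sketched as follows. This is an illustrative NumPy implementation of the 3 → 4 → 2 → 1 architecture only; the random weights, zero biases, and ReLU activation are assumptions for demonstration, not the weights or activation used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Layer sizes from the figure: 3 inputs -> 4 hidden -> 2 hidden -> 1 output.
# Weights here are random placeholders, not trained values.
W1, b1 = rng.normal(size=(3, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 2)), np.zeros(2)
W3, b3 = rng.normal(size=(2, 1)), np.zeros(1)

def relu(z):
    # Assumed activation for illustration; the paper's choice may differ.
    return np.maximum(0.0, z)

def forward(x):
    """One feedforward pass: each hidden neuron applies the activation
    to the weighted sum of its inputs; the output node is linear,
    as is typical for regression."""
    h1 = relu(x @ W1 + b1)   # first hidden layer, 4 neurons
    h2 = relu(h1 @ W2 + b2)  # second hidden layer, 2 neurons
    return h2 @ W3 + b3      # single continuous output

x = rng.normal(size=(5, 3))  # batch of 5 samples with 3 features each
y_hat = forward(x)
print(y_hat.shape)           # one prediction per sample: (5, 1)
```

Training such a network would adjust the weight matrices and biases (e.g. by backpropagation); the sketch above shows only how an input vector is transformed into a prediction.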