Table 2 Comparison of various DL algorithms.

From: A comparative machine and deep learning approach for predicting ultimate bearing capacity of shallow foundations in cohesionless soil

| Algorithm | Architecture | Strengths | Weaknesses | Use cases | Performance |
|---|---|---|---|---|---|
| ANN [82,83,84] | Input layer, one or more hidden layers, output layer; fully connected | Versatile and flexible; can model complex, non-linear relationships | Computationally expensive; prone to overfitting; black-box nature | General-purpose tasks such as classification, regression, and clustering | High accuracy on small to medium datasets, but slow on large datasets |
| DNN [85,60] | Multiple hidden layers between input and output; fully connected | Hierarchical feature learning; state-of-the-art performance in many domains | Computationally expensive; requires large datasets; hard to interpret | Image recognition, NLP, speech recognition, and complex pattern recognition | State-of-the-art performance on large datasets, but resource intensive |
| CNN [59,60] | Convolutional layers, pooling layers, fully connected layers; local connectivity | Excellent for spatial data (e.g., images); reduces parameters via weight sharing | Computationally expensive; requires large datasets; limited to grid-like data | Image classification, object detection, video analysis, and medical imaging | State-of-the-art performance on image and video data |
| RNN [59,61] | Recurrent connections with loops; hidden state captures temporal dependencies | Handles sequential data; models temporal dependencies effectively | Suffers from vanishing/exploding gradients; computationally expensive | Time-series forecasting, NLP, speech recognition, and video analysis | High accuracy on sequential data, but slower than CNNs and FFNNs |
| FFNN [62,63] | Input layer, hidden layers, output layer; no cycles or loops | Simple and easy to implement; handles static data well | Cannot model sequential data; prone to overfitting; limited to small datasets | Classification, regression, and pattern recognition for static data | Good for small datasets, but struggles with large or sequential data |
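The key architectural contrast in the table (the FFNN's acyclic layer-to-layer flow versus the RNN's hidden state carried across time steps) can be sketched in a few lines of NumPy. This is an illustrative sketch only, not the models used in the study; the layer sizes, weights, and function names below are assumptions chosen for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

def ffnn_forward(x, W1, b1, W2, b2):
    """FFNN: input -> hidden -> output, no cycles or loops."""
    h = np.tanh(x @ W1 + b1)      # hidden layer (static input)
    return h @ W2 + b2            # output layer

def rnn_forward(seq, Wx, Wh, b):
    """RNN: a hidden state h captures temporal dependencies."""
    h = np.zeros(Wh.shape[0])
    for x_t in seq:               # the loop feeds h back in at each step
        h = np.tanh(x_t @ Wx + h @ Wh + b)
    return h

# Illustrative dimensions: 4 input features, 8 hidden units, 1 output.
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)
y = ffnn_forward(rng.normal(size=4), W1, b1, W2, b2)       # one static sample

Wx, Wh, b = rng.normal(size=(4, 8)), rng.normal(size=(8, 8)), np.zeros(8)
h_last = rnn_forward(rng.normal(size=(5, 4)), Wx, Wh, b)   # sequence of 5 steps
```

The recurrent weight matrix `Wh` is what distinguishes the RNN: repeated multiplication through it across many time steps is also the source of the vanishing/exploding-gradient weakness noted in the table.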