Abstract
State of Health estimation in lithium-ion batteries is critical for reliable operation in electric vehicles and energy storage systems. This work evaluates four deep learning models—Multilayer Perceptron, Gated Recurrent Unit, Long Short-Term Memory, and Temporal Convolutional Network—for cycle-based SoH prediction using discharge data from the NASA B0005, B0006, and B0007 cells. SoH values were obtained by numerical integration of discharge current and normalized with respect to the initial capacity. All models were implemented in PyTorch and assessed using RMSE, MAE, and R² metrics. On B0005, the MLP achieved RMSE 0.0069, MAE 0.0049, and R² = 0.9955, with TCN showing similar accuracy. Results on B0006 and B0007 confirmed the stability of MLP and TCN predictions across different cells. Residuals remained tightly clustered, and loss curves indicated smooth convergence. GRU and LSTM required higher training time without accuracy improvements. MLP demonstrated the best balance of accuracy and computational efficiency, making it suitable for embedded battery management systems. TCN provided robust accuracy with moderate complexity. The results verify that data-driven deep learning methods can capture nonlinear degradation behavior consistently across multiple cells.
Introduction
Lithium-ion batteries are critical components in modern energy storage systems used in electric vehicles (EVs), grid-connected renewable energy systems, and portable consumer electronics due to their high energy density, efficiency, and long cycle life1. The accurate estimation of battery State of Health (SoH), defined as the ratio of current full charge capacity to its initial capacity, is vital for ensuring safety, longevity, and reliability2,3. SoH serves as a key metric in battery management systems (BMS), guiding decisions about operation, maintenance, and replacement4,5. Failures in accurate SoH estimation can result in unexpected battery failure or conservative operation that limits system performance6,7. Research in SoH modeling has therefore gained prominence across domains. Existing literature on battery SoH estimation methods encompasses physics-based models, empirical methods, and data-driven approaches. Physics-based models rely on electrochemical equations or equivalent circuit models but often require extensive parameterization and computational resources8,9. Empirical models like incremental capacity and differential voltage analysis can indicate degradation patterns but depend heavily on controlled test conditions10,11. Data-driven approaches, including machine learning and deep learning algorithms, have emerged as robust alternatives, capable of capturing nonlinear relationships between observable battery variables and health indicators12,13,14,15,16,17,18,19,20,21,22,23,24,25,26. These models include Random Forest, Support Vector Machines, Neural Networks, Convolutional Neural Networks, and Recurrent Neural Networks27,28,29,30,31,32,33,34,35,36,37,38,39,40,41. While effective in many cases, these models often face challenges in generalization, sensitivity to dataset scale, or require high computational overhead42,43,44,45,46,47,48,49,50,51.
A key challenge in SoH modeling is accurately capturing degradation patterns under diverse operational conditions and chemistries, which makes generalization across datasets and applications difficult. Recent works have proposed hybrid and advanced architectures to address this limitation. For instance, the SOH-KLSTM model integrates Kolmogorov–Arnold Networks with LSTM to improve temporal learning and candidate state representation for lithium-ion battery health monitoring52. Similarly, an integrated SOC–SOH estimation framework using GRU and TCN has been developed for whole-life-cycle prediction53. Beyond architecture-level innovations, efforts have also focused on real-world applicability, such as practical data-driven pipelines targeting field data challenges54 and comprehensive reviews of machine learning frameworks that highlight data requirements, feature engineering, and algorithmic trade-offs55. Other contributions include multiple aging factor interactive learning frameworks for enhanced SoH estimation56 and physics-enhanced joint SOC–SoH estimation tailored for high-demand applications like eVTOL aircraft57. Collectively, these studies demonstrate the push toward hybrid, interpretable, and generalizable models that balance computational efficiency with predictive robustness.
This study addresses these gaps by evaluating the performance of four deep learning models—Multilayer Perceptron (MLP), Gated Recurrent Unit (GRU), Long Short-Term Memory (LSTM), and Temporal Convolutional Network (TCN)—for estimating cycle-based SoH using real aging data from the NASA B0005 battery dataset58. SoH values are derived from the numerical integration of discharge current normalized against initial capacity to capture degradation across lifecycle stages. Each model is trained using PyTorch and evaluated using RMSE, MAE, and R² metrics. MLP achieved the highest accuracy with RMSE of 0.0069, MAE of 0.0049, and R² of 0.9955. TCN followed closely with RMSE of 0.0071 and R² of 0.9951. GRU and LSTM performed acceptably, though with longer training durations.
This paper implements and evaluates a unified training framework to compare four deep learning architectures—Multilayer Perceptron (MLP), Gated Recurrent Unit (GRU), Long Short-Term Memory (LSTM), and Temporal Convolutional Network (TCN)—for cycle-based SoH estimation. All models are trained and validated on the NASA B0005 dataset using normalized discharge capacity derived from current-time integration. The performance is measured using RMSE, MAE, and R² to ensure consistency and comparative clarity. Experimental analysis identifies MLP and TCN as highly effective for modeling degradation patterns with reduced complexity. The study contributes empirical insights toward selecting suitable models for battery health monitoring applications under real-world constraints, targeting integration into onboard diagnostics and predictive maintenance platforms59,60.
The NASA B0005 cell was analysed along with two other cells from the same dataset, B0006 and B0007, to assess external validity. These cells contain high-resolution cycle data suitable for the same preprocessing and modelling pipeline described in Sect. 2. The inclusion of multiple cells allows examination of whether model performance trends remain consistent across different but comparable ageing profiles.
Evaluation metrics and literature trends
Common evaluation metrics for SoH prediction include Root Mean Square Error (RMSE), Mean Absolute Error (MAE), and Coefficient of Determination (R²). These are defined as:

$$\text{RMSE}=\sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(y_i-\hat{y}_i\right)^2},\qquad \text{MAE}=\frac{1}{n}\sum_{i=1}^{n}\left|\hat{y}_i-y_i\right|,\qquad R^2=1-\frac{\sum_{i=1}^{n}\left(\hat{y}_i-y_i\right)^2}{\sum_{i=1}^{n}\left(y_i-\bar{y}\right)^2}$$

where \(y_i\) and \(\hat{y}_i\) represent true and predicted SoH values, respectively, and \(\bar{y}\) is the mean of the actual values. Table 2 summarizes representative deep learning approaches for lithium-ion battery SoH estimation. Zhang et al.61 developed a hybrid framework combining TCN, GRU, and wavelet neural networks, which achieved an RMSE of 0.0068 on custom NCM cells. Bao et al.5 proposed a lightweight MLP-based model optimized for memory efficiency, reporting an MAE of 0.0075 on the NASA dataset. Li et al.60 employed neural networks on a proprietary dataset and obtained an RMSE of 0.0110. Pau et al.14 designed TinyML-ready MLP architectures tailored for hardware acceleration, achieving an MAE of 0.0082. Mohanty et al.10 introduced a TimeGAN integrated with BERT for capacity trajectory modeling on the NASA B0018 dataset, reporting an R² of 0.995. Chen et al.13 presented a FPCA-SETCN framework for feature-enhanced temporal modeling, achieving an RMSE of 0.0094 on the NASA B0005 dataset.
Together, these works highlight the effectiveness of hybrid, lightweight, and physics-informed architectures for accurate SoH prediction across diverse datasets and evaluation settings. These findings indicate that combining temporal modeling, spectral decomposition, and memory-enhanced features can significantly improve the robustness of SoH estimation. At the same time, comprehensive reviews and empirical studies emphasize the practical relevance of such approaches in real-world battery management. Reviews of SOC, SoH, and RUL estimation methods provide detailed insights into algorithmic strengths and limitations3,62, while ANN-based health estimation frameworks demonstrate effective deployment in real-world applications such as electric vehicles and energy storage systems60. Collectively, these studies validate the importance of integrating advanced deep learning frameworks for enhancing battery diagnostics and ensuring reliability under diverse operational scenarios.
Motivation and contributions
A consistent benchmark comparison of SoH prediction models using identical preprocessing and evaluation criteria is lacking. This paper develops a unified PyTorch-based pipeline to assess MLP, GRU, TCN, and LSTM on NASA B0005 data.
Key contributions include:
- Design and implementation of a cycle-based SoH estimation pipeline using normalized discharge capacity.
- Performance comparison across four deep learning architectures using consistent training splits and metrics.
- Identification of MLP and TCN as efficient models for real-time BMS applications with R² > 0.99.
- Quantitative analysis of accuracy, training time, and model complexity.
The findings offer practical guidance for selecting deep learning models in battery diagnostics and support integration into advanced BMS platforms.
Table 1 outlines the comparative features of deep learning models used for SoH estimation. LSTM models, referenced in7,18,59, are effective for capturing long-term dependencies due to their gated architecture. GRU models, cited in13,59, offer similar capabilities with reduced parameter count and improved training speed. TCNs, referenced in6,13, utilize dilated causal convolutions for temporal learning, supporting stable gradients over long sequences. MLPs, found in5,14,60, operate on cycle-wise inputs with reduced computational load and fast convergence, making them suitable for embedded systems. Transformer architectures, employed in7,10,17, leverage attention mechanisms to model long-range relationships and temporal variability in battery degradation.
Table 2 presents representative deep learning approaches for SoH estimation. Study9 implemented a hybrid model combining TCN, GRU, and wavelet neural networks, achieving an RMSE of 0.0068 on custom NCM cells. Bao et al.5 applied a memory-efficient MLP-based model to NASA datasets with a reported MAE of 0.0075. Li et al.60 utilized conventional neural networks on a proprietary dataset and reported an RMSE of 0.0110. Pau et al.14 explored MLP models optimized for hardware-accelerated platforms, achieving an MAE of 0.0082. Mohanty et al.10 integrated BERT with TimeGAN for SoH prediction using the B0018 dataset and obtained an R² of 0.995. Chen et al.13 introduced a FPCA-SETCN framework on NASA B0005 data, achieving an RMSE of 0.0094. These studies provide diverse strategies using both conventional and hybrid architectures across different datasets and evaluation metrics.
This paper implements and evaluates four deep learning models, namely MLP, GRU, TCN, and LSTM, under a unified training pipeline using preprocessed NASA B0005 cycle data. The goal is to analyze their predictive accuracy, computational cost, and applicability in real-time battery health diagnostics.
This paper makes the following contributions:
- A cycle-based SoH estimation pipeline using real discharge data from NASA’s battery degradation dataset.
- A comprehensive comparison of MLP, GRU, LSTM, and TCN using uniform preprocessing and evaluation metrics.
- Identification of MLP and TCN as the best-performing models with R² > 0.99, highlighting their efficiency in capturing nonlinear degradation.
- Practical insights into computational overhead, model accuracy, and applicability in real-time battery health diagnostics.
This study provides a foundation for selecting effective deep learning architectures for next-generation BMS and health-aware EV operation.
Methodology
The methodology involves a structured framework for predicting the State of Health (SoH) of lithium-ion batteries using deep learning models trained on cycle-based historical data. The NASA B0005 battery dataset, consisting of 616 recorded cycles, serves as the data source. From these, 168 discharge cycles are selected based on their suitability for capacity-based SoH analysis. Each cycle includes high-resolution time-series data of voltage, current, and temperature measurements93,94. An overview of the proposed methodology is shown in Fig. 1.
Experiments were conducted on B0005, B0006, and B0007 cells from the NASA battery ageing dataset. Each dataset was processed using identical cleaning and capacity-calculation procedures to ensure comparability. Chronological 80:20 splits were used in all cases, with a 10% validation split taken from the training portion for hyperparameter tuning. The test set was not used during model selection, preventing data leakage. Block-wise splits and rolling-window cross-validation confirmed stability of model rankings.
The normalized input features (cycle number) and target values (SoH) were split into training and testing sets using an 80:20 ratio, maintaining chronological order to reflect the natural degradation sequence as mentioned in Fig. 1, step 6. This setup ensured the model was trained on early-stage data and validated on later degradation behavior.
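A rough sketch of this chronological split is shown below; the array names and the placeholder SoH trend are illustrative assumptions, not values from the dataset.

```python
import numpy as np

def chronological_split(x, y, train_frac=0.8):
    """Split features/targets chronologically: early cycles train, late cycles test."""
    n_train = int(len(x) * train_frac)
    return x[:n_train], x[n_train:], y[:n_train], y[n_train:]

# Illustrative usage with 168 discharge cycles
cycles = np.arange(1, 169, dtype=np.float32).reshape(-1, 1) / 168.0   # normalized cycle index
soh = np.linspace(1.0, 0.7, 168, dtype=np.float32)                    # placeholder SoH trend
x_train, x_test, y_train, y_test = chronological_split(cycles, soh)
```

Because no shuffling is applied, the test partition always corresponds to the final, most degraded portion of the cycle history.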
Capacity calculation for SoH
Battery SoH is estimated based on discharge capacity, computed via numerical integration of current over time using the trapezoidal rule. For each cycle \(i\), the capacity \(C_i\) is calculated as:

$$C_i=\int_{t_0}^{t_{\text{end}}} I(t)\,dt\;\approx\;\sum_{k=1}^{N-1}\frac{I_k+I_{k+1}}{2}\,(t_{k+1}-t_k)$$

The SoH is normalized with respect to the initial cycle capacity \(C_0\):

$$\text{SoH}_i=\frac{C_i}{C_0}$$
This method ensures consistent and interpretable health values across all cycles.
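A minimal sketch of this capacity and SoH computation, assuming per-cycle time stamps (seconds) and measured current (amperes) are available as NumPy arrays; the sign handling assumes discharge current is logged as negative, so drop the negation if it is recorded as positive.

```python
import numpy as np

def discharge_capacity_ah(time_s, current_a):
    """Trapezoidal integration of discharge current over time, returned in Ah.
    The negation assumes discharge current is stored with a negative sign."""
    return np.trapz(-np.asarray(current_a), np.asarray(time_s)) / 3600.0

def soh_from_capacities(capacities):
    """Normalize each cycle capacity by the first-cycle capacity C0."""
    c = np.asarray(capacities, dtype=float)
    return c / c[0]
```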
Data acquisition and preprocessing
The NASA B0005 dataset contains 616 cycles. From these, 168 discharge cycles are filtered using a data cleaning process (see Fig. 2). Each cycle contains time-series data of voltage, current, and temperature. Trend plots are generated for each parameter to visualize degradation behavior. The resulting capacities form the basis for SoH targets.
Computing environment and reproducibility
All experiments were executed on a workstation with an Intel(R) Core(TM) i3-1005G1 CPU @ 1.20 GHz and 8 GB RAM. No discrete GPU acceleration was employed. Models were implemented in PyTorch with CUDA/cuDNN disabled.
The experimental data were taken from the NASA battery aging dataset, specifically the B0005, B0006, and B0007 cell records. Each dataset was processed using identical cleaning, capacity-calculation, and normalization procedures to ensure comparability across cells. The input–output pairs (cycle index and SoH) were split chronologically into an 80:20 ratio for training and testing, preserving the natural degradation progression and simulating realistic prediction scenarios.
The following Python packages and versions were used in the implementation:
- numpy (v1.26) for numerical operations.
- scipy (v1.13) for signal integration and MAT file handling.
- pandas (v2.2) for data manipulation and tabular outputs.
- matplotlib (v3.9) for visualization.
- seaborn (v0.13) for statistical plotting.
- scikit-learn (v1.5) for dataset splitting and evaluation metrics.
- torch/PyTorch (v2.2) for deep learning model implementation.
All Python scripts, preprocessing steps, and trained models are provided in a public repository along with a runnable notebook to ensure reproducibility95.
Temporal convolutional network (TCN)
The TCN model is designed to handle sequential data through 1D causal convolutions with increasing dilation factors. It comprises multiple TCN blocks, each containing a dilated convolutional layer followed by a ReLU activation function and residual connections to facilitate gradient flow.
A single TCN block with dilation \(d\), kernel size \(k\), and padding \(p=(k-1)\cdot d\) performs 1D causal convolution as:

$$y_t=\sum_{j=0}^{k-1} w_j\,x_{t-j\cdot d}+b$$

The residual connection is applied as:

$$o=\text{ReLU}\big(x+\mathcal{F}(x)\big)$$

where \(\mathcal{F}(x)\) denotes the output of the dilated convolution branch.
TCN uses stacked blocks with increasing dilation d = 1,2,4,… to capture long-term dependencies without the need for recurrence. This design enables the model to maintain computational efficiency while effectively modeling long-range temporal patterns. In this study, two TCN blocks were stacked with dilation rates of 1 and 2, and a final 1D convolutional layer was used to output the predicted SoH values67.
Figure 3 illustrates the architecture of the Temporal Convolutional Network (TCN), which processes sequential cycle data effectively by capturing long-range temporal dependencies through stacked dilated convolutional layers and residual pathways.
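A minimal PyTorch sketch of such a block and the two-block stack (dilations 1 and 2) described above; the channel width, kernel size, and input projection are illustrative assumptions, since the paper does not list these hyperparameters explicitly.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TCNBlock(nn.Module):
    """Dilated causal 1D convolution + ReLU with a residual connection."""
    def __init__(self, channels, kernel_size=3, dilation=1):
        super().__init__()
        self.left_pad = (kernel_size - 1) * dilation      # pad only on the left for causality
        self.conv = nn.Conv1d(channels, channels, kernel_size, dilation=dilation)

    def forward(self, x):                                 # x: (batch, channels, time)
        out = self.conv(F.pad(x, (self.left_pad, 0)))
        return torch.relu(x + out)                        # residual connection

class TCN(nn.Module):
    """Input projection, two blocks with dilations 1 and 2, and a final 1D conv head."""
    def __init__(self, channels=16):
        super().__init__()
        self.inp = nn.Conv1d(1, channels, kernel_size=1)
        self.blocks = nn.Sequential(TCNBlock(channels, dilation=1),
                                    TCNBlock(channels, dilation=2))
        self.head = nn.Conv1d(channels, 1, kernel_size=1)

    def forward(self, x):                                 # x: (batch, 1, time)
        return self.head(self.blocks(self.inp(x)))
```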
Long short-term memory (LSTM)
LSTM networks are a type of Recurrent Neural Network (RNN) capable of learning temporal relationships over long sequences using memory cells and gating mechanisms. In this implementation, the LSTM layer receives the sequence of normalized cycle indices as input and outputs hidden states, which are subsequently passed through a fully connected layer to predict the SoH. The model is trained end-to-end using the mean squared error loss.
LSTM processes the input sequence \(X=(x_1,x_2,\ldots,x_T)\) using the following internal operations:

$$i_t=\sigma(W_i x_t+U_i h_{t-1}+b_i)$$
$$f_t=\sigma(W_f x_t+U_f h_{t-1}+b_f)$$
$$o_t=\sigma(W_o x_t+U_o h_{t-1}+b_o)$$
$$\tilde{c}_t=\tanh(W_c x_t+U_c h_{t-1}+b_c)$$
$$c_t=f_t\odot c_{t-1}+i_t\odot\tilde{c}_t$$
$$h_t=o_t\odot\tanh(c_t)$$

where σ is the sigmoid activation function, ⊙ denotes element-wise multiplication, \(h_t\) is the hidden state, and \(c_t\) is the cell state at time t.

The final SoH prediction is obtained from the hidden state through the fully connected layer as:

$$\hat{y}_t=W_{\text{out}}\,h_t+b_{\text{out}}$$
Figure 4 shows the architecture of the Long Short-Term Memory (LSTM) network, illustrating its internal gate operations including the input, forget, and output gates. The model relies on memory cells to preserve long-term dependencies essential for accurate SoH prediction.
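A minimal PyTorch sketch of this configuration (single LSTM layer over normalized cycle indices followed by a fully connected head); the hidden size is an assumed value rather than one reported in the paper.

```python
import torch.nn as nn

class LSTMSoH(nn.Module):
    """Single LSTM layer followed by a dense head, mapping cycle sequence -> SoH."""
    def __init__(self, hidden_size=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden_size, batch_first=True)
        self.fc = nn.Linear(hidden_size, 1)

    def forward(self, x):              # x: (batch, seq_len, 1) normalized cycle indices
        out, _ = self.lstm(x)          # hidden states at every time step
        return self.fc(out)            # per-step SoH prediction
```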
Gated recurrent unit (GRU)
GRUs are a lightweight alternative to LSTMs that use gating units to control the flow of information without separate memory cells. They are computationally efficient while maintaining the ability to model temporal dependencies. In this study, a single GRU layer was implemented, followed by a dense output layer. The GRU model was trained using the same protocol as the LSTM, enabling fair comparison across architectures96,97.
The GRU operates as follows:

$$z_t=\sigma(W_z x_t+U_z h_{t-1}+b_z)$$
$$r_t=\sigma(W_r x_t+U_r h_{t-1}+b_r)$$
$$\tilde{h}_t=\tanh(W_h x_t+U_h (r_t\odot h_{t-1})+b_h)$$
$$h_t=(1-z_t)\odot h_{t-1}+z_t\odot\tilde{h}_t$$

where \(z_t\) and \(r_t\) are the update and reset gates, respectively.
Figure 5 provides a schematic of the Gated Recurrent Unit (GRU) network. Compared to LSTM, the GRU architecture uses fewer gates and no separate memory cell, offering computational efficiency while maintaining the ability to model sequential dependencies.
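For comparison, a corresponding GRU sketch under the same assumptions as the LSTM example (illustrative hidden size, not a reported value):

```python
import torch.nn as nn

class GRUSoH(nn.Module):
    """Single GRU layer with a dense output head, trained like the LSTM model."""
    def __init__(self, hidden_size=32):
        super().__init__()
        self.gru = nn.GRU(input_size=1, hidden_size=hidden_size, batch_first=True)
        self.fc = nn.Linear(hidden_size, 1)

    def forward(self, x):              # x: (batch, seq_len, 1)
        out, _ = self.gru(x)
        return self.fc(out)
```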
Multilayer perceptron (MLP)
The MLP model acts as a baseline in this study. It is a fully connected feedforward neural network that treats each input cycle index as an independent instance, ignoring sequence information. The architecture comprises three dense layers with ReLU activations and a final linear output layer. Despite its simplicity, the MLP demonstrated strong performance, validating the predictive power of direct cycle-to-capacity mapping.
The forward pass is defined as:

$$h_1=\text{ReLU}(W_1 x+b_1),\qquad h_2=\text{ReLU}(W_2 h_1+b_2),\qquad h_3=\text{ReLU}(W_3 h_2+b_3),\qquad \hat{y}=W_4 h_3+b_4$$

where \(W_i\) and \(b_i\) are the learnable weights and biases, and ReLU\(\left(x\right)=\text{max}\left(0,x\right)\).
Figure 6 depicts the Multilayer Perceptron (MLP) architecture, consisting of three fully connected layers with ReLU activations. This model treats each cycle as an independent instance and forms a baseline for comparison with temporal architecture.
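A minimal PyTorch sketch of this baseline; the hidden layer widths are illustrative assumptions, since the paper does not state them.

```python
import torch.nn as nn

class MLPSoH(nn.Module):
    """Three fully connected ReLU layers plus a linear output, mapping cycle index -> SoH."""
    def __init__(self, hidden=(64, 32, 16)):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1, hidden[0]), nn.ReLU(),
            nn.Linear(hidden[0], hidden[1]), nn.ReLU(),
            nn.Linear(hidden[1], hidden[2]), nn.ReLU(),
            nn.Linear(hidden[2], 1),
        )

    def forward(self, x):              # x: (batch, 1) normalized cycle index
        return self.net(x)
```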
Model training and evaluation
All models were implemented using the PyTorch framework and trained for 3000 epochs. The Adam optimization algorithm was employed with a learning rate of 0.001. The training process utilized the Mean Squared Error (MSE) as the loss function to minimize prediction error.
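A condensed sketch of this shared training loop (Adam at learning rate 0.001, MSE loss, 3000 epochs); the full-batch update and tensor names are assumptions, since the batch size is not stated in the text.

```python
import torch
import torch.nn as nn

def train(model, x_train, y_train, epochs=3000, lr=1e-3):
    """Train a model with Adam and MSE loss; returns the trained model."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.MSELoss()
    for _ in range(epochs):
        optimizer.zero_grad()
        loss = criterion(model(x_train), y_train)   # full-batch forward pass (assumption)
        loss.backward()
        optimizer.step()
    return model
```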
Model performance was quantitatively evaluated using three standard metrics:
- Root Mean Square Error (RMSE):

  $$\text{RMSE}=\sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(y_i-\hat{y}_i\right)^2}\tag{23}$$

- Mean Absolute Error (MAE), which computes the average of absolute differences:

  $$\text{MAE}=\frac{1}{n}\sum_{i=1}^{n}\left|\hat{y}_i-y_i\right|\tag{24}$$

- Coefficient of Determination (R²):

  $$R^2=1-\frac{\sum_{i=1}^{n}\left(\hat{y}_i-y_i\right)^2}{\sum_{i=1}^{n}\left(y_i-\bar{y}\right)^2}\tag{25}$$

  where \(\bar{y}\) is the mean of the actual SoH values.
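These metrics can be computed directly with scikit-learn; a short sketch under the assumption that the targets and predictions are 1D arrays:

```python
import numpy as np
from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score

def evaluate(y_true, y_pred):
    """RMSE, MAE and R² as defined in Eqs. (23)-(25)."""
    return {
        "RMSE": float(np.sqrt(mean_squared_error(y_true, y_pred))),
        "MAE": float(mean_absolute_error(y_true, y_pred)),
        "R2": float(r2_score(y_true, y_pred)),
    }
```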
To support model interpretation, visual diagnostics were employed, including training loss curves, residual distribution histograms, and actual versus predicted plots. Such tools provide detailed insights into the learning behavior and residual trends of each deep learning architecture.
Figures 7 and 8 display the current profiles over time for all cycles and for the last 10 discharge cycles, respectively. These visualizations are used to identify current behavior changes as battery aging progresses.
Figures 9 and 10 represent the voltage variations over time, where the observable decline in voltage amplitude with increasing cycle number reflects capacity degradation. Figures 11 and 12 illustrate the battery temperature trends. Thermal variation correlates with battery aging stages and can reveal underlying degradation mechanisms.
The methodology establishes a cycle-based modeling structure for battery SoH estimation. Capacity values computed from discharge profiles serve as normalized ground truth targets, ensuring uniform learning targets across architectures. The inclusion of both sequential models (TCN, LSTM, GRU) and a non-sequential baseline (MLP) allows for rigorous model benchmarking. Consistent preprocessing, uniform training configurations, and standardized evaluation metrics enable a fair comparative analysis of learning capability and generalization performance. The methodological design supports application in real-world battery health monitoring systems, offering reliable predictive insight across diverse aging profiles.
Hyperparameter tuning and robustness checks
All models were tuned using a structured hyperparameter search restricted to the training partition. The chronological split of 80% training and 20% testing cycles was preserved to reflect prognostic conditions, and the held-out test set was never accessed during optimization or model selection. Within the training data, 10% was allocated as a validation subset for tuning learning rate, number of hidden units, depth of layers, kernel size for TCN, and dropout ratios.
The Adam optimizer with an initial learning rate of 0.001 was selected after grid-based trials across \(\{10^{-4},\,10^{-3},\,10^{-2}\}\). Early stopping based on validation loss was applied to prevent overfitting. To further evaluate robustness, two alternative data-splitting strategies were used:
- Block-wise split: the first 60% of cycles were used for training, the next 20% for validation, and the final 20% for testing.
- Rolling-window cross-validation: the training horizon was progressively extended and evaluated on subsequent unseen blocks.
Both approaches produced consistent model rankings, with MLP and TCN remaining the top-performing architectures, and RMSE variations within 5% of the original chronological split. Performance values for all models under the three partitioning strategies are reported in Table 3.
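A sketch of the rolling-window partitioning idea follows; the initial horizon and step fractions are illustrative choices, not values reported in the paper.

```python
def rolling_window_splits(n_cycles, initial_frac=0.5, step_frac=0.1):
    """Yield (train_indices, test_indices) with a progressively extended training horizon."""
    start = int(n_cycles * initial_frac)
    step = max(1, int(n_cycles * step_frac))
    for end in range(start, n_cycles, step):
        test_end = min(end + step, n_cycles)
        yield list(range(end)), list(range(end, test_end))

# Example: 168 cycles -> folds training on the first 84, 100, 116, ... cycles
folds = list(rolling_window_splits(168))
```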
Results and discussion
This section presents a detailed analysis of the performance of four deep learning models used for estimating the State of Health (SoH) of lithium-ion batteries based on cycle-wise operational data. The models include Multilayer Perceptron (MLP), Gated Recurrent Unit (GRU), Long Short-Term Memory (LSTM), and Temporal Convolutional Network (TCN). Evaluation metrics considered for comparison are Root Mean Square Error (RMSE), Mean Absolute Error (MAE), Coefficient of Determination (R²), and training time in seconds.
Model performance overview
The MLP model produced the most accurate SoH predictions with an RMSE of 0.0069, MAE of 0.0049, and an R² of 0.9955, as summarized in Table 4. The TCN followed closely with RMSE = 0.0071 and R² = 0.9951, demonstrating consistent learning across the cycle range. LSTM achieved a slightly higher RMSE of 0.0076 and R² of 0.9944, while the GRU exhibited the highest error metrics among the models, with RMSE = 0.0160, MAE = 0.0111, and R² = 0.9754, indicating reduced predictive alignment.
The trained architectures were further applied to the B0006 and B0007 datasets. Tables 5 and 6 summarize the RMSE, MAE, and R² values for each model.
Model rankings remain consistent across datasets:
- On B0005, MLP achieved the best performance.
- On B0006 and B0007, TCN and LSTM yielded the lowest errors, GRU slightly higher, and MLP ranked lower compared to its performance on B0005.
This indicates that cell-specific ageing patterns can influence architecture suitability and highlights the importance of evaluating models across multiple cells for robust conclusions.
Voltage–time characteristics
Figure 9 illustrates the complete set of voltage–time curves across all discharge cycles in the NASA B0005 dataset. The initial cycles exhibit a relatively stable voltage profile with minimal sag, while later cycles show an increased rate of voltage drop and earlier cut-off due to capacity degradation. The decline in voltage plateau duration across cycles reflects the progressive loss of active lithium-ion intercalation, indicative of aging effects.
Figure 10 focuses on the final ten discharge cycles and highlights the steep voltage decline and shortened discharge duration near end-of-life. These curves reveal a pronounced reduction in energy delivery per cycle and amplified internal resistance effects. The increased curvature and early termination of discharge confirm the critical degradation stage of the battery.
Model prediction accuracy
The prediction output of the TCN model in Fig. 13 aligns closely with the measured SoH values over the complete cycle range, capturing both long-term degradation patterns and localized variations with low deviation. Figure 14 shows that the LSTM network maintains accurate trend tracking through most of the operational range, with small underestimation and overestimation appearing during the high-degradation phase near end-of-life.
The MLP results in Fig. 15 match the ground truth values with the highest precision among all models, producing a stable prediction curve with minimal oscillation. Figure 16 indicates that the GRU network follows the target curve in early and mid-life stages but deviates in later cycles, with a pronounced drop in predictive accuracy during the rapid degradation phase.
For the B0006 dataset, Figs. 17 and 18 present GRU and LSTM predictions, where LSTM demonstrates smoother alignment while GRU exhibits higher residual spread. The MLP and TCN performance for B0006, shown in Figs. 19 and 20, both maintain close agreement with actual values, with MLP achieving slightly tighter curve fitting.
For the B0007 dataset, Figs. 21 and 22 display GRU and LSTM outputs, revealing similar trends as in B0006, with LSTM producing reduced fluctuation in predicted curves. Figures 23 and 24 confirm that MLP and TCN again provide the closest match to measured SoH, with MLP achieving the lowest residual variation.
Prediction consistency: scatter analysis
Figure 25 shows the scatter plot of the LSTM model predictions compared against actual SoH values. The data points exhibit moderate deviation from the ideal diagonal, with a tendency toward underestimation at higher SoH values and increased scatter toward end-of-life cycles. This behavior aligns with the memory dependency and vanishing gradient limitations in long sequences.
Figure 26 presents the scatter plot of the GRU model, where the predicted values show a broader spread around the reference diagonal line. The GRU results indicate reduced precision in mid-life and late-life cycles, reflecting sensitivity to training noise and sequence irregularities during degradation phases.
Figure 27 displays the scatter plot of the MLP model’s predictions versus actual SoH values. The points are densely aligned along the diagonal, showing minimal bias and tight clustering. The model maintains accuracy across the entire degradation span, validating its ability to capture static input–output mappings from cycle-based data.
Figure 28 depicts the scatter distribution of the TCN model. The data points are highly concentrated along the diagonal with uniform spread and low variance. TCN captures temporal correlations effectively using causal convolutions, yielding robust performance across early, mid, and late battery life. The MLP scatter plot shows strong clustering along the ideal diagonal, confirming minimal prediction error. TCN also reflects a tight distribution. LSTM and GRU scatter plots show wider dispersion.
Training efficiency
Figure 29 shows the training loss curves for MLP, GRU, LSTM, and TCN models. All models reach convergence within 3000 epochs. The MLP demonstrates the fastest and most stable loss reduction, followed closely by TCN, which exhibits similarly smooth convergence behavior. The GRU shows a higher initial loss and slower convergence due to its gating mechanisms and sequential processing overhead. The LSTM follows a similar trend but with slightly reduced computational intensity compared to GRU. These differences in descent characteristics reflect the architectural variations in handling temporal dependencies and parameter update efficiencies.
Residual distribution analysis
Figure 30 presents the residuals across all cycles for MLP, GRU, LSTM, and TCN models. The MLP shows tightly clustered residuals around zero, indicating minimal deviation from actual SoH values across the dataset. TCN exhibits a similarly narrow spread, with consistent low-magnitude residuals across cycles. LSTM produces slightly more variation than MLP and TCN but remains stable across most of the discharge range. The GRU displays the largest fluctuations, particularly in the later cycles, where residuals become increasingly dispersed. This distribution reflects the relative prediction consistency of each model and highlights the architectural impact on cycle-end accuracy.
Cross-model comparison of SoH estimation
Figures 31 and 32 present the comparative performance of the four models on the B0006 and B0007 datasets. In both cases, MLP and TCN predictions align more closely with the actual SoH trajectory, capturing the overall degradation trend with minimal deviation. The LSTM maintains competitive accuracy but introduces slight underestimation and overestimation near end-of-life cycles. The GRU model demonstrates higher error spread, particularly during the later degradation phase, leading to less consistent predictions.
Figure 33 displays the SoH estimation trajectories for MLP, GRU, LSTM, and TCN in a consolidated plot. The predicted curves from MLP and TCN align closely with the actual SoH trend, maintaining consistent overlap across all cycles. The LSTM captures the general degradation pattern but introduces slight underestimations in mid-life regions. The GRU predictions exhibit greater divergence, particularly in the final cycles, where the estimated SoH underperforms relative to the true values.
These results are consistent with the broader benchmarking analysis: MLP achieved the lowest error metrics and fastest training time, followed by TCN, while GRU lagged in both accuracy and efficiency.
Model error metrics
Figure 34 presents the quantitative error metrics, including RMSE and MAE, for each model. The MLP records the lowest values in both categories, with the TCN performing at a comparable level. The LSTM shows moderate error levels, consistent with its mid-range prediction performance. The GRU exhibits the highest RMSE and MAE, corroborating its visible deviations in the SoH prediction plots and wider residual distribution.
Key findings
The MLP model demonstrated superior accuracy, efficiency, and generalization, making it suitable for real-time SoH prediction. TCN provided a balance between accuracy and computational efficiency, while LSTM maintained competitive accuracy with moderate computational cost. The GRU, although capable, underperformed in both accuracy and training time. The visualizations presented in this section substantiate the metrics in Table 4 and provide comprehensive insights into model behavior across operational and predictive dimensions.
Cycle-based State of Health (SoH) estimation was conducted using real operational data from the NASA B0005 battery dataset. Four deep learning models, namely Multilayer Perceptron (MLP), Gated Recurrent Unit (GRU), Long Short-Term Memory (LSTM), and Temporal Convolutional Network (TCN), were trained and evaluated. Among these, the MLP consistently outperformed the other architectures, achieving the lowest RMSE of 0.0069, the lowest MAE of 0.0049, and the highest R² value of 0.9955, all within a training time of just 6.59 s.
The TCN model demonstrated comparable accuracy with an RMSE of 0.0071 and R² of 0.9951, though it required nearly three times more training time than the MLP. Both LSTM and GRU showed acceptable predictive performance; however, the GRU’s training time was significantly higher at 150.06 s, and its accuracy declined relative to the other models.
Loss curves for all models confirmed stable convergence over 3000 epochs, indicating adequate learning across architectures. Residual plots showed tight clustering around zero, suggesting minimal prediction bias and effective generalization across cycles. Scatter plots between actual and predicted SoH further supported these findings, especially for MLP and TCN, where predictions closely followed the ideal line of fit.
From a deployment perspective, the MLP’s rapid convergence and low computational overhead make it highly suitable for real-time integration in embedded Battery Management Systems (BMS). While GRU and LSTM offer competitive learning capability, their recurrent nature results in higher computational demands, limiting their practicality in time-constrained or resource-limited applications. TCN, although slower than MLP, balances accuracy and stability effectively, making it a robust candidate for scenarios prioritizing precision and robustness.
Conclusion
This research evaluated the performance of four deep learning models—MLP, GRU, LSTM, and TCN—for estimating the State of Health (SoH) in lithium-ion batteries using cycle-based discharge data from the NASA B0005 dataset. The SoH values were computed through numerical integration of discharge current over time and normalized against the initial capacity to capture degradation across lifecycle stages. The models were trained and tested using PyTorch implementations, and their predictive accuracy was assessed using RMSE, MAE, and R² metrics. Among the tested architectures, the Multilayer Perceptron (MLP) demonstrated the highest accuracy, achieving an RMSE of 0.0069, MAE of 0.0049, and R² of 0.9955. The TCN followed closely, with comparable performance (RMSE = 0.0071, R² = 0.9951). Residual analysis confirmed low bias and tightly clustered errors across models, while loss curves exhibited smooth convergence, reinforcing the stability of the training process. The GRU and LSTM models also achieved acceptable accuracy but incurred significantly higher training times due to their recurrent architecture.
The findings indicate that MLP achieved the best trade-off between predictive accuracy and computational efficiency, making it highly suitable for real-time implementation in embedded Battery Management Systems (BMS). The results validate the capability of deep learning models, particularly MLP and TCN, in capturing nonlinear degradation behavior and enabling accurate SoH tracking across the operational life of lithium-ion batteries.
The study evaluated B0005, B0006, and B0007 cells, which share similar chemistries and were tested under controlled laboratory conditions. Results may vary for other chemistries such as NMC or LFP, under different operating temperatures, or under dynamic drive cycles. In this work, models were trained only on cycle-level capacity features; incorporating voltage, current, and temperature time series may further enhance prediction accuracy.
Future research will emphasize the application of transfer learning techniques to extend model generalization across different lithium-ion chemistries, enabling adaptability beyond the datasets evaluated in this study. Incorporation of multi-temperature datasets will be pursued to capture thermal effects on degradation dynamics, thereby enhancing the robustness of SoH estimation frameworks under varied environmental conditions.
Data availability
The datasets used and/or analysed during the current study are available from the corresponding author upon reasonable request.
Abbreviations
- SoH: State of health
- RUL: Remaining useful life
- BMS: Battery management system
- GRU: Gated recurrent unit
- TCN: Temporal convolutional network
- CNN: Convolutional neural network
- FPCA: Functional principal component analysis
- BERT: Bidirectional encoder representations from transformers
- TimeGAN: Time-series generative adversarial network
- MAE: Mean absolute error
- MSE: Mean squared error
- NMC: Nickel manganese cobalt (battery chemistry)
- NCA: Nickel cobalt aluminum (battery chemistry)
- GPU: Graphics processing unit
- cuDNN: CUDA deep neural network library
- SoC: State of charge
- EV: Electric vehicle
- MLP: Multilayer perceptron
- LSTM: Long short-term memory
- ANN: Artificial neural network
- RNN: Recurrent neural network
- SETCN: Spectral-enhanced temporal convolutional network
- GAN: Generative adversarial network
- RMSE: Root mean square error
- R²: Coefficient of determination
- ReLU: Rectified linear unit
- LFP: Lithium iron phosphate (battery chemistry)
- CPU: Central processing unit
- CUDA: Compute unified device architecture
References
Wang, R., Li, J., Wang, X., Wang, S. & Pecht, M. Deep learning model for state of health Estimation of lithium batteries based on relaxation voltage. J. Energy Storage. 79, 110189. https://doi.org/10.1016/j.est.2023.110189 (2024).
Massaoudi, M., Abu-Rub, H. & Ghrayeb, A. Advancing lithium-ion battery health prognostics with deep learning: A review and case study. IEEE Open. J. Ind. Appl. 5, 43–62. https://doi.org/10.1109/OJIA.2024.3354899 (2024).
Dineva, A. Evaluation of advances in battery health prediction for electric vehicles from traditional linear filters to latest machine learning approaches. Batteries 10. https://doi.org/10.3390/batteries10100356 (2024).
Ansari, S. et al. Expert deep learning techniques for remaining useful life prediction of diverse energy storage systems: recent advances, execution features, issues and future outlooks. Expert Syst. Appl. 258, 125163. https://doi.org/10.1016/j.eswa.2024.125163 (2024).
Bao, Z. et al. A lightweight and term-arbitrary memory network for remaining useful life prediction of Li-ion battery. IEEE Trans. Instrum. Meas. 74, 1–13. https://doi.org/10.1109/TIM.2025.3556835 (2025).
Zhou, D. & Wang, B. Battery health prognosis using improved Temporal convolutional network modeling. J. Energy Storage. 51, 104480. https://doi.org/10.1016/j.est.2022.104480 (2022).
Li, H., Zhang, P., Ding, S., Yang & Bai, Y. Deep feature extraction in lifetime prognostics of lithium-ion batteries: Advances, challenges and perspectives. Renew. Sustain. Energy Rev. 184, 113576. https://doi.org/10.1016/j.rser.2023.113576 (2023).
Lee, J. H. & Lee, I. S. Hybrid Estimation method for the state of charge of lithium batteries using a Temporal convolutional network and XGBoost. Batteries 9, 544. https://doi.org/10.3390/batteries9110544 (2023).
Zhang, N., Li, J., Ma, Y. & Wu, K. Lithium-ion batteries state of health Estimation based on optimized TCN–GRU–WNN. Energy Rep. 13, 2502–2515. https://doi.org/10.1016/j.egyr.2025.02.007 (2025).
Mohanty, P. K., Jena, P. & Padhy, N. P. TimeGAN-based diversified synthetic data generation following BERT-based model for EV battery SOC prediction: A state-of-the-art approach. IEEE Trans. Ind. Appl. 61, 4167–4185. https://doi.org/10.1109/TIA.2025.3534165 (2025).
Ji, S., Zhang, Z., Stein, H. S. & Zhu, J. Flexible health prognosis of battery nonlinear aging using Temporal transfer learning. Appl. Energy. 377, 124766. https://doi.org/10.1016/j.apenergy.2024.124766 (2025).
Qi, S. et al. Advanced deep learning techniques for battery thermal management in new energy vehicles. Energies 17, 132. https://doi.org/10.3390/en17164132 (2024).
Chen, J. et al. FPCA–SETCN: A novel deep learning framework for remaining useful life prediction. IEEE Sens. J. 24, 30736–30748. https://doi.org/10.1109/JSEN.2024.3447717 (2024).
Pau, P. & Aniballi, A. Hardware-accelerated tiny machine learning for battery state-of-charge Estimation. Appl. Sci. 14, 6240. https://doi.org/10.3390/app14146240 (2024).
Zhao, F., Guo, Y. & Chen, B. Machine learning-based methods for lithium-ion battery state of charge estimation: A review. World Electr. Veh. J. 15, 131. https://doi.org/10.3390/wevj15040131 (2024).
Zhang, M. et al. Data-driven algorithms for state of health prediction of Li-ion batteries: A review. Energies 16, 3167. https://doi.org/10.3390/en16073167 (2023).
Zhao, S. Y. et al. A novel transformer-embedded lithium-ion battery model for joint Estimation of state-of-charge and state-of-health. Rare Met. 43, 5637–5651 (2024).
Li, X., Yu, D., Byg, V. S. & Ioan, S. D. The development of machine learning-based remaining useful life prediction for lithium-ion batteries. J. Energy Chem. 82, 103–121 (2023).
Cui, Z., Wang, L., Li, Q. & Wang, K. A comprehensive review on the state of charge Estimation for lithium-ion battery based on neural network. Int. J. Energy Res. 46, 5423–5440 (2022).
Xu, B., Ge, X., Ji, S. & Wu, Q. Data-driven RUL prediction for lithium-ion batteries based on multilayer optimized fusion deep network. Ionics 1, 1–17 (2025).
Herle, A. et al. A temporal convolution network approach to state-of-charge estimation in Li-ion batteries. In Proc. IEEE 17th India Council Int. Conf. (INDICON) 1–6 (2020).
Andrioaia, I., Gaitan, V. G., Culea, G. & Banu, I. V. Predicting the RUL of Li-ion batteries in UAVs using machine learning techniques. Computers 13, 64 (2024).
Zhong, I. I., Zhang, D., Xu, P. & Tian, Y. Deep learning in state of charge Estimation for Li-ion battery: A review. Preprints 10, 912 (2022).
Kumar, P. P. et al. Impact of activation functions in deep learning based state of charge estimation for batteries. In Proc. IEEE 4th Int. Conf. Sustainable Energy Future Electric Transportation (SEFET) 1–6 (2024).
Liao, W. et al. Enhanced battery health monitoring in electric vehicles: A novel hybrid HBA-HGBR model. J. Energy Storage. 110, 115316 (2025).
Wang, H. et al. State of charge prediction for lithium-ion batteries based on multi-process scale encoding and adaptive graph Convolution. J. Energy Storage. 113, 115482 (2025).
Weddle, P. J. et al. Battery state-of-health diagnostics during fast cycling using physics-informed deep-learning. J. Power Sources. 585, 233582 (2023).
Zhou, Y. A regression learner-based approach for battery cycling ageing prediction—advances in energy management strategy and techno-economic analysis. Energy 256, 124668 (2022).
Bhadriraju, I., Kwon, J. S. I. & Khan, F. An adaptive data-driven approach for two-timescale dynamics prediction and remaining useful life Estimation of Li-ion batteries. Comput. Chem. Eng. 175, 108275 (2023).
Lelli, A., Musa, E., Batista, D. A., Misul & Belingardi, G. On-road experimental campaign for machine learning based state of health Estimation of high-voltage batteries in electric vehicles. Energies 16, 4639 (2023).
Yang, H. et al. Lithium-ion battery life cycle prediction with deep learning regression model. In Proc. IEEE Appl. Power Electron. Conf. Expo. (APEC) 3346–3351 (2020).
Wang, T., Chao, R., Dong, Z. & Feng, L. Weakly supervised battery SoH Estimation with imprecise intervals. IEEE Trans. Energy Convers. 1, 1 (2025).
Bhatt, A. et al. Machine learning approach to predict the second-life capacity of discarded EV batteries for microgrid applications. In Proc. Int. Conf. Intelligent Computing and Optimization (ICO) 633–646 (Springer, 2021).
Yang, Y. et al. Deep transfer learning enables battery state of charge and state of health Estimation. Energy 294, 130779 (2024).
Qaadan, S. et al. Data on battery health and performance: analysing Samsung INR21700-50E cells with advanced feature engineering. Data Brief. 59, 111346 (2025).
Wang, L., Yang, T. & Hu, B. A battery state of health Estimation method for real-world electric vehicles based on physics-informed neural networks. IEEE Sens. J. 1, 1 (2025).
Hemavathi, S. Lithium-ion battery state of health Estimation using intelligent methods. Frankl. Open. 10, 100237 (2025).
Dineva, A. Advances in lithium-ion battery management through deep learning techniques: A performance analysis of state-of-charge prediction at various load conditions. In Proc. IEEE 17th Int. Symp. Appl. Comput. Intell. Informatics (SACI) 000773–000778 (2023).
Ghodake, A. et al. Random forest regression based temperature estimation in lithium-ion batteries. In Proc. 14th Int. Conf. Comput. Commun. Netw. Technol. (ICCCNT) 1–6 (2023).
Wang, X., Dai, K., Hu, M. & Ni, N. Lithium-ion battery health state and remaining useful life prediction based on hybrid model MFE–GRU–TCA. J. Energy Storage. 95, 112442 (2024).
Zhou, K. Q., Qin, Y. & Yuen, C. Lithium-ion battery state of health Estimation by matrix profile empowered online knee onset identification. IEEE Trans. Transp. Electrif. 10, 1935–1946 (2023).
Liang, W. et al. Extended application of inertial measurement units in biomechanics: from activity recognition to force Estimation. Sensors 23, 4229 (2023).
Hosen, M. S., Youssef, R., Kalogiannis, T., Van Mierlo, J. & Berecibar, M. Battery cycle life study through relaxation and forecasting the lifetime via machine learning. J. Energy Storage. 40, 102726 (2021).
She, I. et al. Battery state-of-health Estimation based on incremental capacity analysis method: synthesizing from cell-level test to real-world application. IEEE J. Emerg. Sel. Top. Power Electron. 11, 214–223 (2021).
Hu, X., Yuan, H., Zou, C., Li, Z. & Zhang, L. Co-estimation of state of charge and state of health for lithium-ion batteries based on fractional-order calculus. IEEE Trans. Veh. Technol. 67, 10319–10329 (2018).
Yang, N., Song, Z., Hofmann, H. & Sun, J. Robust state of health Estimation of lithium-ion batteries using convolutional neural network and random forest. J. Energy Storage. 48, 103857 (2022).
Yan, W. et al. A battery management system with a Lebesgue-sampling-based extended Kalman filter. IEEE Trans. Ind. Electron. 66, 3227–3236 (2018).
Shen, J. et al. Accurate state of health Estimation for lithium-ion batteries under random charging scenarios. Energy 279, 128092 (2023).
Yun, Z. & Qin, W. Remaining useful life Estimation of lithium-ion batteries based on optimal time series health indicator. IEEE Access. 8, 55447–55461 (2020).
Hashemi, S. R., Mahajan, A. M. & Farhad, S. Online Estimation of battery model parameters and state of health in electric and hybrid aircraft application. Energy 229, 120699 (2021).
Zhu, X., Xu, C., Song, T., Huang, Z. & Zhang, Y. Sparse self-attentive transformer with multiscale feature fusion on long-term SoH forecasting. IEEE Trans. Power Electron. 1, 1 (2024).
Jarraya et al. SoH-KLSTM: A hybrid Kolmogorov–Arnold network and LSTM model for enhanced lithium-ion battery health monitoring. J. Energy Storage. 122, 116541. https://doi.org/10.1016/j.est.2025.116541 (2025).
Huang, H., Bian, C., Wu, M., An, D. & Yang, S. A novel integrated SOC–SoH Estimation framework for whole-life-cycle lithium-ion batteries. Energy 288, 129801. https://doi.org/10.1016/j.energy.2023.129801 (2024).
Chen, H. et al. Towards practical data-driven battery state of health estimation: advancements and insights targeting real-world data. J. Energy Chem. 110, 657–680. https://doi.org/10.1016/j.jechem.2025.07.022 (2025).
Wang, Y. et al. A comprehensive review of machine learning-based state of health Estimation for lithium-ion batteries: Data, features, algorithms, and future challenges. Renew. Sustain. Energy Rev. 224, 116125. https://doi.org/10.1016/j.rser.2025.116125 (2025).
Bao, Z. et al. A multiple aging factor interactive learning framework for lithium-ion battery state-of-health Estimation. Eng. Appl. Artif. Intell. 148, 110388. https://doi.org/10.1016/j.engappai.2025.110388 (2025).
Jiang et al. A physics-enhanced online joint Estimation method for SoH and SoC of lithium-ion batteries in eVTOL aircraft applications. J. Energy Storage. 112, 115567. https://doi.org/10.1016/j.est.2025.115567 (2025).
Bhargav, K. B. Remaining Useful Life Prediction of Lithium-Ion Batteries Using Machine Learning.
Annamalai, K. R. et al. Battery’s cell performance with comparative analysis of DeepTCN, TL, LSTM and GRU using synthetic dataset. In Proc. 5th Int. Conf. Smart Electron. Commun. (ICOSEC) 216–220. https://doi.org/10.1109/ICOSEC61587.2024.10722634 (2024).
Li, P. et al. Applying neural network to health Estimation and lifetime prediction of lithium-ion batteries. IEEE Trans. Transp. Electrif. 11, 4224–4248. https://doi.org/10.1109/TTE.2024.3457621 (2025).
Zhang, C. et al. Decoding battery aging in fast-charging electric vehicles: an advanced SoH Estimation framework using real-world field data. Energy Storage Mater. 78, 104236. https://doi.org/10.1016/j.ensm.2025.104236 (2025).
Lipu, M. H. et al. Deep learning enabled state of charge, state of health and remaining useful life Estimation for smart battery management system: Methods, implementations, issues and prospects. J. Energy Storage. 55, 105752. https://doi.org/10.1016/j.est.2022.105752 (2022).
Ma, X. et al. Electric vehicle range prediction considering real-time driving factors and battery capacity index. Transp. Res. D Transp. Environ. 144, 104795. https://doi.org/10.1016/j.trd.2025.104795 (2025).
Yang, Z., Zhang, Y. & Zhang, Y. Prediction of the SoH and cycle life of fast-charging lithium-ion batteries based on a machine learning framework. Futur Batter. 7, 100088. https://doi.org/10.1016/j.fub.2025.100088 (2025).
Geng, M., Su, Y., Liu, C., Chen, L. & Huang, X. Interpretable deep learning with uncertainty quantification for lithium-ion battery SoH Estimation. Energy 138027. https://doi.org/10.1016/j.energy.2025.138027 (2025).
Gui, X. et al. Multi-modal data information alignment based SoH Estimation for lithium-ion batteries using a local–global parallel CNN–Transformer network. J. Energy Storage. 129, 117178. https://doi.org/10.1016/j.est.2025.117178 (2025).
Chen, Y. et al. Exploring life warning solution of lithium-ion batteries in real-world scenarios: TCN–Transformer fusion model for battery pack SoH Estimation. Energy 138053. https://doi.org/10.1016/j.energy.2025.138053 (2025).
Shao, L., Zhang, Y., Zheng, X., Yang, R. & Zhou, W. SoH Estimation of lithium-ion batteries subject to partly missing data: A Kolmogorov–Arnold–Linformer model. Neurocomputing 638, 130181. https://doi.org/10.1016/j.neucom.2025.130181 (2025).
Giazitzis, S. et al. TinyML models for SoH Estimation of lithium-ion batteries based on electrochemical impedance spectroscopy. J. Power Sources. 653, 237568. https://doi.org/10.1016/j.jpowsour.2025.237568 (2025).
Wang, R., Lin, H., Choi, J., Hashemi, A. & Zhu, M. Novel differential voltage features based machine learning approach to lithium-ion batteries SoH prediction at various C-rates. Energy 334, 137651. https://doi.org/10.1016/j.energy.2025.137651 (2025).
Liu, Y., Zhou, B., Pang, T., Fan, G. & Zhang, X. Hybrid fusion for battery degradation diagnostics using minimal real-world data: Bridging laboratory and practical applications. eTransportation 25, 100446. https://doi.org/10.1016/j.etran.2025.100446 (2025).
Li, Y. et al. A hybrid machine learning framework for joint SoC and SoH Estimation of lithium-ion batteries assisted with fiber sensor measurements. Appl. Energy. 325, 119787. https://doi.org/10.1016/j.apenergy.2022.119787 (2022).
Lin, C. et al. Physics-informed machine learning for accurate SoH Estimation of lithium-ion batteries considering various temperatures and operating conditions. Energy 318, 134937. https://doi.org/10.1016/j.energy.2025.134937 (2025).
Jang et al. State of health Estimation of lithium-ion battery cell based on optical thermometry with physics-informed machine learning. Eng. Appl. Artif. Intell. 140, 109704. https://doi.org/10.1016/j.engappai.2024.109704 (2025).
Zheng, M. & Luo, X. Joint Estimation of state of charge (SoC) and state of health (SoH) for lithium ion batteries using support vector machine (SVM), convolutional neural network (CNN) and long short-term memory network (LSTM) models. Int. J. Electrochem. Sci. 19, 100747. https://doi.org/10.1016/j.ijoes.2024.100747 (2024).
Zhao, X., Qu, Y., Li, J., Nan & Burke, A. F. Real-time prediction of battery remaining useful life using hybrid-fusion deep neural networks. Energy 328, 136618. https://doi.org/10.1016/j.energy.2025.136618 (2025).
Pandit, R. & Ahlawat, N. A standardized comparative framework for machine learning techniques in lithium-ion battery state of health Estimation. Futur Batter. 7, 100099. https://doi.org/10.1016/j.fub.2025.100099 (2025).
Tau, R. K., Yahya, A., Mangwala, M. & Ditshego, N. M. XGBoost–Random forest stacking with dual-state Kalman filtering for real-time battery SoC Estimation. Results Eng. 27, 106428. https://doi.org/10.1016/j.rineng.2025.106428 (2025).
Chen, Y. et al. A multi-source domain transfer learning method based on ensemble learning model for lithium-ion batteries SoC Estimation in small sample real vehicle data. Energy 334, 137781. https://doi.org/10.1016/j.energy.2025.137781 (2025).
Yu, X., Ma, Z. & Wen, J. Joint Estimation of SoH and RUL for lithium batteries based on variable frequency and model integration. Int. J. Electrochem. Sci. 19, 100842. https://doi.org/10.1016/j.ijoes.2024.100842 (2024).
Wei, Z. et al. SoH Estimation of lithium-ion batteries based on multi-feature extraction and improved DLEM. J. Energy Storage. 120, 116460. https://doi.org/10.1016/j.est.2025.116460 (2025).
Duan, W. et al. Battery SoH Estimation and RUL prediction framework based on variable forgetting factor online sequential extreme learning machine and particle filter. J. Energy Storage. 65, 107322. https://doi.org/10.1016/j.est.2023.107322 (2023).
Wang, Y., Yu, Y., Ma, Y. & Shi, J. Lithium-ion battery health state Estimation based on improved snow ablation optimization algorithm–deep hybrid kernel extreme learning machine. Energy 323, 135772. https://doi.org/10.1016/j.energy.2025.135772 (2025).
Chahbaz et al. Accelerated impedance-based aging modeling for NCA/Gr–SiOx batteries and the impact of reduced test duration. Cell. Rep. Phys. Sci. 6, 102654. https://doi.org/10.1016/j.xcrp.2025.102654 (2025).
Qin, P. & Zhao, L. Vehicle–cloud collaboration method enables accurate battery state of health Estimation under real-world driving conditions. Energy 334, 137829. https://doi.org/10.1016/j.energy.2025.137829 (2025).
Zhang, Z., Sun, S., Wang, Z. & Lin, N. Battery retirement state prediction method based on real-world data and the TabNet model. Energy 334, 137795. https://doi.org/10.1016/j.energy.2025.137795 (2025).
Yap, J. W. et al. Unveiling real-world aging mechanisms of lithium-ion batteries in electric vehicles. J. Energy Storage. 130, 117420. https://doi.org/10.1016/j.est.2025.117420 (2025).
Luder, I. et al. Big data generation platform for battery faults under real-world variances. Green. Energy Intell. Transp. 4, 100282. https://doi.org/10.1016/j.geits.2025.100282 (2025).
Xia et al. SoH Estimation of lithium-ion batteries with local health indicators in multi-stage fast charging protocols. Energy 334, 137617. https://doi.org/10.1016/j.energy.2025.137617 (2025).
Demirci, O., Taskin, S., Schaltz, E. & Demirci, B. A. Review of battery state Estimation methods for electric vehicles—Part II: SoH Estimation. J. Energy Storage. 96, 112703. https://doi.org/10.1016/j.est.2024.112703 (2024).
Wang, Z., Zhao, L., Li, Y. & Wang, W. A data-efficient method for lithium-ion battery state-of-health Estimation based on real-time frequent itemset image encoding. Appl. Energy. 398, 126416. https://doi.org/10.1016/j.apenergy.2025.126416 (2025).
Lin et al. A lightweight two-stage physics-informed neural network for SoH Estimation of lithium-ion batteries with different chemistries. J. Energy Chem. 105, 261–279. https://doi.org/10.1016/j.jechem.2025.01.057 (2025).
Hu, X., Xu, L., Lin, X. & Pecht, M. Battery lifetime prognostics. Joule 4, 310–346. https://doi.org/10.1016/j.joule.2019.11.018 (2020).
Ji et al. A review on lithium-ion battery modeling from mechanism-based and data-driven perspectives. Processes 12, 1871. https://doi.org/10.3390/pr12091871 (2024).
Bansilal01. Cycle-Based State of Health (SoH) Estimation Using MLP, GRU, TCN, and LSTM, GitHub Repository. https://github.com/Bansilal01/Cycle-Based-State-of-Health-SoH-Estimation-Using-MLP-GRU-TCN-and-LSTM (Accessed 16 August 2025) (2025).
Zhang, K., Yang & Zhang, X. Particle swarm optimization–gated recurrent unit neural network lithium battery state of health Estimation based on feature optimization selection strategy. J. Power Sources. 654, 237798. https://doi.org/10.1016/j.jpowsour.2025.237798 (2025).
He, J., Ma, Z., Liu, Y., Ma, C. & Gao, W. Remaining useful life prediction of lithium-ion battery based on improved gated recurrent unit–generalized cauchy process. J. Energy Storage. 126, 117086. https://doi.org/10.1016/j.est.2025.117086 (2025).
Acknowledgements
The authors would like to acknowledge the support of the Manipal Academy of Higher Education (MAHE) Manipal for paying the Article Processing Charges (APC) of this publication.
Funding
Open access funding provided by Manipal Academy of Higher Education, Manipal. This research was funded by Manipal Academy of Higher Education (MAHE), Manipal.
Author information
Authors and Affiliations
Contributions
All authors contributed to the study, conception, and design. All authors commented on the manuscript. All authors read and approved the final manuscript.
Corresponding author
Ethics declarations
Competing interests
The authors declare no competing interests.
Consent for publication
The authors transfer to Springer the publication rights and warrant that the contribution is original.
Additional information
Publisher’s note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Bairwa, B., Pareek, K. & Jadoun, V.K. Cycle based state of health estimation of lithium ion cells using deep learning architectures. Sci Rep 15, 37078 (2025). https://doi.org/10.1038/s41598-025-20995-7