Abstract
Mechanical ventilation is of utmost importance in the management of individuals afflicted with severe pulmonary conditions. During a pandemic, it becomes imperative to build ventilators that can autonomously adapt their parameters over the course of treatment. To fulfil this requirement, a research investigation was undertaken with the aim of forecasting the magnitude of pressure applied on the patient by the ventilator. The forecast was derived from a comprehensive analysis of several variables, including the ventilator's characteristics and the patient's medical state, using a Long Short-Term Memory (LSTM) model. To enhance the predictive accuracy of the LSTM model, the Chimp Optimization Algorithm (ChoA) was employed. The integration of LSTM and ChoA led to the development of the LSTM-ChoA model, which successfully tackled the issue of hyperparameter selection for the LSTM model. The experimental results revealed that the LSTM-ChoA model exhibited superior performance compared to alternative optimization algorithms, namely the grey wolf optimizer (GWO), the whale optimization algorithm (WOA), and particle swarm optimization (PSO). Additionally, the LSTM-ChoA model outperformed regression models, including the K-nearest neighbor (KNN) regressor, Random Forest (RF) regressor, and Support Vector Machine (SVM) regressor, in accurately predicting ventilator pressure. The findings indicate that the suggested predictive model, LSTM-ChoA, demonstrates a reduced mean square error (MSE) value. Specifically, when comparing ChoA with GWO, the MSE fell by around 14.8%; when comparing ChoA with PSO and WOA, the MSE decreased by approximately 60%. Additionally, the analysis of variance (ANOVA) findings revealed that the p-value for the LSTM-ChoA model was 0.000, which is less than the predetermined significance level of 0.05, indicating that the results of the LSTM-ChoA model are statistically significant.
Introduction
Background
Numerous illnesses call for a ventilator in medicine and medical transportation systems. The poliomyelitis pandemic in the 1950s provided conclusive evidence of a mechanical ventilator's effectiveness1,2,3. A ventilator can assist in moving air into and out of the lungs if a person has respiratory failure, hypoxemia, or hypercapnia and cannot breathe properly4,5. Furthermore, someone undergoing surgery under general anesthesia will also need a ventilator for adequate breathing. However, because mechanical ventilators are time-consuming, expensive, and ineffective, it is particularly challenging to have them available for all patients during a pandemic. Machine learning, on the other hand, can support autonomous pressure selection and prediction6. Many traditional mechanical ventilator prediction techniques have been introduced in recent years. Nonetheless, these techniques have paid insufficient attention to the cost of newly developed technologies and to pressure accuracy, since the pressure was still adjusted manually by a doctor7.
Combining derivative, integral, and proportional controls is the industry standard for proportional-integral-derivative (PID) controllers. In this fusion, the discrepancy from the desired waveform is employed as input, leading to adjustments in the pressure chamber to rectify the output waveform and minimize the margin of deviation. Pressure inaccuracies emerge from this operating design, which calls for continual human surveillance and intervention by medical experts. This method complies with the required requirements and safety standards; however, it lacks the adaptability and quickness needed for a variety of clinical scenarios. Therefore, this problem might be solved with a dynamic controller that can continuously adjust pressure settings. This is the precise situation where replacing the traditional PID controller algorithm with machine learning comes into play.
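To make the control loop concrete, the following is a minimal Python sketch of the PID update described above; the gains, time step, and pressure values are hypothetical illustrations, not parameters from the paper.

```python
# Minimal PID sketch: the deviation from the desired pressure waveform drives
# a correction applied to the pressure chamber. Gains and values are
# illustrative placeholders.
def pid_step(setpoint, measured, state, kp=1.0, ki=0.1, kd=0.05, dt=0.03):
    error = setpoint - measured              # deviation from desired waveform
    state["integral"] += error * dt          # accumulated (integral) term
    derivative = (error - state["prev_error"]) / dt
    state["prev_error"] = error
    return kp * error + ki * state["integral"] + kd * derivative

state = {"integral": 0.0, "prev_error": 0.0}
control = pid_step(setpoint=15.0, measured=12.5, state=state)  # cmH2O
print(control)
```

A dynamic, learned controller replaces this fixed-gain loop with a model that adapts its corrections to the observed patient response.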
Due to the recent COVID-19 pandemic, a considerable array of open-source ventilator designs has emerged. This situation has paved the way for numerous endeavors to enhance these ventilators using advanced techniques and systems. These efforts involve incorporating decision support methods reliant on machine learning models that are finely tuned to incorporate patient responses. The overarching aim is to enhance the accuracy of pressure values.
There have been more and more studies on the forecasting of ventilator pressure using LSTM networks because it has been shown that the LSTM, as a type of Recurrent Neural Network (RNN)8,9, is suited for evaluating long-series data. The relevant hyperparameters must be changed to build an LSTM network model that satisfies the specifications10, and researchers frequently choose these values based on past knowledge and experience11. It could be necessary to adjust the pertinent parameters for specific problems manually12,13.
Related works and problem statement
Due to the increasing complexity and multidimensionality of almost every aspect of the real world, research into optimization problems remains robust. The optimization process involves determining the optimal configuration of variables based on the objective function of the optimization problem. However, traditional optimization methods sometimes struggle to provide timely solutions. Therefore, many nature-inspired metaheuristic algorithms have been developed to tackle challenging problems. These include the genetic algorithm (GA)14, simulated annealing algorithm (SA)15, particle swarm optimization (PSO)16, ant colony optimization (ACO)17, artificial bee colony (ABC)17, differential evolution (DE)18, harmony search (HS)19, firefly algorithm (FA)20, bat algorithm (BA)21, flower pollination algorithm (FPA)22, dragonfly algorithm (DA)23, grey wolf optimizer (GWO)24, moth-flame optimization algorithm (MFO)25, earthworm optimization algorithm (EWA)26, elephant herding optimization (EHO)27, brain storm optimization algorithm (BSO)28, fireworks algorithm (FWA)29, moth search (MS)30, squirrel search algorithm (SSA)31, salp swarm algorithm (SSA)32, and whale optimization algorithm (WOA)25. Numerous studies have helped to advance LSTM network optimization techniques in recent years. Table 1 provides information about the models that were used as well as a list of current related research.
Every optimization technique comes with its own set of advantages and disadvantages. The selection of an approach hinges on the particular demands of the task, the network's structure, and the computational resources at hand. It is often necessary to experiment and fine-tune these techniques to achieve optimal results for a given LSTM-based application.
1. Gradient clipping and weight initialization:

Benefits: Gradient clipping helps address the vanishing/exploding gradient problem by imposing a threshold on the gradients during training. It prevents gradients from becoming too large or too small, leading to more stable and effective training. Proper weight initialization techniques, such as Glorot and He initialization, ensure that the weights are initialized in a way that promotes efficient gradient flow and convergence.

Drawbacks: Gradient clipping can potentially introduce biases to the gradient updates and may require careful tuning of the threshold value. Weight initialization techniques may not always guarantee the optimal initialization for all network architectures or tasks, and finding the appropriate initialization scheme can still be a trial-and-error process.

2. Non-saturating activation functions and gradient clipping:

Benefits: Non-saturating activation functions, such as ReLU (Rectified Linear Unit), help mitigate the vanishing gradient problem by avoiding saturation in the activation values. They facilitate the flow of gradients and enable better learning in deep architectures. Gradient clipping, as mentioned earlier, prevents gradients from becoming too large, ensuring more stable training.

Drawbacks: Activation functions such as ReLU, which do not saturate, can encounter the "dying ReLU" phenomenon, in which certain neurons become inactive and fail to recover during training. This issue can hinder the learning process and network performance. Gradient clipping, if applied too aggressively, can result in underutilization of the gradient information and slow convergence.

3. Adam optimization:

Benefits: Adam combines adaptive learning rates and momentum to efficiently optimize LSTM networks. By adjusting the learning rate on a per-parameter basis, it achieves enhanced optimization performance and quicker convergence. Adam also maintains separate learning rates for each parameter, making it suitable for dealing with sparse gradients and noisy data.

Drawbacks: Adam has several hyperparameters that require careful tuning, such as the learning rate, momentum parameters, and exponential decay rates. Poor hyperparameter selection can lead to suboptimal results or difficulties in convergence. Additionally, Adam may not always generalize well to all types of optimization problems and could benefit from modifications in certain scenarios.

4. Scheduled sampling:

Benefits: Scheduled sampling addresses the discrepancy between training and inference by gradually introducing model predictions during training. It allows the model to adapt to the errors made by its own predictions, leading to improved performance during inference. This technique is particularly useful in sequence prediction tasks, where the model needs to generate accurate outputs based on its own predictions.

Drawbacks: Scheduled sampling introduces a discrepancy between the training and inference processes, which can make the optimization task more challenging. Determining the appropriate schedule for introducing model predictions requires careful consideration and may vary depending on the specific task and dataset.

5. Layer normalization:

Benefits: Layer normalization addresses the internal covariate shift problem by normalizing the inputs to each layer. It helps stabilize the optimization process, accelerates training, and allows for faster convergence. Layer normalization also provides better generalization and performance, particularly in deep networks.

Drawbacks: Layer normalization introduces additional computational overhead, as it requires calculating the mean and variance of each layer's inputs. This overhead can make training slower compared to other normalization techniques. Additionally, the benefits of layer normalization may vary depending on the specific architecture and task, and it might not be universally applicable in all cases.

6. Residual learning:

Benefits: Residual learning, achieved through skip connections, allows for training very deep networks, including LSTMs. It mitigates the vanishing/exploding gradient problem by providing shortcut connections that facilitate gradient flow. Residual connections enable the training of deeper LSTM architectures, leading to improved accuracy and better utilization of network depth.

Drawbacks: The main challenge with residual learning is designing appropriate skip connections and ensuring that the information flow through the network is optimized. Poorly designed skip connections or overly deep architectures can still suffer from optimization difficulties. Additionally, residual learning may require careful tuning of hyperparameters and architectural decisions.
Motivations, contributions, and article organization
In order to address these challenges in our study, the parameters of the LSTM model are fine-tuned using the ChoA algorithm, resulting in the development of the LSTM-ChoA prediction model. The experimental findings demonstrate that this optimized model effectively predicts ventilator pressure for patients, enabling mechanical ventilators to provide tailored patient care. The key contributions of this research can be summarized as follows:
(1) The ventilator dataset was explored, analyzed, and enriched with new features.

(2) The ChoA algorithm optimized the parameters of the LSTM, and the LSTM-ChoA forecasting model was subsequently constructed.

(3) The ChoA algorithm's efficiency was compared with that of other metaheuristic optimization algorithms.

(4) The prediction efficiency of LSTM-ChoA was compared with that of other regressor models.
This research article comprises several sections: related works, motivation and study objectives, theoretical framework and approach, experimentation, experimental results, and conclusion. The theoretical and methodological part encompasses three key components: an explanation of LSTM model theory, the ChoA algorithm, and a concise overview of our proposed methodology.
Materials and methods
Materials
The ventilator data used in this work was obtained from the Google Brain public dataset, which is publicly accessible on Kaggle42. An artificial bellows test lung was attached to a custom open-source ventilator using a respiratory circuit to create the dataset. Figure 1 shows the configuration, with the predicted state variable (airway pressure) indicated in blue and the two control inputs highlighted in green. The first control input is the percentage of opening of the inspiratory solenoid valve, which allows air into the lung; it has a scale of 0–100, with 0 signifying total closure (no air allowed in) and 100 signifying complete opening. The second control input is a binary variable indicating whether the expiratory valve is open (1) or closed (0), allowing air to be discharged. The goal of positive end-expiratory pressure (PEEP) is to keep the lungs from collapsing43.
Each time series corresponds to a breath that lasts about three seconds. The dataset comprises about 75,000 breaths for training and 50,000 breaths for testing, each described by five features. Information on the significant features and target columns is compiled in Table 2.
Among the observed values, R equal to 20 was found to be the most common in patients with severe lung disease10. The extent of openness of the inspiratory solenoid valve (ISV) is quantified as a continuous parameter spanning from 0 to 100, where 0 signifies a fully closed position and 100 corresponds to complete openness. The expiratory valve status (ESV), in turn, is a binary variable denoting whether the valve is in the open state (1) or the closed state (0), facilitating air release.
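For orientation, the following is a minimal pandas sketch of loading this dataset; the column names match the public Kaggle competition files (id, breath_id, R, C, time_step, u_in, u_out, pressure), where u_in is the ISV opening and u_out the ESV state.

```python
import pandas as pd

# Load the Google Brain ventilator dataset (train.csv from the Kaggle page).
train = pd.read_csv("train.csv")

# u_in  = inspiratory solenoid valve opening, continuous in [0, 100]
# u_out = expiratory valve state, binary (0 = closed, 1 = open)
# pressure = airway pressure (the prediction target)
print(train[["breath_id", "R", "C", "time_step", "u_in", "u_out", "pressure"]].head())

# Each breath is an 80-step time series lasting roughly three seconds.
print(train.groupby("breath_id").size().describe())
```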
Preliminary knowledge
Long short-term memory (LSTM)
The construction of the LSTM, a particular kind of RNN44,45, is depicted in Fig. 2. Its hidden layer comprises one or more memory cells, each with an input, output, and forget gate46,47.
The forget gate (\({f}_{t}\)) is responsible for deciding whether to retain the long-term state \(c\) and is computed using the current time step \(t\) input \({x}_{t}\) and the output \({h}_{t-1}\) from the preceding time step \(t-1\). The corresponding formula for this operation is:

\({f}_{t}=\sigma \left({W}_{f}\cdot \left[{h}_{t-1},{x}_{t}\right]+{B}_{f}\right)\) (1)
The matrix \({W}_{f}\) corresponds to the weights linked with the forget gate, while \({B}_{f}\) denotes the bias term. The sigmoid function \(\sigma (\cdot )\) is applied to the combined inputs to compute the forget gate's decision.
The input gate is responsible for generating a new candidate cell state \(\tilde{c}\), performing the relevant computations, and controlling the amount of information to be added. The calculation formulas for the input gate and the resulting cell-state update are as follows:

\({i}_{t}=\sigma \left({W}_{i}\cdot \left[{h}_{t-1},{x}_{t}\right]+{B}_{i}\right)\) (2)

\({\tilde{c}}_{t}=\mathrm{tanh}\left({W}_{c}\cdot \left[{h}_{t-1},{x}_{t}\right]+{B}_{c}\right)\) (3)

\({c}_{t}={f}_{t}\odot {c}_{t-1}+{i}_{t}\odot {\tilde{c}}_{t}\) (4)
In the given equations, \({\tilde{c}}_{t}\) represents the candidate cell state, while \({W}_{c}\) corresponds to the weight matrix linked to the cell state, and \({B}_{c}\) signifies the bias term associated with the cell state. Similarly, \({i}_{t}\) represents the output generated by the input gate, \({W}_{i}\) is the weight matrix for the input gate, and \({B}_{i}\) stands for the bias term pertaining to the input gate. The matrix denoted as \(\left[{h}_{t-1},{x}_{t}\right]\) concatenates two vectors: \({x}_{t}\), which signifies the input at the present time step \(t\), and \({h}_{t-1}\), representing the output from the prior time step \(t-1\). The sigmoid activation function is denoted by \(\sigma (\cdot )\), while the hyperbolic tangent function is represented as \(\mathrm{tanh}\left(\cdot \right)\).
The output gate produces the output \({o}_{t}\) using a sigmoid function \(\sigma (\cdot )\) that takes the input \({x}_{t}\) at the current time step \(t\) and the output \({h}_{t-1}\) from the previous time step \(t-1\) as inputs. The calculation, together with the final hidden-state update, can be expressed as follows:

\({o}_{t}=\sigma \left({W}_{o}\cdot \left[{h}_{t-1},{x}_{t}\right]+{B}_{o}\right)\) (5)

\({h}_{t}={o}_{t}\odot \mathrm{tanh}\left({c}_{t}\right)\) (6)

where \({B}_{o}\) represents the output gate's bias term and \({W}_{o}\) is the weight matrix of the output gate.
The LSTM structure, with its unique three-gate architecture and a hidden state with memory capabilities, effectively addresses the issue of long-term dependencies by capturing long-term historical information. The cell state undergoes a series of operations. First, the forget gate \({f}_{t}\) at the current time step \(t\) regulates which information from the previous cell state \({c}_{t-1}\) at time step \(t-1\) needs to be discarded and which information should be retained. Second, the input gate selectively incorporates new information in conjunction with the forget gate. Finally, the cell state \({c}_{t}\) is updated through a sequence of computations, and the LSTM employs the output gate, the cell state \({c}_{t}\), and a tanh layer to calculate the final output value \({h}_{t}\).
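To make the gate equations concrete, here is a minimal NumPy sketch of a single LSTM step implementing Eqs. (1)–(6); the toy dimensions and random weights are illustrative only.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_cell(x_t, h_prev, c_prev, W, B):
    """One LSTM step: W and B hold the weights/biases for the forget (f),
    input (i), candidate (c), and output (o) transformations, each applied
    to the concatenation [h_{t-1}, x_t]."""
    z = np.concatenate([h_prev, x_t])
    f_t = sigmoid(W["f"] @ z + B["f"])       # forget gate, Eq. (1)
    i_t = sigmoid(W["i"] @ z + B["i"])       # input gate, Eq. (2)
    c_hat = np.tanh(W["c"] @ z + B["c"])     # candidate cell state, Eq. (3)
    c_t = f_t * c_prev + i_t * c_hat         # cell-state update, Eq. (4)
    o_t = sigmoid(W["o"] @ z + B["o"])       # output gate, Eq. (5)
    h_t = o_t * np.tanh(c_t)                 # hidden state, Eq. (6)
    return h_t, c_t

# Toy dimensions: 1 input feature, 4 hidden units.
rng = np.random.default_rng(0)
n_in, n_hid = 1, 4
W = {k: rng.normal(0, 0.1, (n_hid, n_hid + n_in)) for k in "fico"}
B = {k: np.zeros(n_hid) for k in "fico"}
h, c = np.zeros(n_hid), np.zeros(n_hid)
h, c = lstm_cell(np.array([0.5]), h, c, W, B)
```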
The choice of the Long Short-Term Memory (LSTM) model for predicting ventilator pressure in this study likely stems from several advantages that make LSTMs well-suited for sequential data prediction. Here are some reasons why the authors might have chosen LSTM and its advantages48,49,50:
1. LSTMs are a type of recurrent neural network (RNN) designed to handle sequential data. Ventilator data is sequential in nature, as it consists of time-series measurements. LSTMs can capture dependencies over time, making them suitable for modeling such data.

2. LSTMs are designed to overcome the vanishing gradient problem, allowing them to capture long-term dependencies in data. In the context of ventilator data, where previous time steps can significantly impact future pressure values, LSTMs are capable of capturing these long-range dependencies.

3. LSTMs have an internal memory mechanism that allows them to retain information over long sequences. This is particularly useful in scenarios where previous states, such as lung capacity, play a crucial role in predicting future states, like airway pressure.

4. LSTMs are highly adaptable and can handle different input and output formats. This flexibility makes them suitable for a variety of machine learning tasks, including time series forecasting.

5. LSTMs can automatically learn relevant features from the data, reducing the need for extensive manual feature engineering. This can be advantageous when working with complex datasets like ventilator pressure data.

6. LSTMs can handle noisy data and are less sensitive to minor variations in the input data, which is beneficial when working with real-world data that may contain noise or measurement errors.

7. LSTMs are capable of capturing complex interactions between different features in the data. In the context of ventilator data, where various factors, such as the inspiratory solenoid valve and expiratory valve, can interact to affect airway pressure, this feature is valuable.

8. LSTMs can be effectively used for regression tasks, making them suitable for predicting numerical values like airway pressure.
It is important to note that while LSTMs offer these advantages, the model's success depends on factors such as data quality, the choice of hyperparameters, and the availability of sufficient training data. LSTM was chosen here because of its ability to model sequential data effectively, its suitability for time series forecasting tasks, and its robustness in handling the complex and noisy nature of ventilator pressure data.
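As an illustration of such an LSTM regressor, the following is a minimal sketch assuming a Keras/TensorFlow backend (the paper does not name the framework); the `build_lstm` helper, layer sizes, and feature count are hypothetical.

```python
import tensorflow as tf

def build_lstm(n_layers=2, n_hidden=64, learning_rate=1e-3, n_features=8):
    """Sketch of a per-time-step pressure regressor over 80-step breaths."""
    model = tf.keras.Sequential()
    model.add(tf.keras.Input(shape=(80, n_features)))   # 80 steps per breath
    for _ in range(n_layers):
        model.add(tf.keras.layers.LSTM(n_hidden, return_sequences=True))
    model.add(tf.keras.layers.Dense(1))                 # pressure at each step
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate), loss="mse")
    return model

model = build_lstm()
model.summary()
```

The hyperparameters exposed here (number of layers, hidden units, learning rate, training iterations) are exactly the quantities the ChoA optimizer tunes in the next section.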
Chimp optimization algorithm (ChoA)
This model draws inspiration from chimpanzee intelligence and breeding behavior in cluster hunting51,52,53. Figure 3 depicts attack, chase, barrier, and driver as the four strategies used to replicate this behavior. Equation (7) represents the distance \(D\) between the chimp and its prey, and Eq. (8) represents the chimpanzee's position update, where \({\alpha }_{\text{prey}}\) is the prey position vector and \({\alpha }_{\text{chimp}}\) is the chimpanzee position vector54:

\(D=\left|c\cdot {\alpha }_{\text{prey}}-m\cdot {\alpha }_{\text{chimp}}\right|\) (7)

\({\alpha }_{\text{chimp}}\left(t+1\right)={\alpha }_{\text{prey}}\left(t\right)-a\cdot D\) (8)

The coefficient vectors \(a\), \(c\), and \(m\) are defined by Eqs. (9) through (11):

\(a=2\cdot l\cdot {r}_{1}-l\) (9)

\(c=2\cdot {r}_{2}\) (10)

\(m=\text{Chaotic value}\) (11)
where \(l\) is a coefficient that decreases linearly from 2.5 to 0 over the iterations55, \({r}_{1}\) and \({r}_{2}\) are random values between 0 and 1, and \(m\) is a chaotic (disordered) vector. The attacker, barrier, chaser, and driver are the best outputs with optimal capabilities for mathematically simulating this system56,57. \(c\) is a random variable within [0, 2] that influences the position of prey relative to the individual positions of the chimps (when \(c\) < 1, the degree of influence weakens; when \(c\) > 1, it strengthens).
The positions of the other chimps in the population are determined by the positions of the Attacker, Barrier, Chaser, and Driver through the distance Eqs. (12), (13), (14), and (15)58:

\({d}_{\text{Attacker}}=\left|{c}_{1}\cdot {\alpha }_{\text{Attacker}}-{m}_{1}\cdot \alpha \right|\) (12)

\({d}_{\text{Barrier}}=\left|{c}_{2}\cdot {\alpha }_{\text{Barrier}}-{m}_{2}\cdot \alpha \right|\) (13)

\({d}_{\text{Chaser}}=\left|{c}_{3}\cdot {\alpha }_{\text{Chaser}}-{m}_{3}\cdot \alpha \right|\) (14)

\({d}_{\text{Driver}}=\left|{c}_{4}\cdot {\alpha }_{\text{Driver}}-{m}_{4}\cdot \alpha \right|\) (15)

where \(\alpha\) is the position vector of the current chimp. Following this, the chimp's next candidate points \(\left({x}_{1},{x}_{2},{x}_{3},{x}_{4}\right)\) are computed using (16) through (19):

\({x}_{1}={\alpha }_{\text{Attacker}}-{a}_{1}\cdot {d}_{\text{Attacker}}\) (16)

\({x}_{2}={\alpha }_{\text{Barrier}}-{a}_{2}\cdot {d}_{\text{Barrier}}\) (17)

\({x}_{3}={\alpha }_{\text{Chaser}}-{a}_{3}\cdot {d}_{\text{Chaser}}\) (18)

\({x}_{4}={\alpha }_{\text{Driver}}-{a}_{4}\cdot {d}_{\text{Driver}}\) (19)

The positions are then updated using Eq. (20):

\(\alpha \left(t+1\right)=\frac{{x}_{1}+{x}_{2}+{x}_{3}+{x}_{4}}{4}\) (20)

Finally, Eq. (21) is applied once the positions have been upgraded.
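The following is a hedged NumPy sketch of this position update (Eqs. (12)–(20)); the chaotic vector is replaced by a plain random vector for simplicity, and the encoding of positions is illustrative.

```python
import numpy as np

def choa_update(pos, attacker, barrier, chaser, driver, l, rng):
    """One ChoA position update: move `pos` relative to the four best
    solutions (attacker, barrier, chaser, driver); l decays 2.5 -> 0."""
    candidates = []
    for leader in (attacker, barrier, chaser, driver):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        a = 2.0 * l * r1 - l            # Eq. (9): exploration/exploitation
        c = 2.0 * r2                    # Eq. (10): prey influence in [0, 2]
        m = rng.random(pos.shape)       # stand-in for the chaotic vector, Eq. (11)
        d = np.abs(c * leader - m * pos)        # Eqs. (12)-(15)
        candidates.append(leader - a * d)       # Eqs. (16)-(19)
    return np.mean(candidates, axis=0)          # Eq. (20): averaged position

rng = np.random.default_rng(1)
pos = rng.random(4)                 # e.g. {NI, LR, NHU, NL} encoded in [0, 1]
leaders = rng.random((4, 4))        # stand-ins for the four best chimps
new_pos = choa_update(pos, *leaders, l=1.5, rng=rng)
```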
The motivation for using the Chimp Optimization Algorithm (ChoA) to optimize the feature space in our study can be explained as follows:
1. The feature space in machine learning models is often non-convex and high-dimensional, making it challenging to find the optimal combination of features manually. ChoA is designed to handle non-convex optimization problems, making it well-suited for feature selection and hyperparameter optimization.

2. ChoA can efficiently identify relevant features in the dataset, which is crucial for improving model performance. By selecting the most informative features, we aim to enhance the accuracy of our airway pressure prediction model while reducing the dimensionality of the input data.

3. ChoA is not limited to feature selection but can also optimize hyperparameters of machine learning models. In our study, ChoA is employed to fine-tune the hyperparameters of the Long Short-Term Memory (LSTM) model, which plays a crucial role in predicting airway pressure accurately.

4. The primary motivation behind using ChoA is to enhance the overall performance of our predictive model. By optimizing both the feature space and model hyperparameters, we aim to achieve superior accuracy and predictive capabilities in comparison to other optimization techniques.

5. The utilization of ChoA in the context of feature selection and hyperparameter optimization is relatively novel. By applying this innovative optimization approach, we intend to contribute to the scientific knowledge in the field of machine learning and healthcare by showcasing the potential benefits of ChoA in enhancing predictive models.
In summary, the motivation for using ChoA lies in its ability to address the challenges posed by complex feature spaces, efficiently select the most relevant features, fine-tune model hyperparameters, and ultimately improve the performance of our airway pressure prediction model. By employing ChoA, we aim to demonstrate its effectiveness and showcase its potential contributions to the field of healthcare and predictive modeling.
Methodology
As shown in Figure 4, our proposed approach for ventilator pressure prediction includes five steps. The study encompassed several stages, starting with Exploratory Data Analysis (EDA) and progressing through feature extraction, the development of a regression model, an assessment of the model's performance, and a subsequent comparison of the outcomes achieved by the proposed regression model with those of benchmark models. Each phase is described in detail below.
Exploratory data analysis
This process enabled us to extract additional meaningful insights from the ventilator dataset36, uncover distinct patterns within the data, and identify the most suitable models that utilize the available features to achieve faster convergence61 and minimize prediction errors62.
The exploratory data analysis involves studying the correlation between the dataset variables and the airway pressure patterns during the inhalation and exhalation phases, and finally analyzing the effect that R–C pairs have on the pressure distribution. Our study uses Spearman's rank correlation \(\rho\) to assess the correlation between dataset features as follows63:

\(\rho =1-\frac{6\sum_{i=1}^{n}{d}_{i}^{2}}{n\left({n}^{2}-1\right)}\) (22)

where \(n\) is the number of observations and \({d}_{i}\) is the difference between the ranks of the two variables for observation \(i\).
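For illustration, a minimal pandas sketch of this rank-correlation analysis, assuming the Kaggle column names used above:

```python
import pandas as pd

# Spearman's rank correlation between each dataset feature and pressure.
train = pd.read_csv("train.csv")
cols = ["R", "C", "time_step", "u_in", "u_out", "pressure"]
corr = train[cols].corr(method="spearman")
print(corr["pressure"].sort_values(ascending=False))
```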
Extracted features
Based on the previous analysis, four features were created and extracted to improve the model predictions and make the model converge faster64. The following features were extracted (a pandas sketch follows the list):

(1) Lung setting (δ) = R*C: it indicates the change in volume per change in flow.

(2) CISV: the cumulative sum of the control input for the inspiratory solenoid valve.

(3) Integral: the difference between consecutive time steps.

(4) Differential: the difference between consecutive samples.
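A minimal pandas sketch of these four features, assuming the Kaggle column names (`breath_id`, `R`, `C`, `time_step`, `u_in`); the exact formulas are paraphrased from the list above, so treat this as an interpretation rather than the authors' implementation.

```python
import pandas as pd

train = pd.read_csv("train.csv")
g = train.groupby("breath_id")

train["delta"] = train["R"] * train["C"]               # (1) lung setting δ
train["CISV"] = g["u_in"].cumsum()                     # (2) cumulative ISV input
train["integral"] = g["time_step"].diff().fillna(0.0)  # (3) step-to-step time delta
train["differential"] = g["u_in"].diff().fillna(0.0)   # (4) sample-to-sample change
```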
Development of the regression algorithm
The process of selecting a set of hyperparameters that yield optimal performance for the Long Short-Term Memory (LSTM) model on the given data is referred to as Hyperparameter Optimization (HPO). In this work, the Chimp Optimization Algorithm (ChoA) is employed to determine the best values for the LSTM Model Hyperparameters. These hyperparameters include the Number of Iterations (NI), Learning Rate (LR), Hidden Units (NHU), and Number of Layers (NL). Figure 5 portrays the flow chart that presents the LSTM forecasting model known as the LSTM-ChoA model. The process encompasses distinct steps outlined below:
Step (1): Initialize the LSTM parameters {NI, LR, NHU, NL} and the ChoA algorithm parameters, which include the chimp population \(\left\{{r}_{1},{r}_{2},l\right\}\) and the constants \(\left\{a,c,m\right\}\).
Step (2): The mean square error gained from training the LSTM network was used to compute the fitness function, which corresponds to the cost function value. This fitness value was continuously updated in real-time as the Chimp optimization algorithm progressed. Within the range of iterations, Eq. (9) was utilized to determine the position of each chimp, and subsequently, the chimps were randomly divided into independent groups.
The cost function is the mean squared error (MSE), which measures the discrepancy between real and predicted data. The MSE must be reduced during LSTM training by updating the hyperparameter values. The MSE is presented in Eq. (23):

\(\mathrm{MSE}=\frac{1}{n}\sum_{i=1}^{n}{\left({y}_{i}-{\widehat{y}}_{i}\right)}^{2}\) (23)

where \({y}_{i}\) is the actual pressure value and \({\widehat{y}}_{i}\) is the predicted value.
Step (3): Assign the best chimp in each group using Eqs. (12), (13), (14), and (15), respectively.

Step (4): If the current new position was better, the old position was updated using Eq. (20).
Step (5): The LSTM network was trained based on the optimal parameter combination, and testing samples were utilized for forecasting.
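Putting the steps together, the following is a hedged sketch of the LSTM-ChoA hyperparameter loop; it reuses the hypothetical `build_lstm` and `choa_update` helpers sketched earlier, and the ranges in `decode` are illustrative assumptions, not the paper's exact search space.

```python
import numpy as np
from sklearn.metrics import mean_squared_error

def decode(p):
    """Hypothetical mapping from a unit-cube position to {NI, LR, NHU, NL}."""
    ni = int(5 + p[0] * 45)            # training iterations/epochs: 5..50
    lr = 10 ** (-4 + 3 * p[1])         # learning rate: 1e-4..1e-1
    nhu = int(16 + p[2] * 112)         # hidden units: 16..128
    nl = int(1 + p[3] * 2)             # layers: 1..3
    return ni, lr, nhu, nl

def fitness(position, X_tr, y_tr, X_va, y_va):
    """Fitness = validation MSE of an LSTM trained with these hyperparameters."""
    ni, lr, nhu, nl = decode(position)
    model = build_lstm(n_layers=nl, n_hidden=nhu, learning_rate=lr)
    model.fit(X_tr, y_tr, epochs=ni, verbose=0)
    return mean_squared_error(y_va.ravel(), model.predict(X_va).ravel())

# Synthetic stand-in data: breaths x 80 steps x 8 features.
rng = np.random.default_rng(2)
X_tr, y_tr = rng.normal(size=(64, 80, 8)), rng.normal(size=(64, 80, 1))
X_va, y_va = rng.normal(size=(16, 80, 8)), rng.normal(size=(16, 80, 1))

chimps = rng.random((10, 4))                     # population of 10 chimps
for t in range(20):                              # ChoA iterations
    l = 2.5 * (1 - t / 20)                       # l decays linearly 2.5 -> 0
    scores = [fitness(c, X_tr, y_tr, X_va, y_va) for c in chimps]
    order = np.argsort(scores)
    attacker, barrier, chaser, driver = chimps[order[:4]]
    chimps = np.clip(
        [choa_update(c, attacker, barrier, chaser, driver, l, rng)
         for c in chimps], 0.0, 1.0)
```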
Benchmark models
The evaluation of ChoA's performance involved a comparison with several benchmark optimization methods, such as whale optimization algorithm (WOA), grey wolf optimizer (GWO)65, and particle swarm optimization (PSO)66. The initial parameter values for each optimizer can be found in Table 3.
Furthermore, the LSTM-ChoA algorithm was compared with regression models as baselines: the Random Forest (RF) regressor, Support Vector Machine (SVM) regressor, and K-nearest neighbor (KNN) regressor. The optimal hyperparameters for the baseline models were selected using grid search in the scikit-learn library59,60. Table 4 presents the search space for the hyperparameters of the regressor models, along with the hyperparameters selected through grid search.
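As an illustration, here is a minimal scikit-learn grid-search sketch for one baseline (RF); the parameter grid and toy data are placeholders, not the exact search space of Table 4.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV

# Illustrative grid for the RF regressor baseline.
param_grid = {"n_estimators": [100, 200, 500], "max_depth": [None, 10, 20]}
search = GridSearchCV(
    RandomForestRegressor(random_state=0),
    param_grid,
    scoring="neg_mean_squared_error",
    cv=5,
)

# Toy tabular data standing in for the flattened ventilator features.
rng = np.random.default_rng(3)
X, y = rng.normal(size=(200, 8)), rng.normal(size=200)
search.fit(X, y)
print(search.best_params_, -search.best_score_)
```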
Experimental simulation configuration
The methodology introduced was put into practice using Jupyter Notebook version 6.5.467 along with Python version 3.6.768. The execution took place on a personal computer equipped with an Intel Core i5 processor, 4 GB of RAM, and the Windows 10 Professional 64-bit operating system69.
Results and discussion
This section introduces the exploratory data analysis for the ventilator dataset we used and the LSTM-ChoA model results. A complete model evaluation and comparison are presented in "Experimental simulation configuration".
Figure 6 displays the correlation matrix, illustrating the correlation coefficients among the different elements of interest. A value of −1 denotes a strong negative correlation, while a value of 1 denotes a strong positive correlation; the matrix captures the rank (monotonic) correlation between variables. The diagonal values represent the dependence of each variable on itself, also known as autocorrelation.
The correlation matrix of features indicates that there is a relatively stronger correlation between airway pressure and both features (ISV and Time step) compared to the other variables. It can be explained through the principle of the ventilator. When the value of ISV changes, the air pressure will change in the patient's bronchi, corresponding to the value of the feature "Pressure" in the dataset.
Besides airway pressure, feature C also depends significantly on ISV. When we increase or decrease the opening of the inspiratory solenoid valve, the degree of lung expansion also changes (varying with the volume of air transmitted into the lungs). The remaining features have a low correlation with each other.
Figure 7 shows the pressure magnitude distributions when the expiratory valve is closed and open. From this figure, we can see that when the expiratory valve is open, the pressure distribution is left-skewed, meaning that the airway pressure lies in the lower ranges in such a case. On the other hand, when the valve is closed (the inhalation phase), the pressure lies in a wider range, which is expected as pressure variations occur during inhalation. Furthermore, notice the abrupt peaks in the distribution when the expiratory valve is closed (during inhalation), which produce small peaks in the overall distribution. These small peaks indicate a slight leak in the expiratory valve.
To illustrate how the airway pressure changes within a complete breath cycle, Fig. 8 shows the airway pressure and input valve positions (opened/closed) for four different breaths. Each breath comprises 80 recorded time steps.

During a breath cycle, when the expiratory valve is closed (inhalation), the pressure gradually increases as the inspiratory valve opening percentage increases. It is interesting to note, however, that there is a certain 'delay time' between the change of valve position (control variable) and the pressure (response). Once the pressure reaches its peak value, the expiratory valve is opened as the exhalation phase begins. The pressure then decreases rapidly until it reaches an asymptote, and this cycle repeats continuously. From this, we see that pressures at consecutive time steps are strongly correlated with each other, meaning that sequential treatment of the data can prove advantageous.
Figure 9 shows the effects of R–C pairs on the airway pressure distribution. There are nine combinations of R and C: C can be 10, 20, or 50, and R can be 5, 20, or 50. It is observed that, with different R–C pairs, the pressure distribution varies for both the closed and open states of the output valve. Therefore, adding the R–C pair as a new feature in the training dataset may have a positive impact on the LSTM-ChoA model's ability to predict ventilator pressure.
This study has extracted four features (δ, CISV, Integral, and Differential) to improve the model predictions. Figure 10 shows the airway pressure distribution based on extracted features and nine R-C combinations.
It is observed in Fig. 10a–c that the air pressure variation becomes apparent and certain distinct clusters are formed. In contrast, in Fig. 10d, the Differential feature fails to separate our dataset into distinct clusters. Therefore, the features (δ, CISV, and Integral) will be preserved, while the Differential feature will be discarded.
For the prediction of airway pressure, two experiments were conducted in this study. In the first experiment, four metaheuristic optimization models (PSO, GWO, WOA, and ChoA) were applied to find efficient LSTM hyperparameters (NI, LR, NHU, and NL) that better improve model accuracy. In the second experiment, the prediction efficiency of LSTM-ChoA was compared with other regressor models. In both experiments, the training set accounts for 80% of the entire ventilator dataset and is used to establish the regressor models; the remaining 20% is the test set used to measure the performance of the selected hyperparameters.
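A minimal sketch of this 80/20 split, assuming the Kaggle column names; splitting by breath_id keeps each breath's 80 time steps together in one partition.

```python
import numpy as np
import pandas as pd

train = pd.read_csv("train.csv")
ids = train["breath_id"].unique()
rng = np.random.default_rng(42)
rng.shuffle(ids)
cut = int(0.8 * len(ids))                             # 80% of breaths
train_df = train[train["breath_id"].isin(ids[:cut])]  # training set
test_df = train[train["breath_id"].isin(ids[cut:])]   # held-out test set
```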
Table 5 showcases the parameter values associated with the LSTM and the corresponding computed errors derived from the initial experiment. The outcomes clearly illustrate that optimizing hyperparameters using ChoA can substantially enhance the regression performance of the LSTM model. ChoA outperformed the comparative optimization techniques PSO, GWO, and WOA, achieving the lowest MSE value (0.0852). LSTM-PSO and LSTM-WOA have similar MSE values.
Furthermore, the overall time required (referred to as computational time or CT) to accomplish a hyperparameter optimization procedure using fivefold cross-validation is employed as a metric for assessing model efficiency. It has been observed that the computational time of ChoA is generally lengthier compared to other optimization methods, whereas WOA consistently achieves the shortest time.
Figure 11 depicts the convergence curves of the LSTM model employing the four distinct optimization algorithms. The figure reveals that the utilization of ChoA resulted in superior performance for the LSTM model. Specifically, the LSTM-ChoA model demonstrated better outcomes than the other models, exhibiting faster and deeper convergence towards the optimal solution. The LSTM-GWO approach also exhibited competitive performance in this study.
Considering that the regression outcome of each model is the average mean squared error (MSE) derived from 10 independent iterations, a logical step was to employ an ANOVA test at a 95% confidence level. This test aimed to determine whether the regression performance achieved by the proposed LSTM-ChoA method exhibited a noteworthy superiority over other approaches. In this statistical evaluation, the null hypothesis suggested that the performance of the distinct methods was comparable. Should the calculated p-value fall below 0.05, it would signify the rejection of the null hypothesis, indicating a significant divergence in performance between the methods being compared.
The outcomes of the ANOVA, along with corresponding p-values, are presented in Table 6, with LSTM-ChoA used as the benchmark method. It is evident from the results that the regression performance of LSTM-ChoA displayed a notable improvement (p-value < 0.05) across the 10 independent iterations.
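For illustration, a minimal SciPy sketch of such a one-way ANOVA; the per-run MSE arrays below are placeholders shaped to the reported trends, not the paper's actual results.

```python
import numpy as np
from scipy.stats import f_oneway

# Placeholder per-run MSE values for each optimizer (10 independent runs).
mse_choa = np.array([0.085, 0.086, 0.084, 0.085, 0.087,
                     0.085, 0.086, 0.084, 0.085, 0.086])
mse_gwo = mse_choa * 1.15   # illustrative: ~14.8% higher MSE than ChoA
mse_pso = mse_choa * 2.5    # illustrative
mse_woa = mse_choa * 2.5    # illustrative

stat, p = f_oneway(mse_choa, mse_gwo, mse_pso, mse_woa)
print(f"F = {stat:.2f}, p = {p:.4f}")  # p < 0.05 rejects the null hypothesis
```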
In order to demonstrate the superior performance of the LSTM-ChoA prediction model, it was compared to other regression models such as RF, SVM, and KNN in the second experiment. Table 7 presents a summary of the results obtained from these predictive models in terms of mean squared error (MSE). As mentioned earlier, the hyperparameters for these models were determined using a grid search. It is evident from the table that the LSTM-ChoA model outperforms the other regression models by achieving the lowest MSE, highlighting its effectiveness in making accurate predictions.
Figure 12 displays a contrast between the actual values and the predicted values for airway pressure. The figure clearly reveals that the prediction line generated by the RF model notably diverges from the actual pressure line, underscoring its subpar predictive performance. In contrast, the prediction lines produced by the LSTM-ChoA model closely align with the actual lines, indicating its ability to accurately capture the underlying patterns and trends in the data.
Finally, our proposed prediction model LSTM-ChoA is compared with comparative studies1,4,70, chosen because they used the same dataset with different prediction techniques. As evidenced by Table 8, the proposed model achieves better prediction results than the other contemporary works for predicting ventilator pressure. This further demonstrates the feasibility and effectiveness of the proposed approach.
The LSTM model built on ChoA provides many benefits: the forecasting accuracy for ventilator pressure prediction increased significantly compared with the other models. The LSTM-ChoA model can be summed up as follows:
(1) The Chimp Optimization Algorithm played a vital role in optimizing the hyperparameters of the LSTM model, leading to a substantial enhancement in forecasting accuracy. This improvement is evident from the results presented in Table 4. The MSE decreased by about 14.8% when using ChoA compared with GWO, and by about 60% compared with PSO and WOA.

(2) The results in Fig. 11 show that the LSTM-ChoA algorithm has the best convergence curve, followed by LSTM-GWO and LSTM-PSO, with LSTM-WOA last.

(3) The ANOVA results with p-values in Table 5 show that the regression performance of LSTM-ChoA was significantly better (p-value < 0.05) over the ten independent runs.

(4) The assessment of the proposed algorithm encompassed various benchmark methods, including RF, SVM, and KNN. Reviewing Table 6 and Fig. 12 makes it evident that the LSTM-ChoA model surpasses the alternative regression models, attaining the lowest mean squared error (MSE).
Study limitation
In this study, while we have made significant progress in predicting airway pressure in ventilator respiratory circuits, it's important to acknowledge certain limitations:
1. Our study heavily relies on the ventilator pressure prediction dataset obtained from the Google Brain public dataset on Kaggle. The limitations of this dataset, such as its size, diversity, and representativeness of real-world clinical scenarios, could impact the generalizability of our findings to broader healthcare settings.

2. While ChoA helps optimize the hyperparameters of the LSTM model, the choice of hyperparameters and the tuning process is not exhaustively discussed. A more comprehensive exploration of hyperparameter sensitivity could further enhance our model's performance.

3. Although we mention feature extraction, we do not delve into the preprocessing steps in great detail. High-quality data preprocessing is crucial in building robust predictive models, and its impact on our model's performance deserves more attention.

4. Our study does not extensively discuss the generalizability of the model to other clinical settings or diverse patient populations. This raises questions about the external validity of our findings.

5. Our study does not cover whether the model's predictions have been validated in actual clinical practice. The practical applicability and safety of the model in real-world healthcare scenarios remain a subject of further investigation.
These limitations highlight areas where future research can make advancements, and they are essential for providing a more comprehensive and robust solution for airway pressure prediction in ventilator respiratory circuits.
Conclusion
In this research paper, we made use of the ventilator pressure prediction dataset sourced from Google Brain on Kaggle. This dataset originates from an open-source ventilation system created by researchers at Princeton Lab. Our proposed approach for predicting airway pressure in ventilator respiratory circuits involves a comprehensive analysis. This entails exploratory data analysis, feature extraction, regression model creation, performance analysis, and comparison of the outcomes with reference models.
We discovered a significant relationship between airway pressure and parameters like ISV and Time step during the exploratory data analysis. The pressure distribution also showed distinct peaks when the expiratory valve was closed, indicating a minor valve leak, and feature C showed a considerable dependence on ISV. We extracted four features (δ, CISV, Integral, and Differential) to improve the prediction model. This investigation strengthens the case for the efficacy of sequential data treatment.
The LSTM model hyperparameters (NI, LR, NHU, and NL) were optimized using the Chimp Optimization Algorithm (ChoA). ChoA's effectiveness was assessed and contrasted with that of other benchmark optimization algorithms, namely PSO, GWO, and WOA. The MSE decreased by about 14.8% when using ChoA compared with GWO, and by about 60% compared with PSO and WOA. The ANOVA results with p-values in Table 5 show that the regression performance of LSTM-ChoA was significantly better (p-value < 0.05) over the ten independent runs. Moreover, the LSTM-ChoA model was compared to regression models (RF, SVM, and KNN) as baseline models. Through extensive experimentation, ChoA demonstrated superior performance in finding the optimal LSTM model hyperparameters. Consequently, the LSTM-ChoA model exhibited remarkable prediction capacity and achieved the highest accuracy among all the tested models. Hence, it can be deduced that the suggested LSTM-ChoA model surpasses the alternative models in terms of prediction accuracy.
The preceding overview and analysis represent the insights derived from this experiment. By comprehending the principles of the Chimp Optimization Algorithm, enhancements were made to the original LSTM algorithm, resulting in improved prediction efficiency. Consequently, future research will be valuable for exploring its application in more intricate domains, such as multi-objective optimization (Supplementary Information S1).
Data availability
The datasets analyzed during the current study are available on the Kaggle website, [https://www.kaggle.com/competitions/ventilator-pressure-prediction].
References
Alam, M. J., Rabbi, J., & Ahamed, S. Forecasting pressure of ventilator using a hybrid deep learning model built with Bi-LSTM and Bi-GRU to simulate ventilation. arXiv Prepr. arXiv:2302.09691 (2023).
Strodthoff, C., Frerichs, I., Weiler, N., & Bergh, B. Predicting and simulating effects of PEEP changes with machine learning. medRxiv, pp. 2001–2021 (2021).
Schalekamp, S. et al. Model-based prediction of critical illness in hospitalized patients with COVID-19. Radiology 298(1), E46–E54 (2021).
Belgaid, A. Deep sequence modeling for pressure controlled mechanical ventilation. medRxiv, pp. 2003–2022 (2022).
Zhang, K., Karanth, S., Patel, B., Murphy, R., & Jiang, X. Real-time prediction for mechanical ventilation in COVID-19 patients using a multi-task gaussian process multi-objective self-attention network. arXiv Prepr. arXiv:2102.01147 (2021).
Arellano, A., Bustamante, E., Garza, C., & Ponce, H. Ventilator pressure prediction using a regularized regression model. In Advances in Computational Intelligence: 21st Mexican International Conference on Artificial Intelligence, MICAI 2022, Monterrey, Mexico, October 24--29, 2022, Proceedings, Part II, pp. 348–355 (2022).
Meena, P., Sharma, P., & Sharma, K. Optimizing control of IOT device using traditional machine learning models and deep neural networks. In 2022 6th International Conference on Computing Methodologies and Communication (ICCMC), pp. 445–451 (2022).
Siami-Namini, S., Tavakoli, N., & Namin, A. S. The performance of LSTM and BiLSTM in forecasting time series. In 2019 IEEE International Conference on Big Data (Big Data), pp. 3285–3292 (2019).
Zhang, L. et al. Detection of patient-ventilator asynchrony from mechanical ventilation waveforms using a two-layer long short-term memory neural network. Comput. Biol. Med. 120, 103721 (2020).
Kulkarni, A. R. et al. Deep learning model to predict the need for mechanical ventilation using chest X-ray images in hospitalised patients with COVID-19. BMJ Innov. 7(2), 1 (2021).
Jia, Y., Kaul, C., Lawton, T., Murray-Smith, R. & Habli, I. Prediction of weaning from mechanical ventilation using convolutional neural networks. Artif. Intell. Med. 117, 102087 (2021).
Chang, W. et al. A machine-learning method of predicting vital capacity plateau value for ventilatory pump failure based on data mining. Healthcare 9(10), 1306 (2021).
Rehm, G. B., Kuhn, B. T., Nguyen, J., Anderson, N. R., Chuah, C.-N., & Adams, J. Y. Improving mechanical ventilator clinical decision support systems with a machine learning classifier for determining ventilator mode. MedInfo, pp. 318–322 (2019).
Rahmani Hosseinabadi, A. A., Vahidi, J., Saemi, B., Sangaiah, A. K. & Elhoseny, M. Extended genetic algorithm for solving open-shop scheduling problem. Soft Comput. 23, 5099–5116 (2019).
Wei, L., Zhang, Z., Zhang, D. & Leung, S. C. H. A simulated annealing algorithm for the capacitated vehicle routing problem with two-dimensional loading constraints. Eur. J. Oper. Res. 265(3), 843–859 (2018).
Hossain, S. I., Akhand, M. A. H., Shuvo, M. I. R., Siddique, N. & Adeli, H. Optimization of university course scheduling problem using particle swarm optimization with selective search. Expert Syst. Appl. 127, 9–24 (2019).
Choong, S. S., Wong, L.-P. & Lim, C. P. An artificial bee colony algorithm with a modified choice function for the traveling salesman problem. Swarm Evol. Comput. 44, 622–635 (2019).
Mohamed, A. W. & Mohamed, A. K. Adaptive guided differential evolution algorithm with novel mutation for numerical optimization. Int. J. Mach. Learn. Cybern. 10, 253–277 (2019).
Boryczka, U. & Szwarc, K. The harmony search algorithm with additional improvement of harmony memory for asymmetric traveling salesman problem. Expert Syst. Appl. 122, 43–53 (2019).
Zouache, D., Moussaoui, A. & Ben Abdelaziz, F. A cooperative swarm intelligence algorithm for multi-objective discrete optimization with application to the knapsack problem. Eur. J. Oper. Res. 264(1), 74–88 (2018).
Wang, G. & Guo, L. A novel hybrid bat algorithm with harmony search for global numerical optimization. J. Appl. Math. 1, 1 (2013).
Lazim, D., Zain, A. M., Bahari, M. & Omar, A. H. Review of modified and hybrid flower pollination algorithms for solving optimization problems. Artif. Intell. Rev. 52, 1547–1577 (2019).
Sayed, G. I., Tharwat, A. & Hassanien, A. E. Chaotic dragonfly algorithm: An improved metaheuristic algorithm for feature selection. Appl. Intell. 49, 188–205 (2019).
Hatta, N. M., Zain, A. M., Sallehuddin, R., Shayfull, Z. & Yusoff, Y. Recent studies on optimisation method of Grey Wolf Optimiser (GWO): A review (2014–2017). Artif. Intell. Rev. 52, 2651–2683 (2019).
Abd El Aziz, M., Ewees, A. A. & Hassanien, A. E. Whale optimization algorithm and moth-flame optimization for multilevel thresholding image segmentation. Expert Syst. Appl. 83, 242–256 (2017).
Wang, G.-G., Deb, S. & Coelho, L. D. S. Earthworm optimisation algorithm: a bio-inspired metaheuristic algorithm for global optimisation problems. Int. J. Bio-Inspired Comput. 12(1), 1–22 (2018).
Wang, G.-G., Deb, S., Gao, X.-Z. & Coelho, L. D. S. A new metaheuristic optimisation algorithm motivated by elephant herding behaviour. Int. J. Bio-Inspired Comput. 8(6), 394–409 (2016).
Xue, J., Wu, Y., Shi, Y., & Cheng, S. Brain storm optimization algorithm for multi-objective optimization problems. In Advances in Swarm Intelligence: Third International Conference, ICSI 2012, Shenzhen, China, June 17-20, 2012 Proceedings, Part I 3, pp. 513–519 (2012).
Tan, Y., & Zhu, Y. Fireworks algorithm for optimization. In Advances in Swarm Intelligence: First International Conference, ICSI 2010, Beijing, China, June 12-15, 2010, Proceedings, Part I 1, pp. 355–364 (2010).
Wang, G.-G. Moth search algorithm: A bio-inspired metaheuristic algorithm for global optimization problems. Memetic Comput. 10(2), 151–164 (2018).
Jain, M., Singh, V. & Rani, A. A novel nature-inspired algorithm for optimization: Squirrel search algorithm. Swarm Evol. Comput. 44, 148–175 (2019).
Mirjalili, S. et al. Salp Swarm Algorithm: A bio-inspired optimizer for engineering design problems. Adv. Eng. Softw. 114, 163–191 (2017).
Pascanu, R., Gulcehre, C., Cho, K., & Bengio, Y. How to construct deep recurrent neural networks. arXiv Prepr. arXiv:1312.6026 (2013).
Glorot, X., & Bengio, Y. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the thirteenth international conference on artificial intelligence and statistics, pp. 249–256 (2010).
Pascanu, R., Mikolov, T., & Bengio, Y. On the difficulty of training recurrent neural networks. In International conference on machine learning, pp. 1310–1318 (2013).
Kingma, D. P., & Ba, J. Adam: A method for stochastic optimization. arXiv Prepr. arXiv:1412.6980 (2014).
Bengio, S., Vinyals, O., Jaitly, N. & Shazeer, N. Scheduled sampling for sequence prediction with recurrent neural networks. Adv. Neural Inf. Process. Syst. 28, 1 (2015).
Ba, J. L., Kiros, J. R., & Hinton, G. E. Layer normalization. arXiv Prepr. arXiv:1607.06450 (2016).
Huang, G., Sun, Y., Liu, Z., Sedra, D., & Weinberger, K. Q. Deep networks with stochastic depth. In Computer Vision–ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, October 11–14, 2016, Proceedings, Part IV 14, pp. 646–661 (2016).
Bottou, L., Curtis, F. E. & Nocedal, J. Optimization methods for large-scale machine learning. SIAM Rev. 60(2), 223–311 (2018).
Tuerxun, W. et al. A wind power forecasting model using LSTM optimized by the modified bald eagle search algorithm. Energies 15(6), 2031 (2022).
“Google Brain - Ventilator Pressure Prediction,” 2021. https://www.kaggle.com/competitions/ventilator-pressure-prediction/data (accessed Jun. 23, 2023).
Arvind, V., Kim, J. S., Cho, B. H., Geng, E. & Cho, S. K. Development of a machine learning algorithm to predict intubation among hospitalized patients with COVID-19. J. Crit. Care 62, 25–30 (2021).
Xu, Y. et al. Research on particle swarm optimization in LSTM neural networks for rainfall-runoff simulation. J. Hydrol. 608, 127553 (2022).
Deif, M. A. et al. A deep bidirectional recurrent neural network for identification of SARS-CoV-2 from viral genome sequences. Math. Biosci. Eng. 18, 6 (2021).
Deif, M. A., Hammam, R. E. & Solyman, A. A. A. Gradient boosting machine based on PSO for prediction of leukemia after a breast cancer diagnosis. Int. J. Adv. Sci. Eng. Inf. Technol. 11(12), 508–515 (2021).
Deif, M. A., Solyman, A. A. A. & Hammam, R. E. ARIMA model estimation based on genetic algorithm for COVID-19 mortality rates. Int. J. Inf. Technol. Decis. Mak. 20(6), 1775–1798 (2021).
Wang, S., Wang, B., Zhang, Z., Heidari, A. A. & Chen, H. Class-aware sample reweighting optimal transport for multi-source domain adaptation. Neurocomputing 523, 213–223 (2023).
Li, J. et al. Eres-UNet++: Liver CT image segmentation based on high-efficiency channel attention and Res-UNet++. Comput. Biol. Med. 158, 106501 (2023).
Liu, G. et al. Cx22: A new publicly available dataset for deep learning-based segmentation of cervical cytology images. Comput. Biol. Med. 150, 106194 (2022).
Khishe, M. & Mosavi, M. R. Chimp optimization algorithm. Expert Syst. Appl. 149, 113338 (2020).
Pashaei, E. & Pashaei, E. An efficient binary chimp optimization algorithm for feature selection in biomedical data classification. Neural Comput. Appl. 34(8), 6427–6451 (2022).
Du, N., Zhou, Y., Deng, W. & Luo, Q. Improved chimp optimization algorithm for three-dimensional path planning problem. Multimed. Tools Appl. 81(19), 27397–27422 (2022).
Du, N., Luo, Q., Du, Y. & Zhou, Y. Color image enhancement: A metaheuristic chimp optimization algorithm. Neural Process. Lett. 54(6), 4769–4808 (2022).
Wang, Y., Liu, H., Ding, G. & Tu, L. Adaptive chimp optimization algorithm with chaotic map for global numerical optimization problems. J. Supercomput. 79(6), 6507–6537 (2023).
Bo, Q., Cheng, W. & Khishe, M. Evolving chimp optimization algorithm by weighted opposition-based technique and greedy search for multimodal engineering problems. Appl. Soft Comput. 132, 109869 (2023).
Sharma, A. & Nanda, S. J. A multi-objective chimp optimization algorithm for seismicity de-clustering. Appl. Soft Comput. 121, 108742 (2022).
Cai, C. et al. Improved deep convolutional neural networks using chimp optimization algorithm for Covid19 diagnosis from the X-ray images. Expert Syst. Appl. 213, 119206 (2023).
Deif, M. A., Hammam, R. E., Solyman, A., Alsharif, M. H. & Uthansakul, P. Automated triage system for intensive care admissions during the COVID-19 pandemic using hybrid XGBoost-AHP approach. Sensors 21(19), 6379 (2021).
Deif, M. A., Solyman, A. A. A., Alsharif, M. H., Jung, S. & Hwang, E. A hybrid multi-objective optimizer-based SVM model for enhancing numerical weather prediction: a study for the Seoul metropolitan area. Sustainability 14(1), 296 (2021).
Hammam, R. E., et al. Prediction of wear rates of UHMWPE bearing in hip joint prosthesis with support vector model and grey wolf optimization (2022).
Deif, M. A., Hammam, R. E., Hammam, R. & Solyman, A. Adaptive Neuro-fuzzy inference system (ANFIS) for rapid diagnosis of COVID-19 cases based on routine blood tests. Int. J. Intell. Eng. Syst. 14(2), 178–189 (2021).
Ahmed, Q. I., Attar, H., Amer, A., Deif, M. A. & Solyman, A. A. A. Development of a hybrid support vector machine with grey wolf optimization algorithm for detection of the solar power plants anomalies. Systems 11(5), 237 (2023).
Deif, M. A. et al. A new feature selection method based on hybrid approach for colorectal cancer histology classification. Wirel. Commun. Mob. Comput. 1, 1 (2022).
Deif, M. A. et al. Diagnosis of Oral squamous cell carcinoma using deep neural networks and binary particle swarm optimization on histopathological images: An AIoMT approach. Comput. Intell. Neurosci. 1, 1 (2022).
Baghdadi, N., Maklad, A. S., Malki, A. & Deif, M. A. Reliable sarcoidosis detection using chest X-rays with efficientnets and stain-normalization techniques. Sensors 22(10), 3846 (2022).
Deif, M. A. & Hammam, R. E. Skin lesions classification based on deep learning approach. J. Clin. Eng. 45(3), 155–161 (2020).
Mokhtar, E. M. O. & Deif, M. A. Towards a self-sustained house: development of an analytical hierarchy process system for evaluating the performance of self-sustained houses. Eng. J. 2, 2 (2023).
Deif, M. A. & Eldosoky, M. A. A. Adaptive neuro-fuzzy inference system for classification of urodynamic test. Int. J. Comput. Appl. 118, 16 (2015).
Sable, N. P., Wanve, O., Singh, A., Wable, S., & Hanabar, Y. Pressure prediction system in lung circuit using deep learning. In ICT with Intelligent Applications: Proceedings of ICTIS 2022, Volume 1, pp. 605–615 (Springer, 2022).
Acknowledgements
The authors extend their appreciation to Princess Nourah bint Abdulrahman University Researchers Supporting Project for funding this research work through the project number (PNURSP2023R279), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.
Funding
Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2023R279), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.
Author information
Contributions
Formulation, F.R.A. and M.A.D.; methodology, M.A.D. and S.M.F.; software, M.A.D.; validation and investigation, S.M.F. and F.R.A.; formal analysis, S.A.A. and S.M.F.; resources, F.R.A. and S.M.F.; preparing original draft, M.A.D. and S.M.F.; review and editing of writing, S.A.A. and S.M.F.; supervision, S.A.A.; funding and visualization, S.A.A. All authors have read and agreed to the published version of the manuscript.
Ethics declarations
Competing interests
The authors declare no competing interests.
Additional information
Publisher's note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Supplementary Information
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Ahmed, F.R., Alsenany, S.A., Abdelaliem, S.M.F. et al. Development of a hybrid LSTM with chimp optimization algorithm for the pressure ventilator prediction. Sci Rep 13, 20927 (2023). https://doi.org/10.1038/s41598-023-47837-8