Abstract
This work investigates the use of ensemble machine learning methods for forecasting the distribution of chemical concentration in a membrane separation system for removal of an impurity from water. A membrane contactor was employed for the separation and mass transfer analysis in the removal of organic molecules from water, and the process was simulated by combining computational fluid dynamics (CFD) with machine learning. Using a dataset of over 25,000 data points with r (m) and z (m) as inputs, four tree-based learning algorithms were employed: Decision Tree (DT), Extremely Randomized Trees (ET), Random Forest (RF), and Histogram-based Gradient Boosting Regression (HBGB). Hyper-parameter optimization was conducted using Successive Halving, a method that allocates computational resources efficiently to the most promising configurations. The ET model emerged as the top performer, with an R² of 0.99674, an RMSE of 37.0212 mol/m³, and an MAE of 19.6784 mol/m³. The results emphasize the capability of ensemble machine learning techniques to accurately estimate solute concentration profiles in membrane engineering applications.
Introduction
Different separation methods, such as membrane processes and adsorption, can be applied for the removal of water impurities in wastewater treatment1,2. Removal of organic molecules from water and wastewater streams can be carried out using membrane processes such as membrane contactors, which provide a high surface area for separation. This process relies on contacting two phases through a porous membrane, and separation is achieved by mass transfer through the tube side, the membrane phase, and the shell side3,4,5. Membrane separation is versatile and can be optimized by manipulating process parameters as well as membrane properties6. It has been reported that the main process parameters for membrane contactors are the feed and solvent flow rates and the temperature, while the key membrane properties are porosity, pore size, tortuosity, and material7,8,9. Computational models can be utilized to quantify the influence of these parameters on membrane separation efficiency for water treatment applications.
Mass transfer is the major phenomenon responsible for separation and removal of solutes from water using membrane contactors, and both convective and diffusive mechanisms should be taken into account when analyzing separation efficiency. In mass transfer modeling of membrane contactors, chemical reactions can be included when the feed reacts with the solvent, but they are usually neglected for physical separation. The controlling phase for transport of species from the feed to the shell side can be determined using computational methods such as computational fluid dynamics (CFD). CFD has been used extensively for simulation of membrane contactors in gas- and liquid-phase separation; it is based on numerical solution of the mass, momentum, and energy equations in the three compartments of a membrane module10,11. The data produced by CFD can be combined with machine learning, a new approach in membrane modeling. Once the hybrid CFD-machine learning model has been derived, it can be used for prediction and analysis of transport phenomena in membrane systems, to enhance separation efficiency, and to implement sophisticated process control. Various machine learning algorithms can be tested to find the best one for predicting the membrane contactor process. To address this research gap, a methodology based on machine learning and CFD is proposed in this work for modeling membrane contactors in liquid-phase removal applicable to water treatment.
Machine learning methods such as ensembles have revolutionized the field of predictive modeling, offering enhanced accuracy and robustness in analyzing complex phenomena in separation and environmental science12. Some separation processes, such as adsorption and membrane separation, can be simulated using mechanistic modeling and machine learning algorithms13,14,15. In chemical engineering, the ability to predict chemical concentration distributions within reactors and separators is crucial for optimizing processes and ensuring efficiency. This study examines the effectiveness of tree-based ensemble techniques, alongside a single Decision Tree (DT), in forecasting solute concentrations in a membrane contactor process, utilizing a rich dataset comprising spatial coordinates and substance concentrations. The data are obtained as the output of a CFD simulation of mass transfer in a hollow-fiber membrane contactor. By employing four tree-based learning algorithms - DT, Extremely Randomized Trees (ET), Random Forest (RF), and Histogram-based Gradient Boosting Regression (HBGB) - this research aims to identify the most adept model for predicting chemical concentration profiles. Through hyperparameter optimization and rigorous outlier detection, the study seeks to enhance the reliability and integrity of the predictive models, ultimately providing valuable insights for practical applications in chemical engineering processes.
The DT model creates a hierarchical structure by repeatedly dividing the data into smaller groups based on the most important characteristic, with the goal of reducing impurity in each division. This makes the model easy to interpret and appropriate for comprehending intricate connections. ET, in contrast, utilizes random feature selection and thresholds for node splitting. This technique generates a group of randomized DTs that collectively minimize overfitting and enhance robustness. It is especially effective in handling noisy data. RF enhances this concept by combining predictions from multiple DTs, each trained on a random subset of features and data instances. This leads to better generalization and decreased variance. HBGB constructs DTs in a sequential manner, where each subsequent tree focuses on the residuals of the previous trees. This approach effectively reduces errors by iteratively adjusting to the negative gradients of a loss function. As a result, it achieves excellent predictive performance through gradient boosting16.
Dataset
The dataset consists of more than 25,000 data points, including the r (m) and z (m) coordinates as well as the concentration (C) of a substance measured in moles per cubic meter (mol/m³). The data are the outcome of a CFD simulation of mass transfer in a membrane contactor14. A single hollow fiber was considered for modeling, and the mass transfer equations (diffusion-convection model) were solved for all sections of the module via a finite element scheme17,18. Laminar flow was assumed with an inlet concentration boundary condition, while convective flux was used as the outlet boundary condition. Running the CFD simulation produced the solute concentration distribution in the shell side of the membrane contactor, in the form of C versus the cylindrical coordinates r and z14. The statistics of the dataset are shown in Table 1, and box plots of the variables are displayed in Fig. 1. The raw data, reporting the concentration distribution of the solute in the shell side of the membrane contactor at various locations, are provided in the Supplementary file.
In this research, influential data points were identified and excluded using Cook’s distance (Fig. 2), a well-known method in regression analysis for detecting observations that may excessively affect model estimates. This technique evaluates how the omission of a specific data point alters the regression model’s overall fit, thus helping to pinpoint outliers or highly influential cases. Through the computation of Cook’s distance for each data point in the dataset, observations exerting substantial influence on the coefficients and predictions of the regression model were successfully identified19,20. After identifying these outliers, they were thoroughly examined to evaluate their accuracy and potential influence on the model’s performance. Through this rigorous outlier detection process, the robustness and reliability of the ensemble machine learning models in predicting chemical concentration distributions were enhanced, thereby strengthening the integrity of the findings and their applicability to chemical engineering processes.
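The screening procedure above can be sketched in NumPy: fit a regression surrogate, compute each point's Cook's distance from its residual and leverage, and flag high-distance points. The linear surrogate, the synthetic data, and the common 4/n cutoff are illustrative assumptions, not the authors' exact setup.

```python
# Cook's distance screening on a synthetic regression of C on (r, z).
import numpy as np

rng = np.random.default_rng(1)
n = 500
X = rng.uniform(size=(n, 2))                     # stand-in for (r, z)
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(0, 0.1, n)
y[:5] += 5.0                                     # inject a few gross outliers

Xd = np.column_stack([np.ones(n), X])            # design matrix with intercept
p = Xd.shape[1]
H = Xd @ np.linalg.solve(Xd.T @ Xd, Xd.T)        # hat matrix
h = np.diag(H)                                   # leverages
beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
resid = y - Xd @ beta
s2 = resid @ resid / (n - p)                     # residual variance estimate

# Cook's distance: D_i = e_i^2 / (p * s^2) * h_ii / (1 - h_ii)^2
cooks_d = resid**2 / (p * s2) * h / (1.0 - h) ** 2

mask = cooks_d < 4.0 / n                         # rule-of-thumb cutoff (assumed)
print(f"flagged {int((~mask).sum())} influential points out of {n}")
```

The injected outliers receive Cook's distances far above the cutoff, so filtering on `mask` removes them before model training.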
Methodology
Decision tree (DT)
DT Regression serves as an ML model employed for forecasting continuous numerical values. The algorithm builds a hierarchical model that divides the dataset into segments guided by the values of specific features. This process generates decision rules that can be used to predict the target variable21.
DT regression entails the creation of a decision tree through iterative division of the dataset, using the values of various features as splitting criteria. At every internal node, the data is divided by selecting a feature and a corresponding threshold. The splitting criteria are commonly determined by metrics such as MSE or variance reduction. The procedure continues until a preset stopping criterion is met, such as reaching the maximum depth or a minimum number of instances in a leaf (terminal) node22.
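The stopping criteria mentioned above map directly onto regressor parameters in scikit-learn. The sketch below compares an unpruned tree with one bounded by a maximum depth and a minimum leaf size; the limits and data are illustrative, not the tuned values of Table 2.

```python
# Bounding tree growth with max_depth and min_samples_leaf.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(2)
X = rng.uniform(size=(1000, 2))
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2

unpruned = DecisionTreeRegressor(random_state=0).fit(X, y)
bounded = DecisionTreeRegressor(max_depth=4, min_samples_leaf=20,
                                random_state=0).fit(X, y)
print("unpruned depth:", unpruned.get_depth())
print("bounded depth: ", bounded.get_depth())    # capped at 4
```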
Random forest (RF)
The RF utilizes voting to improve the performance of learners consisting of multiple base trees. RF is particularly favored due to its ability to predict numerous outcomes with minimal parameters. It excels in handling high-dimensional feature spaces and small sample sizes accurately, making it a popular choice. Moreover, its parallelizability allows it to efficiently handle large-scale systems in real-world applications23,24.
To build an RF model, the original dataset is resampled into N instance sets using a bootstrapping technique, and an unpruned regression tree is grown for each bootstrapped sample. Instead of using all available predictors, a random subset of K predictors is chosen to perform the split at each node. This process is repeated until the desired ensemble size is reached. Unobserved data are then estimated by combining the predictions of the C trees. RF achieves maximum tree diversity and minimum model variance by constructing decision trees from various training subsets. The RF regression predictor is represented by the equation25:

$$\hat{y}\left(x\right)=\frac{1}{C}\sum_{i=1}^{C}T_{i}\left(x\right)$$

Here, C denotes the number of decision trees, x represents the data point, and \(T_{i}\left(x\right)\) is the prediction of the i-th individual decision tree.
RF facilitates the estimation of the out-of-bag (OOB) error by evaluating each tree on the instances not selected during the bagging process. This provides an unbiased estimate of the generalization error without relying on external data. Furthermore, each input variable is assigned an importance score: the model computes the mean reduction in performance when a single input variable is permuted while all other variables are kept unchanged.
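Both ideas above, the OOB error estimate and the permutation-style variable importance, can be sketched with scikit-learn; the synthetic data stand in for the (r, z) to C mapping and are not the paper's dataset.

```python
# OOB R^2 and permutation importances for a random forest regressor.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(3)
X = rng.uniform(size=(1500, 2))                  # stand-in for (r, z)
y = np.exp(-4 * X[:, 0]) + 0.1 * X[:, 1]         # first input dominates

rf = RandomForestRegressor(n_estimators=200, oob_score=True,
                           random_state=0).fit(X, y)
print(f"OOB R^2 (no external test set needed): {rf.oob_score_:.3f}")

# Mean drop in score when one input is shuffled while the other is fixed
imp = permutation_importance(rf, X, y, n_repeats=5, random_state=0)
print("permutation importances:", imp.importances_mean)
```

Because the synthetic response depends mostly on the first feature, its permutation importance comes out much larger than the second's.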
Extremely randomized trees (ET)
The ET model, also known as extra trees, was introduced by Geurts et al.26 and belongs to the family of decision tree-based models. Since its inception, ET has often been regarded as an enhanced iteration of the RF model, although notable distinctions between the two have been emphasized by various researchers. Firstly, ET constructs trees utilizing all training patterns with varied parameters, whereas RF employs a bagging procedure. Secondly, while the RF model selects the best split, ET randomly chooses node splits during each tree's construction. Notably, the ET model has demonstrated significant improvements, particularly in reducing variance while only marginally increasing bias, all while maintaining low computational cost. From a mathematical perspective, ET consists of a set of decision trees T, where each tree \(t\in 1\dots T\) individually utilizes all training patterns to construct its decision or regression tree.
Let the set of training patterns be denoted \(\mathcal{D}=\{(x_{1},y_{1}),\dots,(x_{n},y_{n})\}\), where \(x_{i}\) represents the feature vectors and \(y_{i}\) the corresponding target values. For each tree t, ET utilizes the entire dataset \(\mathcal{D}\) and employs varying parameters to construct decision or regression trees.
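The two distinctions noted above are visible directly in scikit-learn's defaults: ExtraTreesRegressor grows each tree on the full training set (no bootstrapping) and draws split thresholds at random, whereas RandomForestRegressor bootstraps samples and searches for the best split. A minimal sketch:

```python
# Default sampling behavior of ET vs. RF in scikit-learn.
from sklearn.ensemble import ExtraTreesRegressor, RandomForestRegressor

et = ExtraTreesRegressor(n_estimators=100, random_state=0)
rf = RandomForestRegressor(n_estimators=100, random_state=0)
print("ET bootstraps:", et.bootstrap)   # False: whole dataset per tree
print("RF bootstraps:", rf.bootstrap)   # True: bagged samples per tree
```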
Histogram-based gradient boosting regression (HBGB)
HBGB is a powerful ensemble learning technique that combines the principles of gradient boosting with histogram-based algorithms for efficient computation. Unlike traditional gradient boosting, which typically utilizes decision trees as base learners, HBGB constructs histograms of feature values to accelerate the training process while maintaining predictive accuracy27,28. HBGB minimizes a loss function \(L(y,F(x))\), where \(y\) is the true value and \(F(x)\) is the predicted output, starting with an initial estimate \(F_{0}(x)=\text{mean}(y)\). At each iteration, a regression tree \(h_{m}(x)\) is fit to the negative gradient of the loss, and predictions are updated as \(F_{m}(x)=F_{m-1}(x)+\eta\cdot h_{m}(x)\), where \(\eta\) is the learning rate. Histogram binning accelerates tree building by grouping continuous features into discrete bins, significantly reducing computation and memory usage. The process continues until a predefined number of iterations is reached or performance no longer improves significantly.
HBGB effectively utilizes the advantages of gradient boosting while incorporating histogram-based techniques to enhance computational efficiency and scalability. HBGB improves predictive performance by repeatedly fitting regression trees to the negative gradients of the loss function. This makes it well-suited for regression tasks on large datasets.
Successive halving
Successive Halving represents an iterative approach aimed at effectively navigating the hyperparameter space to pinpoint the optimal combination ensuring peak model performance29,30. The process of Successive Halving encompasses the following steps:
1. Initialization: For each model, a set of hyperparameter configurations is first defined. These configurations span the many hyperparameter combinations that must be evaluated during tuning.
2. Training and Assessment: Each model is trained with the initial hyperparameter configurations on a portion of the training set. The efficacy of every model is then evaluated with a suitable metric, such as mean squared error or cross-validation accuracy.
3. Selection: Models demonstrating superior performance are singled out based on the evaluation metric. Underperforming models are discarded, retaining only the subset of models exhibiting top-tier performance.
4. Halving: The number of candidates is progressively reduced by eliminating a portion of the remaining models, with the proportion determined by a pre-established halving coefficient. The models that survive advance to the next iteration.
5. Iteration: Steps 2 through 4 are repeated until a single model remains. With its well-adjusted hyperparameters, this model is regarded as the best one.
This approach optimizes the exploration of the hyperparameter space, concentrating computational resources on the most effective combinations. This systematic approach streamlines the identification of hyperparameters that enhance performance in boosted models31.
To optimize the ML models, the Successive Halving algorithm was employed as an efficient hyperparameter optimization technique. The configuration used included an initial pool of 64 candidate configurations per model, a halving factor of 2, and a maximum iteration limit of 6 rounds. Each candidate was trained and validated using 5-fold cross-validation. In each round, only the top 45% of candidates (based on performance) were retained for the next iteration, effectively concentrating resources on the most promising configurations.
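A search of this kind can be sketched with scikit-learn's HalvingGridSearchCV; whether the authors used this exact implementation is not stated, and the small grid below is illustrative rather than the search space of Table 2. `factor=2` halves the candidate pool each round while increasing the resources given to the survivors, and `cv=5` with `scoring="r2"` mirrors the 5-fold R² objective described in the text.

```python
# Successive Halving over a small illustrative grid for an ET regressor.
import numpy as np
from sklearn.experimental import enable_halving_search_cv  # noqa: F401
from sklearn.model_selection import HalvingGridSearchCV
from sklearn.ensemble import ExtraTreesRegressor

rng = np.random.default_rng(5)
X = rng.uniform(size=(2000, 2))                  # stand-in for (r, z)
y = np.exp(-4 * X[:, 0]) * X[:, 1]               # stand-in for C

search = HalvingGridSearchCV(
    ExtraTreesRegressor(random_state=0),
    param_grid={"n_estimators": [50, 100],
                "max_depth": [None, 10],
                "min_samples_leaf": [1, 5]},
    factor=2,              # halve the candidate pool each round
    cv=5,                  # 5-fold cross-validation per candidate
    scoring="r2",          # maximize mean R^2 across folds
    random_state=0,
).fit(X, y)
print(search.best_params_, round(search.best_score_, 4))
```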
The objective function for optimization was to maximize the average R² score across the five folds. This metric was chosen for its ability to reflect the proportion of variance explained by the model, making it suitable for evaluating regression performance.
The hyperparameter search space for each model is summarized in Table 2. The ranges were selected based on prior experimentation and model-specific characteristics.
After hyperparameter tuning, all models showed measurable improvement in predictive performance. The DT model exhibited a 9.6% increase in R², RF improved by 4.1%, ET improved by 2.7%, and HBGB showed a 3.3% enhancement. These results highlight the effectiveness of Successive Halving in refining model performance while reducing computational overhead compared to exhaustive search techniques.
Results and discussion
The performance of the optimized tree-based models (DT, RF, ET, and HBGB) was evaluated for predicting concentration (C) from the spatial coordinates (r, z). Table 3 summarizes the final results of these models on the test set. The optimized models have been compared using the R² score, RMSE, and MAE.
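The three comparison metrics can be computed with scikit-learn as below; `y_true` and `y_pred` are small placeholder arrays, not the paper's test set.

```python
# R^2, RMSE, and MAE for a regression prediction.
import numpy as np
from sklearn.metrics import r2_score, mean_squared_error, mean_absolute_error

y_true = np.array([100.0, 250.0, 400.0, 550.0])   # mol/m^3, illustrative
y_pred = np.array([110.0, 240.0, 405.0, 560.0])

r2 = r2_score(y_true, y_pred)
rmse = np.sqrt(mean_squared_error(y_true, y_pred))
mae = mean_absolute_error(y_true, y_pred)
print(f"R^2={r2:.4f}  RMSE={rmse:.2f} mol/m^3  MAE={mae:.2f} mol/m^3")
```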
The ET model emerged as the top performer, with the highest R² of 0.99674, indicating an exceptional capability to explain approximately 99.67% of the variance in the chemical concentration distribution. Furthermore, the ET model demonstrated the lowest RMSE of 37.0212 mol/m³ and MAE of 19.6784 mol/m³, signifying its superior accuracy in predicting chemical concentration profiles. Figure 3 compares predicted to expected values for the ET model. The fit remains accurate even for this large dataset, which can be attributed to the proper optimization employed for this model.
Following closely, the RF model attained an impressive R² score of 0.989, revealing a high level of fitting accuracy within the concentration dataset of membrane. The RF model exhibited an RMSE of 67.5164 mol/m³ and an MAE of 51.3307 mol/m³, showcasing its efficacy in capturing complex spatial relationships and predicting chemical concentration distributions. Figure 4 presents a comparison between the values predicted by the RF model and the values that were expected.
The HBGB model also demonstrated commendable performance, yielding an R² score of 0.99601. While slightly lower than the ET model, the HBGB model showcased competitive RMSE and MAE values of 41.5091 mol/m³ and 29.8225 mol/m³, respectively, indicating its effectiveness in accurately predicting chemical concentration distributions. Figure 5 illustrates a juxtaposition of the values predicted by the HBGB model and the values that were anticipated.
In contrast, the Decision Tree (DT) model, while providing reasonable predictions with an R² score of 0.94224, exhibited higher RMSE and MAE values of 156.329 mol/m³ and 104.199 mol/m³, respectively. This suggests that the DT model, while effective, may not capture the intricacies of the data as well as the ensemble methods. Figure 6 compares the DT model’s predicted values to the expected values.
Finally, despite the fact that the existing models all have good performance, the ET model has been chosen as the main model of this research. The learning curve shown in Fig. 7 validates the ET performance. It demonstrates the variation in training and cross-validation scores of the model as the quantity of training examples rises. Significantly, as the quantity of training examples escalates, both scores converge and stabilize, signifying that the model exhibits robust performance with novel data. This convergence signifies that the model is proficiently assimilating knowledge from the training data and can generate precise predictions on novel instances, thereby validating its efficacy and resilience.
Using the best model (ET), the partial effects of the inputs on the output are shown in Figs. 8 and 9. Figure 10 also illustrates the concentration as a function of the inputs, and Fig. 11 displays the contour plot of this variable. The concentration change is greater in the radial direction as a result of molecular diffusion in the shell side. It should be pointed out that the distribution of C has been obtained and shown for the shell side of the membrane contactor14. In the CFD model, both axial and radial diffusion were considered, although the axial contribution is small compared to the radial one; convection dominates in the axial direction owing to the fluid flow in that direction in the membrane contactor. The results obtained in this study agree with those reported by Thajudeen et al.14 for the simulation of a membrane contactor using CFD and ML.
The highly accurate predictions from the ET model provide a valuable tool for practical optimization of membrane contactor systems. By simulating a wide range of input parameter combinations—beyond those present in the original dataset—the model can be used to conduct an exhaustive search for optimal membrane properties such as porosity and thickness, as well as operational conditions like flow rate and temperature. This enables data-driven adjustments to enhance separation efficiency and system performance, facilitating informed design and control strategies in real-world applications.
Conclusion
In conclusion, this study highlights the significant impact of tree-based (especially ensemble) machine learning models, particularly the ET model, in accurately predicting chemical concentration distributions within a membrane contactor based on spatial coordinates. By employing a methodical process that includes fine-tuning the hyperparameters and identifying and handling outliers, the ET model demonstrated superior performance, achieving an outstanding R² score of 0.99674, which indicates the model’s ability to account for approximately 99.67% of the variation in chemical concentration profiles. The findings reveal that the ET model achieves notably high accuracy, reflected in its minimal RMSE and MAE scores. Overall, this work illustrates the robustness of ensemble-based ML approaches in enhancing the reliability and accuracy of predictive outcomes for practical applications in membrane engineering processes.
Data availability
The datasets used during the current study are available from the corresponding author on reasonable request.
References
Chen, Y. et al. Recent advances in layered double hydroxides for pharmaceutical wastewater treatment: A critical review. Crit. Rev. Environ. Sci. Technol. 55(14), 1097–1123 (2025).
Vatanpour, V. et al. Advances in self-cleaning membranes for effective water and wastewater treatment. Sep. Purif. Technol. 373, 133539 (2025).
Agrahari, G. K., Verma, N. & Bhattacharya, P. K. Removal of benzoic acid from water by reactive extraction using hollow fiber membrane contactor: experiment and modeling. CLEAN Soil Air Water 42(7), 901–908 (2014).
Cancilla, N. et al. A porous media CFD model for the simulation of Hemodialysis in Hollow fiber membrane modules. J. Membr. Sci. 646, 120219 (2022).
Kiani, S. et al. Enhancement of CO2 removal by promoted MDEA solution in a hollow fiber membrane contactor: a numerical and experimental study. Carbon Capture Sci. Technol. 2, 100028 (2022).
Khan, M. S. et al. Enhancing desalination efficiency and Boron removal through functionalization of layered double hydroxide thin-film nanocomposite membranes. Chem. Eng. J. 515, 163730 (2025).
Nakhjiri, A. T. et al. Influence of non-wetting, partial wetting and complete wetting modes of operation on hydrogen sulfide removal utilizing monoethanolamine absorbent in Hollow fiber membrane contactor. Sustainable Environ. Res. 28(4), 186–196 (2018).
Rahim, N. A., Ghasem, N. & Al-Marzouqi, M. Stripping of CO2 from different aqueous solvents using PVDF Hollow fiber membrane contacting process. J. Nat. Gas Sci. Eng. 21, 886–893 (2014).
Rasaie, M. et al. Highly selective physical/chemical CO2 separation by functionalized Fe3O4 nanoparticles in hollow fiber membrane contactors: experimental and modeling approaches. Energy Fuels 36(8), 4456–4469 (2022).
Ma, C. et al. CFD simulations of fiber-fiber interaction in a Hollow fiber membrane bundle: Fiber distance and position matters. Sep. Purif. Technol. 209, 707–713 (2019).
Shirazian, S., Moghadassi, A. & Moradi, S. Numerical simulation of mass transfer in gas-liquid Hollow fiber membrane contactors for laminar flow conditions. Simul. Model. Pract. Theory. 17(4), 708–718 (2009).
Yu, H. et al. Explainable molecular simulation and machine learning for carbon dioxide adsorption on magnesium oxide. Fuel 357, 129725 (2024).
Honglei, Y. et al. Insights into the diffusion coefficient and adsorption energy of NH3 in MgCl2 from molecular simulation, experiments, and machine learning. J. Mol. Liq. 395, 123822 (2024).
Thajudeen, K. Y., Ahmed, M. M. & Alshehri, S. A. Integration of machine learning and CFD for modeling mass transfer in water treatment using membrane separation process. Sci. Rep. 14(1), 23970 (2024).
Fan, L. et al. Application of machine learning to predict the fluoride removal capability of MgO. J. Environ. Chem. Eng. 13(1), 115317 (2025).
Cutler, A., Cutler, D. R. & Stevens, J. R. Tree-based methods. In High-Dimensional Data Analysis in Cancer Research 1–19 (Springer, 2008).
Nakhjiri, A. T. et al. Modeling and simulation of CO2 separation from CO2/CH4 gaseous mixture using potassium glycinate, potassium argininate and sodium hydroxide liquid absorbents in the Hollow fiber membrane contactor. J. Environ. Chem. Eng. 6(1), 1500–1511 (2018).
Qatezadeh Deriss, A., Langari, S. & Taherian, M. Computational fluid dynamics modeling of ibuprofen removal using a hollow fiber membrane contactor. Environ. Prog. Sustain. Energy 40(1), e13490 (2021).
Díaz-García, J. A. & González-Farías, G. A note on Cook’s distance. J. Stat. Plann. Inference 120(1–2), 119–136 (2004).
Banerjee, M. Cook’s distance in linear longitudinal models. Commun. Statistics-Theory Methods. 27(12), 2973–2983 (1998).
Gottard, A., Vannucci, G. & Marchetti, G. M. A note on the interpretation of tree-based regression models. Biom. J. 62(6), 1564–1573 (2020).
Rokach, L. & Maimon, O. Decision trees. In Data Mining and Knowledge Discovery Handbook 165–192 (Springer, 2005).
Breiman, L. Random forests. Mach. Learn. 45, 5–32 (2001).
Pavlov, Y. L. Random Forests, in Random Forests (De Gruyter, 2019).
Liaw, A. & Wiener, M. Classification and regression by randomforest. R News. 2(3), 18–22 (2002).
Geurts, P., Ernst, D. & Wehenkel, L. Extremely randomized trees. Mach. Learn. 63(1), 3–42 (2006).
Guryanov, A. Histogram-based algorithm for building gradient boosting ensembles of piecewise linear decision trees. In Analysis of Images, Social Networks and Texts: 8th International Conference, AIST 2019, Kazan, Russia, July 17–19, 2019, Revised Selected Papers (Springer, 2019).
Tiwari, H. & Kumar, S. Link prediction in social networks using histogram based gradient boosting regression tree. In International Conference on Smart Generation Computing, Communication and Networking (SMART GENCON) (IEEE, 2021).
Soper, D. S. Hyperparameter optimization using successive halving with greedy cross validation. Algorithms 16(1), 17 (2022).
Yang, L. & Shami, A. On hyperparameter optimization of machine learning algorithms: theory and practice. Neurocomputing 415, 295–316 (2020).
Almohana, A. I., Ali Bu sinnah, Z. & Al-Musawi, T. J. Combination of CFD and machine learning for improving simulation accuracy in water purification process via porous membranes. J. Mol. Liq. 386, 122456 (2023).
Author information
Authors and Affiliations
Contributions
Suranjana V. Mayani: Writing, Investigation, Methodology, Software, Supervision. Hessan Mohammad: Writing, Visualization, Modeling, Resources. Soumya V. Menon: Writing, Investigation, Methodology, Data analytics. Rishabh Thakur: Writing, Validation, Methodology, Software. Abdulqader Faris Abdulqader: Writing, Conceptualization, Visualization, Software, Resources. Supriya S.: Writing, Investigation, Conceptualization, Data analytics. Prabhat Kumar Sahu: Writing, Investigation, Methodology, Software, Resources. Kamal Kant Joshi: Writing, Investigation, Validation, Conceptualization. All authors reviewed the manuscript.
Corresponding author
Ethics declarations
Competing interests
The authors declare no competing interests.
Additional information
Publisher’s note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Electronic supplementary material
Below is the link to the electronic supplementary material.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.
About this article
Cite this article
Mayani, S.V., Mohammad, H., Menon, S.V. et al. Separation of organic molecules from water by design of membrane using mass transfer model analysis and computational machine learning. Sci Rep 15, 23471 (2025). https://doi.org/10.1038/s41598-025-09156-y