Introduction

Breast cancer remains a leading cause of mortality and morbidity in women globally, accounting for approximately 24.5% of all cancer diagnoses and 15.5% of cancer-related deaths1,2. Notably, in 2020 it was the most prevalent and most lethal cancer among women in many countries3,4,5. Although incidence continues to rise, mortality rates have plateaued or declined in recent years, potentially owing to advances in treatment modalities and the widespread adoption of mammography screening programs, particularly in developed nations6,7,8. This paradoxical landscape of increasing incidence alongside stable or decreasing mortality underscores the critical need for accurate prognostic models, particularly those capable of predicting 5-year survival.

Despite a rising breast cancer incidence in Iran, underprivileged provinces experience slower increases, plausibly due to limited diagnostic infrastructure. Paradoxically, mortality rates currently remain lower in these regions; however, recent data suggest a potential trend reversal, foreshadowing a future rise in mortality within these communities9,10. Furthermore, the age at diagnosis in Iran is approximately a decade younger than in many developed nations11. Five-year and ten-year survival rates are estimated at 80% and 69%, respectively12. These discrepancies in survival across regions likely stem from disparities in early detection initiatives and access to adequate healthcare facilities13,14. This complex landscape underscores the need for tailored predictive models that account for such socio-economic and infrastructural variations.

Accurate 5-year survival prediction remains a critical yet formidable challenge for oncologists15,16,17,18,19. This task lies at the heart of personalized medicine, informing crucial treatment decisions such as medication selection and dosage regimens20,21,22. Breast cancer prognosis is shaped by diverse factors, encompassing patient demographics, tumor characteristics, biomarker profiles, and lifestyle habits7,23,24.

Machine learning (ML) and its subfield, deep learning (DL), which involve algorithms that analyze data in a manner resembling human reasoning25, have gained substantial traction in oncology, particularly for diagnosis and detection using image processing26,27,28 and for survival prediction29,30,31,32. These technologies offer compelling advantages, potentially aiding healthcare professionals at various treatment stages33,34,35,36. Notably, they hold the promise of enhancing technical parameters (e.g., treatment quality and speed) while generating valuable clinical insights37,38. Accurate survival models empower physicians to streamline decision-making, potentially minimizing false positives and false negatives. For patients with lower predicted survival, this could inform the consideration of less invasive treatments with reduced side effects39,40,41. To our knowledge, few studies have addressed both conventional machine learning approaches and deep learning methods for predicting breast cancer survival from non-image data. Building upon existing research (Table 1), this study aimed to develop and compare DL and ML models for predicting 5-year breast cancer survival.

Table 1 Recent related studies.

Materials and methods

This section describes the characteristics of the datasets used and outlines the step-by-step procedures employed, covering the entire process from dataset preparation to model development and evaluation.

Data source and dataset characteristics

The dataset in this study comprised records of 2644 patients from two centers. The first was the Breast Diseases Research Center of Shiraz University of Medical Sciences, which supplied data on 1465 patients from 2003 to 2013. The second was the Cancer Research Center of Shahid Beheshti University of Medical Sciences, which provided data on 1179 patients from 2008 to 2018. The former and latter datasets consisted of 151 and 66 variables, respectively.

Data preparation

After identifying the variables common to the two datasets and comprehensively reviewing the relevant literature, 34 variables were selected, as outlined in Table 2. To augment patient data, the first step involved gathering specific values from the Health Information Systems at Tajrish Hospital in Tehran and Shahid Motahari Clinic in Shiraz. In the second step, a total of 643 successful telephone calls were conducted to collect information on patients' survival status and lifestyle. Simultaneously, the survival status of patients who did not respond was verified through the Iran Health Insurance System. Patients whose survival status could not be ascertained through any of these measures were excluded from the study. In total, data from 1875 patients were used, comprising 741 individuals with less than 5-year survival and 1134 individuals with 5-year or greater survival. Finally, the datasets were normalized and missing values were imputed using the K-Nearest Neighbors (KNN) imputer.
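For illustration, a minimal sketch of this final preparation step with scikit-learn follows; the file name and the neighbor count are assumptions, not values reported in this study.

```python
import pandas as pd
from sklearn.impute import KNNImputer
from sklearn.preprocessing import MinMaxScaler

# Hypothetical file: one center's records restricted to the selected variables.
df = pd.read_csv("breast_cancer_records.csv")

# Normalize each variable to [0, 1]; NaNs are ignored when fitting the
# scaler and preserved in the output, so imputation can follow.
scaled = pd.DataFrame(MinMaxScaler().fit_transform(df), columns=df.columns)

# Fill each missing entry from the k most similar patients; the neighbor
# count here is an assumption, not a setting reported by the study.
prepared = pd.DataFrame(KNNImputer(n_neighbors=5).fit_transform(scaled),
                        columns=df.columns)
```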

Table 2 Study variables. Quantified labels are indicated in parentheses before each variable state.

The overall survival of patients was determined by calculating the time interval between diagnosis and death. Specifically, if this interval exceeded 5 years, or if the patient was alive with more than 5 years having elapsed since the diagnosis of breast cancer, the label was assigned as 1; otherwise, it was labeled as 0.
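Expressed as code, this labeling rule looks roughly as follows (a sketch; the column names are hypothetical):

```python
import numpy as np

# years_to_event (hypothetical column): the diagnosis-to-death interval for
# deceased patients, or the verified time elapsed since diagnosis for
# living patients; died flags the patient's vital status.
survived_5y = (
    ((prepared["died"] == 1) & (prepared["years_to_event"] > 5))    # died after 5 years
    | ((prepared["died"] == 0) & (prepared["years_to_event"] > 5))  # alive beyond 5 years
)
prepared["label"] = np.where(survived_5y, 1, 0)
```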

Model development

A DNN, along with conventional machine learning models (LR, NB, KNN, DT, RF, Extra Trees, SVM, AdaBoost, GBoost, XGB, and MLP), was used in this study. A brief explanation of each algorithm is provided below:

DNN A type of artificial neural network with multiple hidden layers between the input and output layers. DNNs are designed to automatically learn and model complex patterns by passing data through layered architectures50.

LR A statistical algorithm commonly applied to binary classification problems. It extends linear regression to model the likelihood of a dichotomous outcome (e.g., occurrence vs. non-occurrence of an event) by mapping predictions to probabilities51.

NB A probabilistic classifier based on Bayes’ theorem, one of the fundamental principles of probability theory and mathematical statistics. The algorithm assumes that the input variables are independent of one another52.

KNN A non-parametric classification algorithm that determines the class of a test sample based on the classes of its k nearest neighbors in the training data. The algorithm computes the distance between the test sample and all training samples to find these neighbors53.

DT A predictive model that uses a tree-like structure to make decisions based on sequential tests of the input data. Each node represents a decision rule and each branch represents an outcome of that rule; together they ultimately lead to a prediction or classification54.

RF A machine learning algorithm that combines multiple decision trees to improve the prediction accuracy and prevent overfitting. It operates by training each tree on a random subset of the data, with each tree providing a “vote” for the outcome, and the most common vote across all trees is selected as the final prediction55.

Extra Trees An ensemble learning method used for classification and regression tasks, which improves performance by aggregating predictions from multiple decision trees. Unlike traditional decision trees, Extra Trees are built with more randomization in the tree creation process, notably by selecting random splits at each node, which helps to avoid overfitting and enhances model accuracy56.

SVM A supervised machine learning algorithm that works by finding the hyperplane that best separates the data classes in a high-dimensional space. SVM is particularly effective on complex, non-linear data because it uses kernel functions to transform the data into higher dimensions, making it useful for accurate predictions57.

AdaBoost An ensemble machine learning technique that improves the performance of weak classifiers by sequentially combining them into a stronger classifier58.

GBoost A machine learning algorithm that iteratively builds a model by training a sequence of weak learners, typically decision trees, to correct the errors of previous ones59.

XGBoost A highly efficient machine learning algorithm based on gradient boosting principles, known for its accuracy and speed in solving regression and classification problems. XGBoost also integrates several advanced features like regularization, handling missing values, and parallelization, making it suitable for large datasets60.

MLP A type of artificial neural network composed of several layers of neurons, each of which processes data through nonlinear activation functions. This architecture enables MLPs to learn complex patterns in the data, making them suitable for classification and regression61.
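For orientation, these conventional models correspond roughly to the following scikit-learn and XGBoost classes; this is a sketch with near-default settings, not the tuned configurations used in the study.

```python
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import (RandomForestClassifier, ExtraTreesClassifier,
                              AdaBoostClassifier, GradientBoostingClassifier)
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from xgboost import XGBClassifier

models = {
    "LR": LogisticRegression(max_iter=1000),
    "NB": GaussianNB(),
    "KNN": KNeighborsClassifier(),
    "DT": DecisionTreeClassifier(),
    "RF": RandomForestClassifier(),
    "Extra Trees": ExtraTreesClassifier(),
    "SVM": SVC(probability=True),   # probability=True enables AUC computation
    "AdaBoost": AdaBoostClassifier(),
    "GBoost": GradientBoostingClassifier(),
    "XGB": XGBClassifier(),
    "MLP": MLPClassifier(max_iter=500),
}
```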

The performance of these algorithms was thoroughly assessed and compared. Model development and evaluation were conducted using the Python programming language, leveraging scikit-learn42,45,47, TensorFlow42, and Autokeras libraries within the Jupyter Notebook environment.

Feature selection was conducted through three distinct approaches62,63. Initially, modeling was executed using all features, which had been chosen based on a review of relevant articles while also taking dataset limitations into account. Subsequently, features were selected based on a two-tailed p value criterion (< 0.05) using scikit-learn. Lastly, features were chosen through a survey methodology: a questionnaire was formulated and completed by five oncology specialists to pinpoint the essential features for the analysis.
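A minimal sketch of such univariate filtering with scikit-learn follows; the choice of the ANOVA F-test (f_classif) is an assumption, as the study does not name the specific statistical test.

```python
from sklearn.feature_selection import f_classif

# X: prepared feature DataFrame; y: 5-year survival labels.
_, p_values = f_classif(X, y)

# Keep only the features whose p value falls below the 0.05 threshold.
selected = X.columns[p_values < 0.05]
X_selected = X[selected]
```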

As previously stated, the study involved modeling with a deep neural network and 11 conventional machine learning models. To train and fit the conventional models, each dataset was initially divided into two parts: the train set and the test set64. The train set served for model development, hyperparameter tuning, and initial training. It was partitioned into five folds (k = 5), and the modeling process was iterated five times; in each iteration, one fold served as the validation set while the remaining four formed the train set. The model was trained on the train set in each iteration and then validated on the designated validation set. Hyperparameters were tuned through the Grid Search method, which systematically explores a predefined range of hyperparameter values to identify the set that maximizes performance measures such as accuracy. Five evaluation points, i.e. five accuracy values, were thus obtained per configuration, and the configuration with the highest accuracy was taken as the optimal model for that algorithm. As an example, Fig. 1 shows the DT algorithm training process on the Shiraz dataset using k-fold cross-validation, with the accuracy score in each fold.

Fig. 1
figure 1

The DT algorithm training process and the accuracy score in each fold. Yellow boxes show the validation set in each iteration.
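The fold-wise tuning just described corresponds to a standard grid search with 5-fold cross-validation; the following sketch illustrates it for the DT model, using an illustrative parameter grid rather than the study's actual search space.

```python
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.tree import DecisionTreeClassifier

# Hold out a test set first; the 80/20 ratio is an assumption.
X_train, X_test, y_train, y_test = train_test_split(
    X_selected, y, test_size=0.2, stratify=y, random_state=42)

param_grid = {  # an illustrative grid, not the study's search space
    "criterion": ["gini", "entropy"],
    "max_depth": [3, 5, 10, None],
    "min_samples_split": [2, 5, 10],
}

# Each candidate configuration is scored by 5-fold cross-validation (k = 5)
# on the train set, and the best-scoring configuration is retained.
search = GridSearchCV(DecisionTreeClassifier(), param_grid,
                      cv=5, scoring="accuracy")
search.fit(X_train, y_train)
print(search.best_params_, round(search.best_score_, 4))
```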

For training the deep neural network, the dataset was likewise divided into two parts: the train set and the test set. Subsequently, Neural Architecture Search, a deep learning technique for automatically exploring optimal neural network architectures, was applied, and the hyperparameters were adjusted following this exploration. The Autokeras library was used to determine the optimal architecture and to set the hyperparameters. Figure 2 illustrates the model development process for both the deep neural network and the conventional machine learning techniques. The sets of hyperparameters, along with the best parameters of each trained model, are provided as supplementary information.

Fig. 2
figure 2

Dataset preparation and modeling process.
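A minimal AutoKeras sketch of this architecture search is shown below; the max_trials and epochs values are illustrative assumptions, not the study's settings.

```python
import autokeras as ak

# Neural Architecture Search over candidate networks for tabular data.
clf = ak.StructuredDataClassifier(max_trials=10, overwrite=True)
clf.fit(X_train, y_train, epochs=100, validation_split=0.2)

# Export the best Keras network found for later evaluation.
best_model = clf.export_model()
best_model.summary()
```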

Performance evaluation

At this stage, the trained models were evaluated on the test set to derive the final estimate of their performance on previously unseen data. The evaluation encompassed two distinct approaches, as illustrated in Fig. 1. First, each model was assessed on a held-out subset of the same dataset used for training, constituting cross-validation. Second, the evaluation was repeated on the dataset from the other center, constituting an external validation. These two methods provided a comprehensive understanding of the models' performance on both internal and external datasets65,66. It should be noted that hyperparameter tuning was performed exclusively on the training dataset. The following metrics were employed to evaluate the models.

  1. Accuracy The percentage of patients whose survival status is correctly predicted.

     \(Accuracy = \frac{TP + TN}{Total} \times 100\)

     True positive (TP) denotes individuals who are alive and are correctly predicted as alive; true negative (TN) denotes individuals who have died and are correctly predicted as dead.

  2. Specificity The percentage of individuals who have died (label 0) and are correctly predicted as dead.

     \(Specificity = \frac{TN}{TN + FP} \times 100\)

  3. Sensitivity The percentage of individuals who are alive (label 1) and are correctly predicted as alive.

     \(Sensitivity = \frac{TP}{TP + FN} \times 100\)

  4. Area under the curve (AUC) Indicates how well the model distinguishes between the two class labels (0 and 1).
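For concreteness, a sketch of computing these metrics with scikit-learn is shown below; model, X_test, and y_test refer to a fitted classifier and the held-out split from the earlier sketches, and X_external, y_external stand for the other center's data.

```python
from sklearn.metrics import confusion_matrix, roc_auc_score

y_pred = model.predict(X_test)

# scikit-learn lays out the binary confusion matrix as [[TN, FP], [FN, TP]].
tn, fp, fn, tp = confusion_matrix(y_test, y_pred).ravel()

accuracy = (tp + tn) / (tp + tn + fp + fn) * 100
specificity = tn / (tn + fp) * 100
sensitivity = tp / (tp + fn) * 100
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])

# External validation repeats the same scoring on the other center's data.
external_accuracy = (model.predict(X_external) == y_external).mean() * 100
```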

Results

In this study, modeling was conducted with conventional machine learning algorithms and a deep neural network in three distinct approaches. In the first stage, modeling was executed using all available features extracted from related articles. In the second and third stages, modeling was preceded by feature selection based on the p value and on the opinions of five oncology experts, respectively. Given that each algorithm was trained with three datasets (Tehran, Shiraz, and a combination of the two), nine models were created for each algorithm: three feature-selection approaches times three datasets. All models, except those trained on the combined dataset, were evaluated through both cross-validation and external validation. Based on the recorded evaluation metrics, the highest average accuracy in cross-validation was 94.29%, attributed to models trained with the Shiraz dataset and features selected by oncologists. The highest average accuracy in external validation was 76.42%, observed in models trained with the Shiraz dataset using all features. The maximum average AUC in cross-validation, 0.983, was associated with models trained on the Shiraz dataset with features selected based on the p value. In external validation, the maximum average AUC was 0.851, achieved by models trained with the Tehran dataset and features selected by oncologists. The following subsections present the evaluation results in detail.

Evaluation and performance comparison of trained models with all features

According to Table 3, the highest accuracy on the test data in cross-validation was 95.43%, achieved by the GBoost and Extra Trees models trained with the Shiraz dataset. The highest accuracy in external validation reached 85.56%, achieved by the DNN model; Fig. 3 illustrates the architecture of this model. Among the conventional models, the highest external validation accuracy was 81.69%, obtained with the SVM model. Both models were trained with the Shiraz dataset and tested on the Tehran dataset. Figure 4 shows the learning curves of the models that recorded the highest cross-validation or external validation accuracy. The highest AUC in cross-validation was attained by XGB and Extra Trees, both at 0.994, and in external validation by Extra Trees at 0.960. Figure 5 illustrates the ROC curves of the models that recorded the highest AUC in cross and external validation.

Table 3 Evaluation results of trained models based on all features.
Fig. 3
figure 3

The architecture of the DNN model trained with the Shiraz dataset, which showed the highest accuracy in external validation among all 108 trained models.

Fig. 4
figure 4

Learning curves for the Extra Trees, SVM, GBoost, and DNN models trained on the Shiraz dataset with all features. The Extra Trees learning curve indicates that the training score does not improve with more training data, whereas the cross-validation score does. The SVM and GBoost learning curves show that while the training scores decrease with more training data, the cross-validation scores increase; using all the training data brings the training and cross-validation scores to more reasonable and realistic levels. The DNN learning curve indicates that both the training and cross-validation scores increase with more training data.
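Curves of this kind can be produced with scikit-learn's learning_curve utility; the following sketch reuses the X_train and y_train variables assumed in the earlier sketches.

```python
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.model_selection import learning_curve

# Accuracy on the training folds vs. the validation folds as the amount
# of training data grows (here, 10 sizes from 10% to 100% of the train set).
train_sizes, train_scores, val_scores = learning_curve(
    ExtraTreesClassifier(), X_train, y_train,
    cv=5, train_sizes=np.linspace(0.1, 1.0, 10), scoring="accuracy")

train_mean = train_scores.mean(axis=1)  # the training-score curve
val_mean = val_scores.mean(axis=1)      # the cross-validation-score curve
```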

Fig. 5
figure 5

ROC curves for Extra Trees performance in cross and external validation and XGB in cross-validation.

Evaluation and performance comparison of trained models with selected features based on p value

In this part, feature selection was performed based on the two-tailed p value criterion (< 0.05) before modeling. The features selected from the Shiraz, Tehran, and combined datasets are shown in Table 4.

Table 4 Selected features in each dataset based on p value.

According to Table 5, the highest accuracy on the test data in cross-validation was 96.26%, achieved by the DT model trained with the Shiraz dataset. The highest accuracy in external validation was 82.89%, obtained with the SVM model trained with the Shiraz dataset and tested on the Tehran dataset. Figure 6 shows the learning curves of the models that recorded the highest cross-validation or external validation accuracy. The highest AUC values in cross-validation and external validation were obtained by Extra Trees (0.992) and MLP (0.944), respectively. Figure 7 illustrates the ROC curves of the models that recorded the highest AUC in cross and external validation.

Table 5 Evaluation results of trained models based on selected features (p value).
Fig. 6
figure 6

Learning curves for the DT and SVM models trained on the Shiraz dataset with features selected based on p values. The DT learning curve indicates that, despite fluctuations, both the training and cross-validation scores increase with more training data. The SVM learning curve shows that while the training scores decrease with more training data, the cross-validation scores increase; using all the training data brings the training and cross-validation scores to more reasonable and realistic levels.

Fig. 7
figure 7

ROC curves for MLP performance in external validation and Extra Trees in cross-validation.

Evaluation and performance comparison of trained models with features selected by oncologists

In this part, preceding the modeling process, feature selection was carried out based on a questionnaire completed by five oncology specialists. The questionnaire asked the specialists to score the importance and impact of each feature on the survival of breast cancer patients on a scale from 1 to 5 (1: unimportant, 5: very important). Features with an average score of 3 or higher were selected for use in modeling. Figure 8 illustrates the selected features along with their respective average scores. Notably, the features selected were the same for the Shiraz, Tehran, and combined datasets.
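The selection rule reduces to a simple threshold on the mean score; a sketch follows, assuming a hypothetical table of the five specialists' ratings.

```python
import pandas as pd

# Hypothetical file: rows = candidate features,
# columns = the five oncology specialists' ratings on the 1-5 scale.
scores = pd.read_csv("oncologist_scores.csv", index_col="feature")

# Keep every feature whose average rating is at least 3.
mean_scores = scores.mean(axis=1)
selected_features = mean_scores[mean_scores >= 3].index.tolist()
```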

Fig. 8
figure 8

Selected features and their average scores determined by oncologists.

As per Table 6, the highest accuracy on the test data in cross-validation reached 95.85%, attributed to the GBoost model trained with the Shiraz dataset. The highest accuracy in external validation was 81.54%, achieved by the DNN model. Among the conventional machine learning models, the highest external validation accuracy was 77.82%, obtained with the LR model. Both the DNN and LR models were trained using the Shiraz dataset and tested on the Tehran dataset. Figure 9 shows the learning curves of the models that recorded the highest cross-validation or external validation accuracy. The maximum AUC values in cross-validation and external validation were attained by RF (0.992) and LR (0.972), respectively. Figure 10 illustrates the ROC curves of the models that recorded the highest AUC in cross and external validation.

Table 6 Evaluation results of trained models based on features selected by oncologists.
Fig. 9
figure 9

Learning curves for the LR, GBoost, and DNN models trained on the Shiraz dataset with features selected by oncologists. The LR and DNN learning curves indicate that, despite fluctuations, both the training and cross-validation scores increase with more training data. The GBoost learning curve shows that while the training score decreases with more training data, the cross-validation score increases; using all the training data brings the training and cross-validation scores to more reasonable and realistic levels.

Fig. 10
figure 10

ROC curves for LR performance in external validation and RF in cross-validation.

Discussion

In this study, the utilization of two datasets significantly mitigates the likelihood of bias associated with single-center studies, an issue often cited in similar research45. Moreover, each algorithm was trained three times per dataset: once with all features, once with features selected based on the p value, and once with features selected by oncologists. Across the three datasets, this resulted in a total of 108 models. Subsequently, the performance of the conventional machine learning models and the DNN was compared.

Certainly, one of the strengths of this study lies in the comprehensive consideration of various variables encompassing tumor characteristics, tumor markers, patient clinical information, patient characteristics, and lifestyle factors. An essential step before modeling is feature selection, a task accomplished through various methods such as RF, 1NN, KNN, and Cox regression. Beyond these techniques, another approach to feature selection involves consulting experts. Study42 emphasizes the importance of specialist input in refining the feature set: their dataset initially contained 113 features, but after expert consultation, 89 features were discarded. As previously mentioned, in our study feature selection was conducted not only based on the p value but also through a survey of oncologists, as a result of which 24 items were selected from the initial set of 32 features. Across the feature selection methods applied in this study, the features that consistently emerged as significant include metastasis, recurrence, age at diagnosis, estrogen and progesterone hormone receptors, tumor size, lymphovascular invasion, and the type of surgery performed. Notably, tumor size, age at diagnosis, hormone receptors, and surgery have consistently been identified as important characteristics in many studies, aligning with the results observed in this investigation46,47,49.

The results reveal that the Extra Trees and GBoost models, trained on all features, achieved the highest cross-validation accuracy at 95.43%, while the DNN model demonstrated the highest external validation accuracy at 85.56%. This finding is consistent with study45, in which both cross-validation and external validation were conducted and the deep neural network model outperformed the others on all evaluation indicators. In contrast, among our models trained with all features, the XGB model demonstrated the highest AUC in cross-validation, whereas in study45 there was no significant performance difference between XGB and the other models.

For models trained on features selected based on the p value, the DT model achieved the highest cross-validation accuracy at 96.26%. This contrasts with study43, which reported negligible performance differences between the DT, SVM, and RF models. In external validation, the SVM model's accuracy of 82.89% supports the findings of study41, which also recognized SVM's strong performance after the DNN. Among these models, the highest AUC in cross-validation and external validation belonged to Extra Trees and MLP, respectively, whereas in study40 the AUC of the MLP model did not differ much from that of the other models.

Regarding models trained on features selected by oncologists, the highest cross-validation accuracy was 95.85%, attributed to the GBoost model, and the highest external validation accuracy was 81.54%, attributed to the DNN. This result is consistent with study41, which highlighted the DNN's superior performance in external validation. However, in study46, where feature selection was conducted with expert assistance, the RF model demonstrated the highest accuracy. The highest AUCs for models trained with oncologist-selected features were achieved by RF in cross-validation and LR in external validation, contrasting with study42, where XGB recorded the highest AUC in cross-validation.

The findings of the current study are in line with study41, which reported the DNN as the best model. Overall, aside from the DNN, other studies36,38,43,45 indicated that neural network-based models such as ANN and MLP showed better performance. It is noteworthy that the contradictions among studies36,42,43,45 could be attributed to the use of different datasets as well as different feature selection approaches.

Study limitations and future considerations

This study encountered several limitations. First, the inaccessibility of patients' genetic data was a significant constraint; combining genetic data with the other available information could potentially enhance the efficiency of the models. Additionally, the absence of certain aspects of the patients' medical history, such as blood pressure, blood sugar levels, and other cancers, could notably impact the performance and accuracy of both ML and DL models. Considering the drugs used in the treatment process would also be a valuable addition, bringing the study results closer to outcomes obtained in real-world scenarios. Future investigations could include using medical images alongside other forms of data and training deep learning models such as convolutional neural networks (CNNs). Moreover, developing and comparing metastasis and recurrence prediction models could provide a broader perspective in this field. Finally, incorporating openly accessible datasets for external validation could enhance the applicability of our findings.

Conclusion

To the best of our knowledge, this study represents a pioneering effort in Iran, being the first to introduce a breast cancer survival prediction model using deep learning. Leveraging data from two centers and incorporating external validation further distinguishes the study. The results indicate that, overall, the DNN model demonstrated superior prediction accuracy in external validation, possibly because DNNs capture non-linear relationships and interactions among features better than simpler models; however, the DNN was not consistently superior on other performance metrics. Among the conventional models, SVM showed the highest prediction accuracy in external validation, plausibly because it employs kernel functions to transform data into higher dimensions, allowing it to capture complex relationships between features. Notably, evaluation metrics were generally higher for models trained with the Shiraz dataset. This discrepancy might be attributed to the Shiraz dataset containing fewer missing values than the Tehran dataset; missing values were imputed using the KNN algorithm. With an increasing number of similar studies and positive outcomes, there is optimism that ongoing advancements in the field will lead to optimized medical decisions and improved disease prognosis through the use of deep learning algorithms to uncover hidden patterns in data.