
Figure 3

From: Personalized risk predictor for acute cellular rejection in lung transplant using soluble CD31

Results from the initial bias-corrected and weighted temporal network. (a–c) Trainable-parameter landscapes of the deep temporal network. The plots show the learning progression of our model across epochs and batches. (a) Contour plot with the optimal projected trajectories in blue over the [−1, 1] range. These trajectories run from "mountain" values near 1 down to a "valley" near −0.73. The density of the green contour levels and the blue trajectory maps confirm 1 and −0.73 as the global coordinates the network must travel between to reach good convergence. (b) Grid projection exhibiting a global maximum collapsing at 1 and a stretched band of "valleys". (c) Surface of the weights required to reach the optimum sought by the Adam algorithm, scaled by their z-scores. (d) Training history of the model with corrected bias initialization and class weighting. We visualize accuracy across the chosen number of epochs and batches. At a glance, this appears to be a standard training scenario, with minor issues in the validation precision and recall panels at the beginning of training. (e) Goodness of fit of our weighted deep network during training. Its confusion matrix shows only two misclassified patients; moreover, compared with the baseline model, it correctly detected the cases that would otherwise be false negatives. (f) Final heatmap of the metrics over the learning process of our weighted model. The balance among accuracy, recall, and precision is optimal for avoiding missed false negatives. (g) Assessment of our weighted model on the test set, plotted as average metrics (left) and learning-performance scores (right): Cohen's Kappa, Kullback–Leibler divergence (KL divergence), and mean squared error. The former are especially useful because our data were imbalanced. The KL divergence quantifies the inefficiency incurred by approximating the true target distribution with the predicted target distribution. KL divergence has no upper bound; thus, a score of approximately 2.0 can be interpreted as a moderate or fair indication of how well our model estimates the true target distribution.
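
As a rough illustration of the "corrected bias initialization and weighted" setup in panel (d): a common recipe for an imbalanced binary classifier initializes the output-layer bias to log(pos/neg) and passes inverse-frequency class weights during training. This is a minimal sketch under assumed class counts and shapes, not the authors' exact model or code.

```python
# Illustrative sketch only: bias correction and class weighting for an
# imbalanced binary classifier (assumed recipe, hypothetical numbers).
import numpy as np
import tensorflow as tf

pos, neg = 30, 170                     # hypothetical class counts
initial_bias = np.log(pos / neg)       # so the untrained model predicts p ~ pos/(pos+neg)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(10,)),       # hypothetical feature dimension
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(
        1, activation="sigmoid",
        bias_initializer=tf.keras.initializers.Constant(initial_bias)),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy",
                       tf.keras.metrics.Precision(),
                       tf.keras.metrics.Recall()])

# Inverse-frequency class weights: errors on the rare class cost more.
total = pos + neg
class_weight = {0: total / (2 * neg), 1: total / (2 * pos)}
# history = model.fit(X_train, y_train, epochs=50, batch_size=16,
#                     validation_split=0.2, class_weight=class_weight)
```

The bias correction keeps the initial loss from being dominated by the majority class, which is one way to explain the early validation precision/recall wobble settling quickly in panel (d).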
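For reference, the KL divergence cited in panel (g), between the true target distribution P and the predicted target distribution Q, is defined as:

```latex
D_{\mathrm{KL}}(P \,\|\, Q) = \sum_{i} P(i) \log \frac{P(i)}{Q(i)} \;\geq\; 0
```

It equals zero only when Q matches P exactly and has no upper bound, which is why a value of approximately 2.0 is read above as only a moderate or fair fit.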
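The panel (e) and (g) quantities can be computed as sketched below with scikit-learn and SciPy; the paper does not state its exact tooling, and all labels and probabilities here are hypothetical.

```python
# Illustrative sketch: confusion matrix, Cohen's Kappa, MSE, and KL divergence
# (assumed tooling, not the authors' implementation; data are hypothetical).
import numpy as np
from sklearn.metrics import confusion_matrix, cohen_kappa_score, mean_squared_error
from scipy.stats import entropy

y_true = np.array([0, 0, 0, 1, 1, 0, 1, 0, 1, 1])               # hypothetical labels
y_prob = np.array([0.1, 0.2, 0.6, 0.9, 0.8, 0.3, 0.4, 0.2, 0.7, 0.9])
y_pred = (y_prob >= 0.5).astype(int)

print(confusion_matrix(y_true, y_pred))        # rows: true class, cols: predicted
print(cohen_kappa_score(y_true, y_pred))       # chance-corrected agreement, robust to imbalance
print(mean_squared_error(y_true, y_prob))      # squared error of the probabilities

# KL divergence between the empirical class distribution and the mean predicted one.
p_true = np.bincount(y_true) / y_true.size
p_pred = np.array([1 - y_prob.mean(), y_prob.mean()])
print(entropy(p_true, p_pred))                 # scipy's entropy(p, q) computes KL(p || q)
```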
