
Figure 3

From: Contactless facial video recording with deep learning models for the detection of atrial fibrillation


Study flow diagram. AF atrial fibrillation, DCNN deep convolutional neural network, ECG electrocardiography, NSR normal sinus rhythm, rPPG remote photoplethysmography. Step 1: Case enrollment and ECG-proven classification. Step 2: Extraction of rPPG signals and division into 30-s segments as the data for three datasets: "AF vs NSR", "AF vs Others", "AF vs Non-AF". Step 3: Each segment was used as the input to the DCNN model. For each dataset, tenfold cross-validation was applied to measure model performance, with the data split into a training set (9 folds) and a test set (1 fold). The procedure was repeated ten times so that each fold served exactly once as the hold-out set. The average accuracy across the ten folds was reported as the model's performance, and the standard deviation of performance across folds was also calculated. Step 4: The best-performing models were used to determine whether each 30-s rPPG segment was AF. Step 5: A participant with more than 50% of segments classified as AF by the above models was considered positive for AF.
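The sketch below illustrates Steps 3–5 of the caption: tenfold cross-validation at the segment level and the >50% majority rule at the participant level. It is a minimal outline, not the authors' code; `build_dcnn`, `segments`, and `labels` are hypothetical placeholders for the paper's DCNN and its 30-s rPPG segment data.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold


def cross_validate(segments, labels, build_dcnn, n_splits=10):
    """Steps 3: tenfold cross-validation with 9 training folds and 1 test fold,
    each fold serving exactly once as the hold-out set."""
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=0)
    fold_acc = []
    for train_idx, test_idx in skf.split(segments, labels):
        model = build_dcnn()  # hypothetical factory returning a fresh DCNN
        model.fit(segments[train_idx], labels[train_idx])
        fold_acc.append(model.score(segments[test_idx], labels[test_idx]))
    # Report mean accuracy and the between-fold standard deviation.
    return float(np.mean(fold_acc)), float(np.std(fold_acc))


def participant_is_af(segment_predictions):
    """Step 5: a participant is considered AF-positive when more than 50%
    of their 30-s segments are classified as AF (label 1)."""
    return float(np.mean(segment_predictions)) > 0.5
```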
