Table 3 Average precision and recall values obtained across the different experimental scenarios.
From: User preference modeling for movie recommendations based on deep learning
| Method | Metric | Scenario: # of Users | Scenario: # of Movies | Scenario: # of Recommended items |
|---|---|---|---|---|
| Proposed method | Avg. Precision | 0.8326 | 0.8235 | 0.8108 |
| | Avg. Recall | 0.7893 | 0.7249 | 0.6904 |
| Sun et al.³² | Avg. Precision | 0.8298 | 0.8125 | 0.8005 |
| | Avg. Recall | 0.7755 | 0.7223 | 0.6635 |
| Yadav et al.³³ | Avg. Precision | 0.8290 | 0.8126 | 0.8007 |
| | Avg. Recall | 0.8022 | 0.7223 | 0.7082 |
| Ez-Zahout et al.³⁴ | Avg. Precision | 0.8197 | 0.7970 | 0.7920 |
| | Avg. Recall | 0.7570 | 0.7065 | 0.6697 |
| Ambikesh et al.³⁵ | Avg. Precision | 0.8251 | 0.8082 | 0.7938 |
| | Avg. Recall | 0.7602 | 0.7115 | 0.6668 |
| Zubi et al.³⁶ | Avg. Precision | 0.8304 | 0.8198 | 0.7989 |
| | Avg. Recall | 0.7583 | 0.7090 | 0.6700 |
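For reference, the averages above are presumably computed with the standard top-N definitions of precision and recall; since the table does not restate the exact protocol (e.g., the relevance threshold on ratings), the following is a sketch of the usual formulation, with notation introduced here for illustration: $R_u$ is the set of items recommended to user $u$, $T_u$ the set of relevant test items for $u$, and $U$ the set of users.

```latex
% Assumed standard top-N definitions (notation introduced here, not taken from the paper):
% R_u = items recommended to user u, T_u = relevant test items of u, U = set of users.
\[
\text{Avg. Precision} = \frac{1}{|U|} \sum_{u \in U} \frac{|R_u \cap T_u|}{|R_u|},
\qquad
\text{Avg. Recall} = \frac{1}{|U|} \sum_{u \in U} \frac{|R_u \cap T_u|}{|T_u|}
\]
```

Under these definitions, precision penalizes irrelevant items in the recommendation list while recall penalizes relevant items that were missed, which is consistent with the pattern in the table: recall drops faster than precision as the scenarios change.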