Fig. 1: PEAN model and dataset.
From: Deep learning quantifies pathologists’ visual patterns for whole slide image diagnosis

a PEAN model: after training on pathologists’ slide-reviewing data, the model can both perform a multi-class classification task and imitate the pathologists’ slide-reviewing behaviors. b Data distribution of the training dataset, internal testing dataset, and external testing dataset. The color legend denoting the various diseases also applies to (c, d). c Total number of patients with each skin condition in the dataset. d Number of slide-reviewing operations performed by each pathologist. The “Overlap” column includes the images listed for each pathologist. e High-magnification images showing the ROIs (heatmaps, second row) in which the pathologist’s gaze overlaps closely with the actual tumor tissue (marked in blue in the first row). At lower magnifications, the distribution of the pathologist’s observations corresponds approximately to the actual tumor tissue; more examples are illustrated in Fig. 2b. We also observed that the areas on which the pathologists focused more attention typically contained chaotic tumor boundaries. Even at high magnification, manual annotation of scattered tumor cells within these areas is challenging, underscoring one of the advantages of using eye tracking for “visual annotation”. BCC basal cell carcinoma, SCC squamous cell carcinoma, SK seborrheic keratosis.