Fig. 3: Sensor-dependent mask predictions and performance evaluations on generalizable multisensory ego-motion estimation under diverse settings. | Nature Machine Intelligence

From: Deep learning-based robust positioning for all-weather autonomous driving

a, Illustration of sample frames16, multimodal measurements and the corresponding predicted masks. Each row shows a pair of input measurements and the predicted masks for one modality. White and dark regions indicate valid and invalid points in the measurements, respectively, capturing the multisensory degradation caused by both adverse weather and inherent sensor deficiencies. b, Multimodal performance evaluation of ego-motion estimation and multisensory fusion. The box plots show the median, the first and third quartiles, and the minimum and maximum values of the errors in motion predictions. The error distributions, in terms of the error quartiles, are shown for the translation and rotation components of motion for each modality. Sensor fusion greatly boosts the overall motion estimation performance.