Abstract
Cycling safety is an increasingly significant challenge for urban transportation, and vision-based object detection for cycling scenes has been widely studied. Datasets are an essential component of this research. However, existing datasets often have limitations, including limited scene diversity and incomplete coverage of safety-critical objects. Most critically, the data are typically not acquired from the perspective of real cyclists, and their limited field of view cannot capture blind-spot risks. To address these limitations, we introduce PanoCycle360, a novel 360° panoramic dataset designed for complex cycling scenarios. The dataset was collected with a 360° panoramic camera mounted on the cyclist's helmet, providing full 360° coverage that eliminates blind spots and encompasses common safety-critical objects and real-world cycling scenarios. The dataset comprises 10,055 manually annotated panoramic images covering nine high-risk classes (E-Bike Rider, Person, Car, Van, Bus, Truck, Cyclist, Cargo Tricycle, and Auto Rickshaw), with 102,171 bounding boxes in total. We evaluated PanoCycle360's applicability to single-stage, two-stage, and Transformer-based object detection frameworks across algorithms of different parameter scales. The experimental results show that PanoCycle360 enables reliable evaluation across multiple scenarios. The PanoCycle360 dataset holds significant promise for advancing cyclist-centric safety research and for developing safety-critical object detection systems for real-world cycling scenarios worldwide.
Data availability
The PanoCycle360 dataset is openly available on Zenodo (https://doi.org/10.5281/zenodo.18993870).
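As an illustration of working with released bounding-box annotations, the sketch below parses YOLO-style label lines (one object per line, normalized coordinates) into named boxes. The label format and the class-index order are assumptions inferred from the class list in the abstract, not a documented specification of the dataset; consult the Zenodo record for the authoritative format.

```python
from dataclasses import dataclass

# Assumed class-index order, taken from the class list in the abstract.
CLASSES = ["E-Bike Rider", "Person", "Car", "Van", "Bus", "Truck",
           "Cyclist", "Cargo Tricycle", "Auto Rickshaw"]

@dataclass
class Box:
    label: str
    x_center: float  # all coordinates normalized to [0, 1]
    y_center: float
    width: float
    height: float

def parse_label_file(text: str) -> list[Box]:
    """Parse YOLO-style lines of the form '<class_id> <xc> <yc> <w> <h>'."""
    boxes = []
    for line in text.splitlines():
        parts = line.split()
        if len(parts) != 5:
            continue  # skip blank or malformed lines
        cid, xc, yc, w, h = int(parts[0]), *map(float, parts[1:])
        boxes.append(Box(CLASSES[cid], xc, yc, w, h))
    return boxes
```

For example, `parse_label_file("2 0.5 0.5 0.1 0.2")` yields one box labeled "Car" under the assumed index order.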
Code availability
The trained model weights and the code used for technical validation are available at https://github.com/Feng-LChen/PanoCycle360. Additionally, a reference script for video frame extraction is provided in the repository.
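For readers who want to reproduce frame extraction before consulting the repository's reference script, the following is a minimal OpenCV-based sketch (not the authors' script): it samples every n-th frame of a video and writes each kept frame as a JPEG. The function names and the default sampling stride are illustrative choices.

```python
from pathlib import Path

def sampled_indices(n_frames: int, every_n: int) -> list[int]:
    """Indices of the frames kept when sampling every n-th frame."""
    return list(range(0, n_frames, every_n))

def extract_frames(video_path: str, out_dir: str, every_n: int = 30) -> int:
    """Save every n-th frame of the video as a JPEG; returns frames written."""
    import cv2  # imported lazily so the sampling helper stays dependency-free
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    saved = idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:  # end of stream or read error
            break
        if idx % every_n == 0:
            cv2.imwrite(str(out / f"frame_{idx:06d}.jpg"), frame)
            saved += 1
        idx += 1
    cap.release()
    return saved
```

With the default stride of 30, a 30 fps recording yields roughly one frame per second of video.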
References
Bobičić, O. & Esztergar-Kiss, D. Enablers and barriers to micromobility adoption: Urban and suburban contexts. Journal of Cleaner Production, 144346, https://doi.org/10.1016/j.jclepro.2024.144346 (2024).
Valenzuela, E. A. et al. Analyzing the behavior and growth of cycling in four North American cities before, during, and after the COVID-19 pandemic. Transportation Research Record 2678(12), 420–433, https://doi.org/10.1177/03611981231157396 (2024).
Scarano, A. et al. Systematic literature review of 10 years of cyclist safety research. Accident Analysis & Prevention 184, 106996, https://doi.org/10.1016/j.aap.2023.106996 (2023).
Goel, R. et al. Effectiveness of road safety interventions: An evidence and gap map. Campbell Systematic Reviews 20(1), e1367, https://doi.org/10.1002/cl2.1367 (2024).
Segui-Gomez, M. et al. Assessing the impact of the WHO Global Status Reports on Road Safety. Injury Prevention, https://doi.org/10.1136/ip-2024-045536 (2025).
Alnawmasi, N. et al. Exploring temporal instability effects on bicyclist injury severities determinants for intersection and non-intersection-related crashes. Accident Analysis & Prevention 194, 107339, https://doi.org/10.1016/j.aap.2023.107339 (2024).
Lv, X. et al. On safety design of vehicle for protection of vulnerable road users: A review. Thin-Walled Structures 182, 109990, https://doi.org/10.1016/j.tws.2022.109990 (2023).
Komol, M. M. R. et al. Crash severity analysis of vulnerable road users using machine learning. PLoS ONE 16(8), e0255828, https://doi.org/10.1371/journal.pone.0255828 (2021).
Alai, H. & Rajamani, R. Low-cost camera and 2-D LIDAR fusion for target vehicle corner detection and tracking: Applications to micromobility devices. Mechanical Systems and Signal Processing 206, 110891, https://doi.org/10.1016/j.ymssp.2023.110891 (2024).
Abadi, A. D. et al. Detection of cyclist’s crossing intention based on posture estimation for autonomous driving. IEEE Sensors Journal 23(11), 11274–11284, https://doi.org/10.1109/JSEN.2023.3234153 (2023).
Tang, C. Monocular Cyclist Detection with Convolutional Neural Networks. Preprint at https://doi.org/10.48550/arXiv.2303.11223 (2023).
Kwon, D. et al. A study on development of the camera-based blind spot detection system using the deep learning methodology. Applied Sciences 9(14), 2941, https://doi.org/10.3390/app9142941 (2019).
Zhang, X. et al. A scene comprehensive safety evaluation method based on binocular camera. Robotics and Autonomous Systems 128, 103503, https://doi.org/10.1016/j.robot.2020.103503 (2020).
Gao, S. et al. Review on panoramic imaging and its applications in scene understanding. IEEE Transactions on Instrumentation and Measurement 71, 1–34, https://doi.org/10.1109/TIM.2022.3216675 (2022).
Zhang, D., Yang, T. & Zhao, B. Swin-fisheye: Object detection for fisheye images. IET Image Processing 18(13), 3904–3915, https://doi.org/10.1049/ipr2.13216 (2024).
Ji, S. et al. Panoramic SLAM from a multiple fisheye camera rig. ISPRS Journal of Photogrammetry and Remote Sensing 159, 169–183, https://doi.org/10.1016/j.isprsjprs.2019.11.014 (2020).
Chiang, S. H. et al. Efficient pedestrian detection in top-view fisheye images using compositions of perspective view patches. Image and Vision Computing 105, 104069, https://doi.org/10.1016/j.imavis.2020.104069 (2021).
Zhou, J. et al. Calibrating the Principal Point of Vehicle-Mounted Fisheye Cameras Using Point-Oriented Representation. IEEE Sensors Journal (2025).
Mao, J. et al. 3D object detection for autonomous driving: A comprehensive survey. International Journal of Computer Vision 131(8), 1909–1963, https://doi.org/10.1007/s11263-023-01790-1 (2023).
Wang, K. et al. Performance and challenges of 3D object detection methods in complex scenes for autonomous driving. IEEE Transactions on Intelligent Vehicles 8(2), 1699–1716, https://doi.org/10.1109/TIV.2022.3213796 (2022).
Garcia-Venegas, M. et al. On the safety of vulnerable road users by cyclist detection and tracking. Machine Vision and Applications 32(5), 109, https://doi.org/10.1007/s00138-021-01231-4 (2021).
Li, X. et al. A new benchmark for vision-based cyclist detection. In 2016 IEEE Intelligent Vehicles Symposium (IV), pp. 1028–1033, https://doi.org/10.1109/IVS.2016.7535515 (2016).
Fu, J., Bajić, I. V. & Vaughan, R. G. Datasets for face and object detection in fisheye images. Data in Brief 27, 104752, https://doi.org/10.1016/j.dib.2019.104752 (2019).
Sekkat, A. R. et al. SynWoodScape: Synthetic surround-view fisheye camera dataset for autonomous driving. IEEE Robotics and Automation Letters 7(3), 8502–8509, https://doi.org/10.1109/LRA.2022.3188106 (2022).
Liao, Y., Xie, J. & Geiger, A. KITTI-360: A novel dataset and benchmarks for urban scene understanding in 2D and 3D. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(3), 3292–3310, https://doi.org/10.1109/TPAMI.2022.3179507 (2022).
Sun, P. et al. Scalability in perception for autonomous driving: Waymo Open Dataset. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 2446–2454, https://doi.org/10.1109/CVPR42600.2020.00252 (2020).
Raina, N. et al. EgoBlur: Responsible Innovation in Aria. Preprint at https://arxiv.org/abs/2308.13093 (2023).
Bendiek Laranjo, A. et al. Equirectangular 360° Image Dataset for Detecting Reusable Construction Components. In Proceedings of the 2024 European Conference on Computing in Construction, pp. 542–549, https://doi.org/10.35490/EC3.2024.266 (2024).
Wenke, E. A. et al. Dur360BEV: A Real-World 360-Degree Single Camera Dataset and Benchmark for Bird-Eye View Mapping in Autonomous Driving. In 2025 IEEE International Conference on Robotics and Automation (ICRA), pp. 3737–3744, https://doi.org/10.1109/ICRA55743.2025.11128609 (2025).
Wang, W. Advanced Auto Labeling Solution with Added Features. 2023. Available online: https://github.com/CVHub520/X-AnyLabeling.
Wang, G. et al. M4SFWD: A Multi-Faceted synthetic dataset for remote sensing forest wildfires detection. Expert Systems with Applications 248, 123489, https://doi.org/10.1016/j.eswa.2024.123489 (2024).
Liu, Y. et al. MMFW-UAV dataset: multi-sensor and multi-view fixed-wing UAV dataset for air-to-air vision tasks. Scientific Data 12(1), 185, https://doi.org/10.1038/s41597-025-04482-2 (2025).
Li, H. et al. PanoCycle360: A Cyclist-Centric 360° Panoramic Dataset for Safety-Critical Object Detection in Real-World Cycling Scenarios. https://doi.org/10.5281/zenodo.18993870 (2025).
Varghese, R. & Sambath, M. YOLOv8: A novel object detection algorithm with enhanced performance and robustness. In 2024 International Conference on Advances in Data Engineering and Intelligent Computing Systems (ADICS). IEEE, https://doi.org/10.1109/ADICS58448.2024.10533619 (2024).
Wang, C., Yeh, I. H. & Mark Liao, H. Y. YOLOv9: Learning what you want to learn using programmable gradient information. In European Conference on Computer Vision. Cham: Springer Nature Switzerland, https://doi.org/10.1007/978-3-031-72751-1_1 (2024).
Wang, A. et al. YOLOv10: Real-time end-to-end object detection. Advances in Neural Information Processing Systems 37, 107984–108011 (2024).
Khanam, R. & Hussain, M. YOLOv11: An overview of the key architectural enhancements. Preprint at https://arxiv.org/abs/2410.17725 (2024).
Tian, Y., Ye, Q. & Doermann, D. YOLOv12: Attention-centric real-time object detectors. Preprint at https://arxiv.org/abs/2502.12524 (2025).
Ren, S. et al. Faster R-CNN: Towards real-time object detection with region proposal networks. IEEE Transactions on Pattern Analysis and Machine Intelligence 39(6), 1137–1149, https://doi.org/10.1109/TPAMI.2016.2577031 (2016).
Sun, P. et al. Sparse R-CNN: End-to-end object detection with learnable proposals. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, https://doi.org/10.1109/TPAMI.2023.3292030 (2021).
Cai, Z. & Vasconcelos, N. Cascade R-CNN: Delving into high quality object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, https://doi.org/10.1109/TPAMI.2019.2956516 (2018).
Duan, K. et al. CenterNet: Keypoint triplets for object detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, https://doi.org/10.1109/ICCV.2019.00667 (2019).
Lin, T. Y. et al. Focal loss for dense object detection. In Proceedings of the IEEE International Conference on Computer Vision, https://doi.org/10.1109/TPAMI.2018.2858826 (2017).
Zhu, X. et al. Deformable DETR: Deformable transformers for end-to-end object detection. Preprint at https://arxiv.org/abs/2010.04159 (2020).
Meng, D. et al. Conditional DETR for fast training convergence. In Proceedings of the IEEE/CVF International Conference on Computer Vision, https://doi.org/10.1109/ICCV48922.2021.00363 (2021).
Zhang, H. et al. DINO: DETR with improved denoising anchor boxes for end-to-end object detection. Preprint at https://arxiv.org/abs/2203.03605 (2022).
Zhao, Y. et al. DETRs beat YOLOs on real-time object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 16965–16974, https://doi.org/10.48550/arXiv.2304.08069 (2024).
Glenn, J. et al. Ultralytics YOLO. 2025. Available online: https://github.com/ultralytics/ultralytics.
Chen, K. et al. MMDetection: OpenMMLab detection toolbox and benchmark. Preprint at https://doi.org/10.48550/arXiv.1906.07155 (2019).
Jin, C. & Chen, X. An end-to-end framework combining time-frequency expert knowledge and modified transformer networks for vibration signal classification. Expert Systems with Applications 171, 114570, https://doi.org/10.1016/j.eswa.2021.114570 (2021).
Islam, S. et al. A comprehensive survey on applications of transformers for deep learning tasks. Expert Systems with Applications 241, 122666, https://doi.org/10.1016/j.eswa.2023.122666 (2024).
Author information
Authors and Affiliations
Contributions
H.L. conceptualized, designed, and coordinated the study. L.C. drafted the manuscript. L.C. and Z.W. collected the data. L.C., J.M. and K.X. supervised the object detection experiments. L.C., Z.W., J.M. and R.X. were responsible for data labeling. H.L. and K.X. supervised the labeling process.
Corresponding author
Ethics declarations
Competing interests
The authors declare no competing interests.
Additional information
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.
About this article
Cite this article
Li, H., Chen, L., Wang, Z. et al. A cyclist-centric 360° panoramic dataset for safety-critical object detection in real-world cycling scenarios. Sci Data (2026). https://doi.org/10.1038/s41597-026-07128-z
Received:
Accepted:
Published:
DOI: https://doi.org/10.1038/s41597-026-07128-z