Scientific Data
A cyclist-centric 360° panoramic dataset for safety-critical object detection in real-world cycling scenarios
  • Data Descriptor
  • Open access
  • Published: 02 April 2026


  • Han Li1,
  • Liangfeng Chen1,
  • Zheng Wang1,
  • Jinyu Ma1,
  • Ruiqi Xu1 &
  • Kun Xia1

Scientific Data (2026)

We are providing an unedited version of this manuscript to give early access to its findings. Before final publication, the manuscript will undergo further editing. Please note there may be errors present which affect the content, and all legal disclaimers apply.

Abstract

Cycling safety is an increasingly significant challenge for urban transportation, and vision-based object detection for cycling scenes has been widely studied. Datasets are an essential component of this research. However, existing datasets often suffer from limited scene diversity and incomplete coverage of safety-critical objects. Most critically, they are not captured from the perspective of a real cyclist, and their limited field of view cannot cover blind-spot risks. To address these limitations, we introduce PanoCycle360, a novel 360° panoramic dataset for complex cycling scenarios. The data are collected with a 360° panoramic camera mounted on the cyclist's helmet, providing full 360° coverage that eliminates blind spots and encompasses the common safety-critical objects found in real-world cycling. The dataset comprises 10,055 manually annotated panoramic images covering nine high-risk classes (E-Bike Rider, Person, Car, Van, Bus, Truck, Cyclist, Cargo Tricycle, and Auto Rickshaw), with 102,171 bounding boxes in total. We evaluated PanoCycle360's applicability in single-stage, two-stage, and Transformer-based object detection frameworks across algorithms of different parameter scales; the experimental results show that PanoCycle360 enables reliable evaluation across multiple scenarios. PanoCycle360 thus supports cyclist-centric safety research and the development of safety-critical object detection systems for real-world cycling scenarios worldwide.

Data availability

The PanoCycle360 dataset is openly available on Zenodo (https://doi.org/10.5281/zenodo.18993870).
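Once the archive is downloaded, the per-class annotation totals reported in the abstract can be reproduced with a short tally script. The sketch below assumes YOLO-style plain-text labels (one `class_id x_center y_center width height` row per box) and a flat directory of `.txt` files; the dataset's actual annotation layout is documented in the Zenodo record and may differ.

```python
from collections import Counter
from pathlib import Path

# The nine safety-critical classes annotated in PanoCycle360,
# assumed here to map to class ids 0-8 in file order.
CLASSES = [
    "E-Bike Rider", "Person", "Car", "Van", "Bus",
    "Truck", "Cyclist", "Cargo Tricycle", "Auto Rickshaw",
]

def count_boxes(label_dir: str) -> Counter:
    """Tally bounding boxes per class across YOLO-style label files
    (one `class_id x_center y_center width height` row per box)."""
    counts = Counter()
    for txt in Path(label_dir).glob("*.txt"):
        for line in txt.read_text().splitlines():
            if line.strip():
                class_id = int(line.split()[0])
                counts[CLASSES[class_id]] += 1
    return counts
```

Summing the resulting counter over all 10,055 label files should recover the 102,171-box total, provided the format assumption above holds.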

Code availability

The trained model weights and the code used for technical validation are available at https://github.com/Feng-LChen/PanoCycle360. Additionally, a reference script for video frame extraction is provided in the repository.
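The repository's reference script is the authoritative tool for frame extraction. As a rough sketch of the sampling logic such a script needs, the helper below computes which frame indices to keep; the function name and parameters are illustrative, not taken from the repository.

```python
def sample_indices(total_frames: int, fps: float, interval_s: float) -> list[int]:
    """Indices of the frames to keep when sampling one frame every
    `interval_s` seconds from a clip recorded at `fps` frames per second."""
    step = max(1, round(fps * interval_s))  # frames between consecutive samples
    return list(range(0, total_frames, step))

# A video reader (for example OpenCV's cv2.VideoCapture) would then decode
# and save only the frames at these indices.
```

For a 30 fps clip of 9,000 frames sampled once per second, this yields 300 indices (0, 30, 60, ...).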


Author information

Authors and Affiliations

  1. University of Shanghai for Science and Technology, Shanghai, 200093, China

    Han Li, Liangfeng Chen, Zheng Wang, Jinyu Ma, Ruiqi Xu & Kun Xia


Contributions

H.L. conceptualized, designed, and coordinated the study. L.C. drafted the manuscript. L.C. and Z.W. collected the data. L.C., J.M. and K.X. supervised the object detection experiments. L.C., Z.W., J.M. and R.X. were responsible for data labeling. H.L. and K.X. supervised the labeling process.

Corresponding author

Correspondence to Kun Xia.

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.


About this article


Cite this article

Li, H., Chen, L., Wang, Z. et al. A cyclist-centric 360° panoramic dataset for safety-critical object detection in real-world cycling scenarios. Sci Data (2026). https://doi.org/10.1038/s41597-026-07128-z

Download citation

  • Received: 22 October 2025

  • Accepted: 25 March 2026

  • Published: 02 April 2026

  • DOI: https://doi.org/10.1038/s41597-026-07128-z

