

Real-time rodent behavior classifier using color-based body segmentation (R2C2)

Abstract

The heavy reliance of animal studies on human observers makes them prone to observer-specific biases. To mitigate such shortcomings, much research has focused on computerizing the characterization of animal behavior. Such automation can lead to more reliable and cost-effective behavior quantification. Yet challenges remain in developing end-to-end solutions that allow users to easily train custom behavioral classifiers with minimal data while maintaining low computational demands. Here we resolve these challenges with the real-time rodent behavior classifier using color-based body segmentation (R2C2), an algorithm that uses color-based body segmentation to track rodent body parts and, consequently, their behaviors. Based on the ‘hue, saturation, value’ (HSV) color difference in fur or exposed skin, R2C2 creates simple white–black color boundaries for each body part, which are then used to discern and track body parts in real time and extract movement-based features. We combined wavelet transform-based tracking with HSV color-based body part segmentation to substantially reduce computational requirements while minimizing the number of input features needed for classification. Feeding these features into our convolutional neural network, R2C2 achieves performance on par with an expert human observer. Furthermore, it can differentiate subtle behavioral patterns associated with autism spectrum disorder in mouse models. Because R2C2 is a complete, lightweight end-to-end pipeline with a graphical user interface and requires neither end-user programming nor heavy computational resources, it can be easily adopted in conventional neuroscience laboratories. By enabling effective auto-labeling of fine animal actions, R2C2 will facilitate studies aiming to uncover the neural mechanisms driving behavioral modulation.
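For readers unfamiliar with HSV thresholding, the segmentation step described above can be sketched in a few lines. This is an illustrative reimplementation in plain NumPy (R2C2 itself is built on OpenCV, whose `cv2.inRange` performs the same operation); the HSV ranges and the synthetic frame below are hypothetical examples, not values from the paper.

```python
import numpy as np

def hsv_mask(hsv_img, lower, upper):
    """Binary mask: 255 where every HSV channel falls inside [lower, upper],
    0 elsewhere (the white-black boundary described in the abstract)."""
    inside = np.all((hsv_img >= np.asarray(lower)) &
                    (hsv_img <= np.asarray(upper)), axis=-1)
    return (inside * 255).astype(np.uint8)

def centroid(mask):
    """Centroid (row, col) of the white pixels; tracking a body part
    reduces to following this point frame by frame."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None
    return float(ys.mean()), float(xs.mean())

# Tiny synthetic HSV 'frame': a 2x2 patch of ear-colored pixels on a dark background.
frame = np.zeros((8, 8, 3), dtype=np.uint8)
frame[2:4, 5:7] = (15, 200, 220)           # hypothetical ear hue/sat/val

mask = hsv_mask(frame, lower=(10, 150, 150), upper=(20, 255, 255))
print(centroid(mask))                      # → (2.5, 5.5)
```

Because the mask is a plain binary image, per-frame tracking reduces to a handful of array operations, which is what keeps the computational load low.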


Fig. 1: Overview of R2C2.
Fig. 2: Operational sequence of the R2C2 method.
Fig. 3: Body part segmentation by HSV color space.
Fig. 4: Robustness of R2C2 tracking in various environments.
Fig. 5: Feature extraction from R2C2 tracking.
Fig. 6: Identification of distinct behavioral patterns by primary and wavelet features.
Fig. 7: R2C2 performance for behavior classification.
Fig. 8: Comparison of structures with grouped behavioral motifs.
Fig. 9: Analyzing distribution to verify hidden features of natural behaviors.
Fig. 10: Latency of all steps in the R2C2.


Data availability

The data that support the findings of this study are available from the corresponding authors upon request. Source data are provided with this paper.

Code availability

Executable code, GUI software, a user manual and input data with a sample video are available via GitHub at https://github.com/KIST-BSI/R2C2.git. To help users who wish to modify the source code, the accompanying ‘README.md’ file provides detailed guidance on the code structure.


Acknowledgements

We thank H. S. Choi for discussion and valuable comments and G. Feng for providing SHANK3 TG mice. This work was supported by National Research Foundation of Korea (NRF) grants funded by the Korean government (MSIT) (grant nos. 2019M3E5D2A01058329, RS-2023-00208692, RS-2024-00398768 and RS-2024-00444714) and by Korea Institute of Science and Technology institutional programs (grant nos. 2E29222, 2E33711, 2E33681 and 2E33731).

Author information

Contributions

J. Kim and C.L. conceived and supervised all aspects of the study. J. Kim, C.L., J.A.J. and T.P. contributed to algorithm development and designed the experiments. J.A.J., T.P., S.H., S.L. and J.M. performed experiments and analyzed and interpreted the data. G.P. performed animal behavioral labeling and statistical analysis. M.S., J. Kwag and H.K. provided technical consultation for behavioral classification and computation tools and supervised deep learning and model implementation. All authors participated in writing or reviewing the paper and approved the final version.

Corresponding authors

Correspondence to Changhyuk Lee or Jeongjin Kim.

Ethics declarations

Competing interests

The authors declare no competing interests.

Peer review

Peer review information

Lab Animal thanks Alvaro Rodriguez, Eric Yttri and the other, anonymous, reviewer(s) for their contribution to the peer review of this work.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Supplementary Information (PDF)

Supplementary Figs. 1–9 and Table 1.

Reporting Summary (PDF)

Supplementary Video 1 (MP4)

Representative video illustrating the R2C2 GUI workflow. The video demonstrates defining HSV ranges for ‘body’ and ‘ear’ binary masks, extracting primary and wavelet features and performing behavior classification using feature files and 1D-CNN models.
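The ‘wavelet features’ mentioned in the caption can be illustrated with a minimal sketch: one level of a Haar discrete wavelet transform applied to a body-center coordinate trace separates slow drift from fast oscillatory movement, and the energy of the detail coefficients yields a compact feature for the classifier. This is NumPy-only illustration; the wavelet family, decomposition depth and exact features used by R2C2 are assumptions here, not taken from the paper.

```python
import numpy as np

def haar_level(signal):
    """One level of the Haar discrete wavelet transform: returns the
    approximation (slow-movement) and detail (fast-movement) coefficients."""
    s = np.asarray(signal, dtype=float)
    if len(s) % 2:
        s = s[:-1]                        # Haar pairs samples; drop odd tail
    pairs = s.reshape(-1, 2)
    approx = pairs.sum(axis=1) / np.sqrt(2)
    detail = (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2)
    return approx, detail

# Synthetic x-coordinate of the body center: slow drift plus fast tremor.
t = np.arange(64)
x = 0.5 * t + 2.0 * np.sin(2 * np.pi * t / 4)

approx, detail = haar_level(x)
# Detail-coefficient energy summarizes fast movement in a single number.
fast_energy = float(np.sum(detail ** 2))
print(round(fast_energy, 2))              # → 68.0
```

A trace dominated by slow locomotion would concentrate its energy in the approximation coefficients instead, which is what lets wavelet features discriminate movement types.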

Supplementary Video 2 (MP4)

Representative video of body and ear tracking in R2C2. Trajectories indicate the body center (ROI centroid) and ear centroid.

Supplementary Video 3 (MP4)

Representative video of behavioral categories. The video shows four labeled behaviors—grooming, rearing, walking and sniffing—with all others grouped as ‘Other’. This categorization was used to train the 1D-CNN model.
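The 1D-CNN referred to in the caption can be pictured as a convolution sliding along the per-frame feature sequence. The sketch below shows the forward pass of a single 1D convolutional layer with ReLU and global average pooling in plain NumPy; the architecture, filter sizes and weights are hypothetical placeholders, not the trained R2C2 model.

```python
import numpy as np

def conv1d(x, kernels, bias):
    """Valid-mode 1D convolution over a (time, channels) feature sequence.
    kernels has shape (n_filters, width, channels)."""
    n_f, w, _ = kernels.shape
    t_out = x.shape[0] - w + 1
    out = np.empty((t_out, n_f))
    for i in range(t_out):
        window = x[i:i + w]                              # (width, channels)
        out[i] = np.tensordot(kernels, window,
                              axes=([1, 2], [0, 1])) + bias
    return out

rng = np.random.default_rng(0)
features = rng.normal(size=(30, 4))       # 30 frames x 4 movement features
kernels = rng.normal(size=(8, 5, 4))      # 8 filters of temporal width 5
bias = np.zeros(8)

h = np.maximum(conv1d(features, kernels, bias), 0)       # ReLU activation
pooled = h.mean(axis=0)                                  # global average pooling
print(pooled.shape)                                      # → (8,)
```

In a full classifier, the pooled vector would feed a small dense layer that outputs scores for the five classes (grooming, rearing, walking, sniffing, Other).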

Source data

Source Data Fig. 4 (XLS)

Source data for graphs.

Source Data Fig. 5 (XLS)

Source data for visualizing features.

Source Data Fig. 6 (XLS)

Source data for visualizing features.

Source Data Fig. 7 (XLS)

Source data for graphs.

Source Data Fig. 8 (XLS)

Source data for graphs.

Source Data Fig. 9 (XLSX)

Source data for graphs.

Source Data Fig. 10 (XLS)

Source data for graphs.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Jo, J.A., Park, T., Hwang, S. et al. Real-time rodent behavior classifier using color-based body segmentation (R2C2). Lab Anim 54, 321–334 (2025). https://doi.org/10.1038/s41684-025-01634-0

