  • Perspective

Interactive wearable digital devices for blind and partially sighted people

Abstract

Digital information has permeated all aspects of life, and diverse forms of information exert a profound influence on social interactions and cognitive perceptions. In contrast to the flourishing of digital interaction devices for sighted users, the corresponding needs of blind and partially sighted users have not been adequately addressed. Current assistive devices often frustrate blind and partially sighted users owing to the limited efficiency and reliability of information delivery and the high cognitive load associated with their use. The expected rise in the prevalence of blindness and visual impairment as the global population ages creates an urgent need for assistive devices that deliver information effectively and non-visually, thereby overcoming the challenges faced by this community. This Perspective presents three potential directions in assistive device design: multisensory learning and integration; gestural interaction control; and the synchronization of tactile feedback with large-scale visual language models. Future trends in assistive devices for blind and partially sighted people are also explored, focusing on metrics for text delivery efficiency and the enhancement of image content delivery. Such devices promise to greatly enrich the lives of blind and partially sighted individuals in the digital age.
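To make the third direction concrete, the sketch below shows one way tactile output could be paced against text produced by a visual language model: a caption string (standing in for model output) is rendered character by character as braille dot patterns at a target words-per-minute rate. This is a minimal illustration under stated assumptions, not a description of any device discussed in the article: the partial braille table, the `render_cell` placeholder and the 90 words-per-minute default are all choices made for the example.

```python
import time

# Partial Grade 1 braille map: letter -> raised-dot numbers (1-6).
# Only the letters needed for the demo caption are included.
BRAILLE_DOTS = {
    "a": (1,), "d": (1, 4, 5), "e": (1, 5), "g": (1, 2, 4, 5),
    "o": (1, 3, 5), "t": (2, 3, 4, 5),
}

def render_cell(char: str) -> None:
    """Placeholder for a refreshable-braille actuator driver; prints instead."""
    dots = BRAILLE_DOTS.get(char.lower(), ())
    print(f"{char}: dots {dots}")

def deliver(text: str, words_per_minute: float = 90.0) -> None:
    """Pace word-by-word tactile delivery at a target text-delivery rate."""
    seconds_per_word = 60.0 / words_per_minute
    for word in text.split():
        for ch in word:
            render_cell(ch)
        time.sleep(seconds_per_word)  # crude pacing; a real device syncs per cell

# 'caption' stands in for a description generated by a visual language model.
caption = "a dog at a gate"
deliver(caption, words_per_minute=90.0)
```

A real system would replace the `print` call with actuator commands and adapt the delivery rate to the individual reader rather than using a fixed default.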

Fig. 1: Current wearable assistive devices for blind and partially sighted people.
Fig. 2: Current methods of interaction with digital information used by blind and partially sighted people.
Fig. 3: Cognitive loads of assistive technologies used by blind and partially sighted people.
Fig. 4: Reading efficiency of assistive technologies.
Fig. 5: Types of image information content presented by assistive wearable devices.

Acknowledgements

Y.M. acknowledges financial support from the Hong Kong Research Grants Council (ECS 25228722). X.M. acknowledges funding from MTC Young Individual Research Grants (YIRG M23M7c0129).

Author information

Contributions

W.X. and Y.R. researched data for the article. W.X., Y.R., Y.T., Z.G., X.M., J.Z. and Y.M. contributed to the writing and editing of the manuscript. Y.T., X.M., Z.W., H.X., T.Z., Z.C., X.D. and Y.M. contributed to the review of the manuscript before submission. All authors contributed substantially to discussion of the article's content.

Corresponding authors

Correspondence to Xuezhi Ma (马学智) or Yuan Ma (马源).

Ethics declarations

Competing interests

The authors declare no competing interests.

Peer review

Peer review information

Nature Reviews Electrical Engineering thanks Markus Heß, Junwen Zhong and the other, anonymous, reviewer(s) for their contribution to the peer review of this work.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Related links

Be My Eyes: https://www.bemyeyes.com/

OrCam MyEye 3 Pro: https://www.orcam.com/en-us/orcam-myeye-3-pro

Ray-Ban Meta AI glasses: https://www.meta.com/ai-glasses/

Seeing AI: https://www.seeingai.com/

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Xue, W., Ren, Y., Tang, Y. et al. Interactive wearable digital devices for blind and partially sighted people. Nat Rev Electr Eng 2, 425–439 (2025). https://doi.org/10.1038/s44287-025-00170-w
