Fig. 4: Tactile-visual crossmodal recognition.
From: Bioinspired multisensory neural network with crossmodal integration and recognition

a Illustration of the human ability to recognize and visualize tactile input. b Schematic of the artificial tactile-visual system. Tactile inputs from an array of 5 × 5 pressure sensors are dimensionally reduced to five data streams (one photomemristor per five sensors). The visual data stream consists of 25 channels. The ANN consists of five input, 13 hidden, and 25 output neurons. c Vision memory (photomemristor PSC states) recorded after projecting optical images of the alphabet letters A–Z onto an array of 5 × 5 photodetectors for 2 s. The vision memory supervises training of the ANN with tactile inputs. d Images, vision memory, and handwritten tactile inputs of the alphabet letters A–Z. The fourth and eighth rows show the images of alphabet letters recognized and reproduced from handwritten inputs after ten training epochs. e Summary of the reproduced vision vectors. The data correspond closely to the vision memory shown in (c), demonstrating tactile-visual sensory integration and crossmodal recognition.
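The 5-input, 13-hidden, 25-output architecture described in panel b can be sketched as a plain feedforward network trained by backpropagation, with the vision-memory vectors acting as supervision targets for the tactile inputs. The sketch below is an illustrative assumption, not the paper's actual training scheme: the data are random stand-ins for the tactile streams and vision memory, and the sigmoid activations, learning rate, and per-sample update rule are all hypothetical choices.

```python
import numpy as np

# Hypothetical sketch of the 5-13-25 ANN from panel b: five tactile
# input streams (25 pressure sensors reduced to 5 channels), 13 hidden
# neurons, and 25 outputs matching the 5 x 5 vision-memory vector.
rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class TactileVisualANN:
    def __init__(self, n_in=5, n_hidden=13, n_out=25, lr=0.5):
        self.W1 = rng.normal(0.0, 0.5, (n_in, n_hidden))
        self.W2 = rng.normal(0.0, 0.5, (n_hidden, n_out))
        self.lr = lr

    def forward(self, x):
        self.h = sigmoid(x @ self.W1)   # hidden layer (13,)
        self.y = sigmoid(self.h @ self.W2)  # reproduced vision vector (25,)
        return self.y

    def train_step(self, x, target):
        # The vision-memory vector (target) supervises the tactile input x.
        y = self.forward(x)
        d_out = (y - target) * y * (1.0 - y)
        d_hid = (d_out @ self.W2.T) * self.h * (1.0 - self.h)
        self.W2 -= self.lr * np.outer(self.h, d_out)
        self.W1 -= self.lr * np.outer(x, d_hid)
        return float(np.mean((y - target) ** 2))

# Toy stand-in data: 26 tactile vectors (one per letter A-Z) paired
# with 25-element vision-memory targets.
tactile = rng.random((26, 5))
vision_memory = rng.random((26, 25))

net = TactileVisualANN()
for epoch in range(10):  # ten training epochs, as in panel d
    errors = [net.train_step(x, t) for x, t in zip(tactile, vision_memory)]
print(f"mean squared error after 10 epochs: {np.mean(errors):.4f}")
```

After training, `net.forward(tactile[i])` yields a 25-element vector that would be reshaped to 5 × 5 and compared against the stored vision memory, which is the role panel e's reproduced vision vectors play in the figure.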