Fig. 4: Identifying human hand motions from detected skin deformation. | Nature Communications

From: A deep-learned skin sensor decoding the epicentral human motions

a Depiction of skin deformations for different finger bending motions. b Metric space defining single-finger bending motions: the physical arrangement of the fingers in a hand is expressed in the metric space, with R representing how far a finger is bent and θ identifying the position of that finger in the hand. c The neural network is composed of an encoding network and a decoding network. LSTM layers in the encoding network analyze temporal sensor patterns to generate latent vectors. Two independent dense layers map the latent vectors to our metric space expressing hand motions. Dropout is used as a regularization technique to prevent the network from overfitting to a single use case. d 2D PCA illustration of the output vectors produced by the encoding network. Each circular cluster demonstrates that the encoding network correctly identifies cyclic finger motions from sequential sensor inputs. e Illustration of how sensor inputs in the training dataset are mapped to the metric space after passing through our network. f The process of rapid situation learning (RSL), which uses transfer learning. When the sensor is attached to a new position and a small amount of retraining data is collected, the new network reuses knowledge learned during pretraining by transferring parameters from the pretrained network, reducing both the amount of data and the time required for retraining. g Photograph of actual hand-motion generation.
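The polar metric space of panel (b) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the angular positions assigned to each finger and the function name are hypothetical, chosen only to show how θ (finger identity/position) and R (amount of bending) jointly define a point.

```python
import math

# Hypothetical angular positions for each finger (radians); the paper's
# actual θ assignments are not specified here, so these are illustrative.
FINGER_THETA = {
    "thumb": 0.0,
    "index": math.pi / 4,
    "middle": math.pi / 2,
    "ring": 3 * math.pi / 4,
    "little": math.pi,
}

def motion_to_point(finger: str, bend_fraction: float) -> tuple[float, float]:
    """Map a single-finger bending motion to a 2D point in the metric space.

    bend_fraction in [0, 1]: 0 = fully extended, 1 = fully bent.
    θ selects the finger's direction; R = bend_fraction sets the distance.
    """
    theta = FINGER_THETA[finger]
    r = bend_fraction
    return (r * math.cos(theta), r * math.sin(theta))

# A half-bent index finger lands halfway along the index finger's direction.
x, y = motion_to_point("index", 0.5)
```

A fully extended finger of any identity maps to the origin, which is consistent with the circular clusters in panel (d) sharing a common rest state.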
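The 2D PCA view of panel (d) can be reproduced generically with an SVD-based projection. This is a standard-technique sketch on synthetic data, assuming only that the encoding network emits fixed-length latent vectors; the dimensionality and data below are made up for illustration.

```python
import numpy as np

def pca_2d(latents: np.ndarray) -> np.ndarray:
    """Project latent vectors onto their first two principal components,
    as in the 2D PCA illustration of panel (d)."""
    X = latents - latents.mean(axis=0)
    # SVD of the centered data: rows of Vt are the principal directions,
    # ordered by decreasing explained variance.
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:2].T

# Demo: project 100 synthetic 8-dimensional "latent vectors" to 2D.
rng = np.random.default_rng(0)
latents = rng.normal(size=(100, 8))
points_2d = pca_2d(latents)
```

In the paper's figure, plotting such 2D projections over a cyclic finger motion traces out the circular clusters described in the caption.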
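The rapid situation learning (RSL) procedure of panel (f) amounts to parameter transfer followed by limited retraining. The sketch below is a toy stand-in, not the paper's code: the parameter names, shapes, and the choice to retrain only the decoding head are assumptions used to show the mechanics of reusing pretrained weights for a new sensor position.

```python
import numpy as np

rng = np.random.default_rng(1)

def init_params(n_in: int, n_hidden: int, n_out: int) -> dict:
    """Toy stand-in for the pretrained network's parameters: an 'encoder'
    weight matrix plus a small decoding head (names are hypothetical)."""
    return {
        "encoder_W": rng.normal(size=(n_in, n_hidden)),
        "head_W": rng.normal(size=(n_hidden, n_out)),
    }

def rapid_situation_learning(pretrained: dict, retrain_head_only: bool = True):
    """Sketch of RSL via transfer learning: copy the pretrained parameters
    into a new network for the new sensor position, and list which
    parameters the small retraining dataset will update."""
    new_params = {name: w.copy() for name, w in pretrained.items()}
    trainable = ["head_W"] if retrain_head_only else list(new_params)
    return new_params, trainable

pretrained = init_params(n_in=4, n_hidden=16, n_out=2)
new_params, trainable = rapid_situation_learning(pretrained)
```

Because the new network starts from the pretrained weights rather than from scratch, only the (here, much smaller) trainable subset needs fitting, which is what reduces the retraining data and time in the caption's description.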