Fig. 4: Prediction of ASL using the GenENet device. | Nature Sensors

From: A simplified wearable device powered by a generative EMG network for hand-gesture recognition and gait prediction

a, Sign-language input signals captured by the six-channel device. b, Post-processing steps identical to those used in pre-training, excluding data augmentation. c, Post-processed tensors are fed into GenENet, which is connected to a CNN, an LSTM and a dense layer. The dashed lines to the decoder and CNN are activated only for regression modelling. d, Classification of sign-language gestures. e, FOM obtained by balancing model accuracy against total sensor area. f, The FOM peaks in the six-channel region, where increasing the channel count enhances accuracy but also enlarges the sensor area. g, Validation-accuracy comparison between the pre-trained GenENet and the non-parameterized GenENet using six-electrode EMG array measurements for finger-motion recognition. The dataset is divided into training and validation sets at a ratio of 8:2. h, Adaptability of the device to different locations and orientations on the wrist, showing negligible accuracy differences. L1–L7 indicate the location and orientation of the electrode-array attachment. i, Sign-language prediction using numeric values (0–25, that is, 0 for A and 25 for Z) from six-channel EMG inputs. The red plot represents the numeric values predicted from the EMG input signals, with the corresponding letters labelled on top. j, Batch attribution maps for the representative letters A, N and R, with corresponding EMG signals and attribution maps. ‘−1’ indicates a negative contribution to the prediction and ‘1’ a positive contribution. Each row in the signal corresponds to one of the six sensor channels, with channel 1 at the bottom and channel 6 at the top.
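The labelling scheme in panel i (letters encoded as 0 for A through 25 for Z) and the 8:2 train/validation split in panel g can be sketched as follows. This is a minimal illustration of the described data handling, not the authors' code; the function names are hypothetical.

```python
import string

# Panel i labelling scheme (assumed): each ASL letter maps to a
# numeric regression target, 0 for A up to 25 for Z.
letter_to_label = {ch: i for i, ch in enumerate(string.ascii_uppercase)}
label_to_letter = {i: ch for ch, i in letter_to_label.items()}

def split_dataset(samples, train_ratio=0.8):
    """Split samples into training and validation sets at the stated 8:2 ratio.
    (Hypothetical helper; the paper does not specify the split procedure.)"""
    cut = int(len(samples) * train_ratio)
    return samples[:cut], samples[cut:]

# Example: 100 samples -> 80 for training, 20 for validation.
train, val = split_dataset(list(range(100)))
```

A predicted numeric value from the regression head would then be rounded and looked up in `label_to_letter` to recover the letter, as plotted in panel i.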