Gesture recognition technology is used in many fields, such as computer-assisted sign language teaching, remote control of robots, gaming and entertainment, and military and medical research; it also helps improve the quality of life of deaf and hard-of-hearing people and supports their communication with others. Current mainstream gesture recognition methods fall into two main categories: vision-based methods and wearable-sensor-based methods. Vision-based gesture recognition does not require the user to wear any equipment, but its results are easily affected by environmental factors such as light intensity, complex backgrounds, and occlusion of the viewing angle.
Wearable-sensor-based gesture recognition methods mainly rely on hardware devices such as data gloves for gesture input, through which a computer can obtain information such as the pose of the hand in space and the extension of the fingers.
The team of Jinquan Li at Beipiao University has designed a new method that uses hand gestures as character input for virtual keyboards. The method is a noninvasive, end-to-end continuous dynamic gesture recognition system that combines inertial measurement unit (IMU) signals and surface electromyography (sEMG) signals. After the IMU and sEMG signals are preprocessed and fused, gesture features including acceleration, angular velocity, posture quaternion, and sEMG information are extracted. The team built a network model based on a bidirectional long short-term memory (BiLSTM) network with connectionist temporal classification (CTC) as the loss function, which avoids the detrimental effect of inaccurate pre-segmentation of gesture sequences on continuous dynamic gesture recognition and realizes end-to-end recognition.
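The key idea behind CTC is that the network emits one label (or a "blank") per frame, and the final gesture sequence is recovered by collapsing repeats and dropping blanks, so no pre-segmentation of the signal stream is needed. A minimal sketch of that decoding step, assuming per-frame argmax labels from a trained network (the blank index and gesture labels here are illustrative placeholders, not the paper's configuration):

```python
BLANK = 0  # CTC reserves one class index for the "blank" symbol

def ctc_greedy_decode(frame_labels):
    """Collapse repeated per-frame labels and drop blanks, recovering a
    gesture sequence without pre-segmenting the continuous signal."""
    decoded = []
    prev = None
    for lab in frame_labels:
        if lab != prev and lab != BLANK:
            decoded.append(lab)
        prev = lab
    return decoded

# Hypothetical per-frame output for a stream containing gestures 2 then 5:
print(ctc_greedy_decode([0, 2, 2, 0, 0, 5, 5, 5, 0]))  # [2, 5]
```

The blank symbol also lets CTC distinguish a repeated gesture (e.g. "2, blank, 2") from one long gesture held across many frames.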
The team used the MYO wristband as a hardware device for collecting gesture data. The MYO wristband integrates sEMG and IMU sensors and transmits the collected gesture signals to a host computer via Bluetooth.
Figure: MYO wristband wearing position
The collected data is first processed by the MYO wristband's internal filter; mean filtering is then applied to further suppress noise in the signal; finally, IMU and sEMG features are extracted and combined into a new feature matrix.
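The preprocessing step above can be sketched as a sliding mean filter applied per channel, followed by concatenation of the IMU and sEMG features into one fused matrix. The window size and channel counts below are illustrative assumptions, not the paper's actual values:

```python
import numpy as np

def mean_filter(signal, window=5):
    """Moving-average filter applied to each channel (same length out).
    `window` is an assumed parameter, not taken from the paper."""
    kernel = np.ones(window) / window
    return np.apply_along_axis(
        lambda ch: np.convolve(ch, kernel, mode="same"), 0, signal)

T = 100                       # number of time frames (illustrative)
imu = np.random.randn(T, 10)  # accel(3) + gyro(3) + quaternion(4)
emg = np.random.randn(T, 8)   # 8 sEMG channels (MYO has 8 electrodes)

# Fuse the filtered streams into a single feature matrix per frame
fused = np.hstack([mean_filter(imu), mean_filter(emg)])
print(fused.shape)  # (100, 18)
```

Each row of the fused matrix is then one time step fed to the recurrent network.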
The team has designed a neural network combining BiLSTM and CTC structures for gesture training and recognition.
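The "bidirectional" part of BiLSTM means the sequence is processed once forward and once backward, and the two hidden states are concatenated per frame, so each frame's representation sees both past and future context. A toy illustration with a plain tanh RNN (a real LSTM adds gating; all sizes and weights here are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
F, H = 18, 4                        # fused feature dim, hidden size (assumed)
Wx = rng.normal(size=(F, H)) * 0.1  # input-to-hidden weights
Wh = rng.normal(size=(H, H)) * 0.1  # hidden-to-hidden weights

def rnn_pass(x):
    """One directional pass of a simple tanh RNN over a (T, F) sequence."""
    h = np.zeros(H)
    out = []
    for frame in x:
        h = np.tanh(frame @ Wx + h @ Wh)
        out.append(h)
    return np.array(out)

x = rng.normal(size=(30, F))             # a 30-frame fused feature sequence
h_fwd = rnn_pass(x)                      # forward direction
h_bwd = rnn_pass(x[::-1])[::-1]          # backward direction, re-reversed
bi = np.concatenate([h_fwd, h_bwd], axis=1)
print(bi.shape)  # (30, 8): each frame carries context from both directions
```

In the paper's system, these per-frame outputs would feed a classification layer whose frame-wise probabilities are trained with the CTC loss.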
Figure: Network structure of the end-to-end continuous gesture recognition system
The experimental results show that IMU and sEMG signals are complementary in the continuous dynamic gesture recognition task: using their fused features for gesture classification improves both recognition quality and recognition rate, and the average recognition rate of continuous dynamic gestures for independent users reaches 98.66%.
Link to paper: https://ieeexplore.ieee.org/document/9911614/