As technology advances, human-computer interaction is finding more and more commercial applications. Faced with increasingly complex interaction scenarios, gesture recognition has gradually become the primary interaction method for virtual reality and related applications.
3D gesture recognition remains a challenging problem. Commonly used gesture sensors fall into three basic types: multi-touch screen sensors, vision-based sensors, and mounted (wearable) sensors. While recognition accuracy for simple gestures is now high, multi-feature 3D gesture recognition that captures both hand-pose and positional information is still immature.
To realize multi-feature 3D pen interaction that carries the additional information needed for more complex scenarios, 3D gesture recognition must cover trajectory shape, motion direction, and pen pose. Researchers at Jilin University proposed a 3D gesture recognition method based on an IMU and ultrasonic localization, which uses multi-channel data such as acceleration to describe 3D gesture attributes. The method effectively recognizes both pen position and pose, addressing the inability of traditional gesture recognition methods to handle 3D gestures with multiple attributes.
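The three gesture attributes named above (trajectory shape, motion direction, and pen pose) can be sketched as a feature-extraction step over fused sensor streams. The snippet below is a minimal illustration, not the researchers' actual method: it assumes hypothetical inputs of ultrasonic 3D position fixes and IMU acceleration samples, and that the pen is held steadily enough that the mean acceleration approximates gravity for the tilt estimate.

```python
import numpy as np

def extract_gesture_features(accel, positions):
    """Build a simple multi-feature descriptor for a 3D pen gesture.

    accel:     (T, 3) IMU acceleration samples (m/s^2) -- hypothetical input
    positions: (T, 3) ultrasonic 3D position fixes (m) -- hypothetical input
    Returns (trajectory shape, motion directions, pen tilt in radians).
    """
    # Trajectory shape: positions relative to the start point,
    # normalized by the overall extent so scale is factored out.
    traj = positions - positions[0]
    scale = np.linalg.norm(traj, axis=1).max()
    traj_shape = traj / (scale if scale > 0 else 1.0)

    # Motion direction: unit vectors between consecutive position fixes.
    deltas = np.diff(positions, axis=0)
    norms = np.linalg.norm(deltas, axis=1, keepdims=True)
    directions = deltas / np.where(norms == 0, 1.0, norms)

    # Pen pose (tilt): angle between the mean acceleration vector and
    # the vertical axis, assuming mean acceleration ~ gravity.
    g = accel.mean(axis=0)
    tilt = np.arccos(np.clip(g[2] / np.linalg.norm(g), -1.0, 1.0))

    return traj_shape, directions, tilt

# Example: a pen moved straight along the x-axis while held upright.
positions = np.array([[0, 0, 0], [1, 0, 0], [2, 0, 0], [3, 0, 0]], float)
accel = np.tile([0.0, 0.0, 9.8], (4, 1))
shape, dirs, tilt = extract_gesture_features(accel, positions)
```

A real system would feed such per-attribute features into a classifier; this sketch only shows why combining positional channels (ultrasonic) with inertial channels (IMU) yields a richer descriptor than either source alone.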