The rise of VR has made motion capture, a technology that lets people interact and communicate in virtual worlds through body language, increasingly popular. Although today's motion capture technology is quite mature, accurately capturing human movement in large-scale scenes remains challenging. Such capture is crucial for the reconstruction, simulation, and generation of sports mega-events, stage performances, crowd interactions, and similar scenarios, and it has therefore received a great deal of attention.
So far, motion capture has mostly been optical, which imposes strict environmental requirements. In contrast, motion capture using inertial measurements recorded by IMUs is free from occlusion and environmental constraints. However, this purely inertial approach still suffers from time-accumulated error in global localization, especially when capturing large-scale motion over long periods of time.
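To see why purely inertial localization accumulates error over time, consider a minimal sketch (illustrative only, not from the paper): a tiny constant accelerometer bias, double-integrated into position, produces an error that grows quadratically with time, roughly 0.5 * bias * t^2.

```python
def drift_after(bias_mps2: float, seconds: float, dt: float = 0.01) -> float:
    """Double-integrate a constant acceleration bias into position error.

    Mimics dead reckoning: the bias corrupts velocity, and the corrupted
    velocity corrupts position, so the error compounds quadratically.
    """
    steps = int(seconds / dt)
    v = 0.0  # accumulated velocity error (m/s)
    p = 0.0  # accumulated position error (m)
    for _ in range(steps):
        v += bias_mps2 * dt
        p += v * dt
    return p

# A small 0.02 m/s^2 bias already yields tens of meters of drift in a minute:
print(round(drift_after(0.02, 60.0), 1))  # ~36.0 m
```

Over a multi-minute large-scale capture session, even well-calibrated IMUs therefore need an external, drift-free reference for the global trajectory.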
To address these challenges, the researchers propose a LiDAR-assisted inertial pose-estimation approach that accurately captures challenging human motions in large-scale scenarios using only a single LiDAR and four IMUs. The method eliminates translational drift in a pose-guided manner, yielding accurate global trajectories and natural, continuous motions. A large hybrid LiDAR-IMU motion capture dataset is also presented for future human motion analysis studies.
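The fusion idea can be sketched in miniature. The IMU-integrated translation is accurate over short windows but drifts, while a LiDAR-derived root translation is drift-free but noisier per frame. The paper's actual method is a learned, pose-guided correction; as a stand-in, the hypothetical complementary filter below shows the general principle of keeping high-frequency inertial motion while anchoring it to the LiDAR trajectory.

```python
def fuse(imu_xyz, lidar_xyz, alpha=0.98):
    """Blend per-frame IMU translation deltas with absolute LiDAR positions.

    alpha close to 1 trusts the smooth inertial deltas frame-to-frame,
    while the (1 - alpha) LiDAR term continually pulls the estimate back
    toward the drift-free global trajectory.
    """
    fused = [list(lidar_xyz[0])]  # start anchored to the LiDAR position
    for t in range(1, len(imu_xyz)):
        # Relative motion from the IMU stream (drifts, but smooth).
        imu_delta = [a - b for a, b in zip(imu_xyz[t], imu_xyz[t - 1])]
        pred = [f + d for f, d in zip(fused[-1], imu_delta)]
        # Pull the prediction toward the absolute LiDAR measurement.
        fused.append([alpha * p + (1 - alpha) * l
                      for p, l in zip(pred, lidar_xyz[t])])
    return fused
```

With a drifting IMU trajectory and a drift-free LiDAR one, the fused estimate stays bounded near the true path instead of diverging quadratically.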