D. Ryumin, A. Axyonov, A. Karpov
A Russian sign language corpus was collected, focusing on handshape, location, hand trajectory, and hand orientation as the key features to be captured. Each sign was repeated to build a machine-learning-friendly dataset. The gestures were recorded with a Kinect 2.0 camera, and the resulting corpus was subsequently aligned and annotated.
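A corpus entry combining these features might be structured as follows; this is a hypothetical sketch for illustration only, and the field names (`gloss`, `signer_id`, `repetition`, etc.) are assumed rather than taken from the corpus's actual annotation schema.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class SignSample:
    """Hypothetical record for one annotated sign repetition (illustrative schema)."""
    gloss: str                      # sign label (gloss)
    signer_id: int                  # identifier of the signer
    repetition: int                 # repetition index for this sign
    handshape: str                  # coded handshape category
    location: str                   # coded location on or near the body
    orientation: str                # coded palm/hand orientation
    # 3D hand positions over time, as captured by a depth camera such as Kinect 2.0
    trajectory: List[Tuple[float, float, float]] = field(default_factory=list)

def trajectory_length(sample: SignSample) -> float:
    """Total Euclidean path length of the recorded hand trajectory."""
    total = 0.0
    for (x0, y0, z0), (x1, y1, z1) in zip(sample.trajectory, sample.trajectory[1:]):
        total += ((x1 - x0) ** 2 + (y1 - y0) ** 2 + (z1 - z0) ** 2) ** 0.5
    return total

sample = SignSample(
    gloss="HELLO", signer_id=1, repetition=2,
    handshape="B", location="chest", orientation="palm-out",
    trajectory=[(0.0, 0.0, 0.0), (0.0, 3.0, 4.0)],
)
print(trajectory_length(sample))  # 5.0
```

Storing each repetition as a separate record of this kind keeps the data directly usable for supervised machine learning, since each sample pairs a gloss label with its captured features.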