Tuesday, February 7, 2012

Context-Aware 3D Gesture Interaction Based on Multiple Kinects

Maurizio Caon, Yong Yue, Julien Tscherrig, Elena Mugellini, Omar Abou Khaled

This paper presents research into using two Kinects simultaneously to let a user control his or her environment by pointing at "smart objects". These smart objects are added to the environment and recognized by the system beforehand. Different combinations of gesture (pointing) and posture (standing, sitting) trigger different actions. For example, sitting on the couch and pointing at the media player turns on the TV, while standing and pointing at it turns on the radio instead.
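A minimal sketch of what such a context-to-action mapping could look like; the posture labels, object names, and actions here are illustrative assumptions, not the paper's actual configuration:

```python
# Hypothetical mapping from (posture, pointed-at object) to an action.
# The posture/object/action names are examples, not taken from the paper.
ACTION_MAP = {
    ("sitting", "media_player"): "turn_on_tv",
    ("standing", "media_player"): "turn_on_radio",
    ("standing", "lamp"): "toggle_lamp",
}

def resolve_action(posture, target_object):
    """Return the action for this posture/object combination, if any."""
    return ACTION_MAP.get((posture, target_object))

print(resolve_action("sitting", "media_player"))   # turn_on_tv
print(resolve_action("standing", "media_player"))  # turn_on_radio
```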

Each Kinect sends the skeleton data for each person it is tracking, represented in XML, to a central module. This module fuses the per-Kinect skeletons into a single 3D skeleton model, weighting each skeleton by how many joints it tracked (data with more joints counts more heavily). Their gesture recognition is fairly simple: when the arm joints assume specific values, the system projects a ray from the arm into the space ahead of it and checks whether it intersects any smart objects.
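A rough sketch of how that fusion and pointing test could work, assuming each Kinect reports per-joint 3D positions in a common frame, the fused joint is a weighted average favoring skeletons with more tracked joints, and smart objects are approximated as spheres. The names and the exact weighting scheme are assumptions, not details from the paper:

```python
import numpy as np

def fuse_skeletons(skeletons):
    """Fuse per-Kinect skeletons (dicts of joint name -> xyz position).

    Each skeleton is weighted by the number of joints it tracked, so a
    Kinect with a fuller view contributes more (assumed scheme, not the
    paper's exact formula).
    """
    fused = {}
    joints = set().union(*(s.keys() for s in skeletons))
    for joint in joints:
        positions, weights = [], []
        for skel in skeletons:
            if joint in skel:
                positions.append(np.asarray(skel[joint], dtype=float))
                weights.append(len(skel))          # more joints -> more weight
        fused[joint] = np.average(positions, axis=0, weights=np.asarray(weights, dtype=float))
    return fused

def pointed_object(fused, objects, max_dist=5.0):
    """Project a ray from the elbow through the hand and return the first
    smart object (name, center, radius) it intersects, if any."""
    origin = fused["hand_right"]
    direction = fused["hand_right"] - fused["elbow_right"]
    direction /= np.linalg.norm(direction)
    hits = []
    for name, center, radius in objects:
        to_center = np.asarray(center, dtype=float) - origin
        along = np.dot(to_center, direction)        # distance along the ray
        if 0.0 < along < max_dist:
            perp = np.linalg.norm(to_center - along * direction)
            if perp <= radius:                      # ray passes through the sphere
                hits.append((along, name))
    return min(hits)[1] if hits else None
```

With skeletons from both Kinects fused first, `pointed_object` would then be evaluated only when the arm posture matches the "pointing" condition described above.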

