Using Kinect and a Haptic Interface for Implementation of Real-Time Virtual Fixtures
Fredrik Rydén, Howard Jay Chizeck, Sina Nia Kosari, Hawkeye King and Blake Hannaford
This paper uses the depth sensing of the Kinect camera to create real-time haptic virtual fixtures to aid with robotic surgery. Essentially, these fixtures act as haptic boundaries that give the surgeon an indication of where and where not to cut. This seems like a really good thing to have available to your surgeon, and it's cool that they are getting results using technology originally intended for video games. Being able to generate these virtual fixtures in real time lets the doctors compensate for tissue movements and deformations during a surgical procedure. Normally, such fixtures are built from a preoperative CT scan, but continuously CT scanning during a surgery is impractical, so using the Kinect presents a more viable solution.
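To make that pipeline a bit more concrete, here is a minimal sketch (not taken from the paper) of how a Kinect depth image can be back-projected into a 3D point cloud using the standard pinhole camera model. The intrinsic parameters FX, FY, CX, CY below are placeholder values I chose for illustration, not numbers from the paper.

    import numpy as np

    # Hypothetical Kinect-style camera intrinsics (focal lengths, principal point).
    # These are placeholder values, not taken from the paper.
    FX, FY = 594.0, 591.0
    CX, CY = 320.0, 240.0

    def depth_to_point_cloud(depth_m):
        """Back-project a (480, 640) depth image in meters to an (N, 3) point cloud."""
        h, w = depth_m.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        z = depth_m
        x = (u - CX) * z / FX
        y = (v - CY) * z / FY
        points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
        return points[points[:, 2] > 0]   # drop pixels with no valid depth reading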
The haptic forces used in this paper are generated from a point cloud, which is built from the depth data coming off the Kinect. This data is sent from the computer attached to the Kinect to a second computer attached to the haptic device. This lets the surgeon (or whoever is manning the device) feel the effect of their "touch" on the scene as it changes. An example given in the paper is that moving your hand up forces the haptic device to move along with the hand.
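Below is a rough sketch of the idea of turning point-cloud data into a force. This is just a simplified proximity-based repulsive force, not the proxy-based method the paper actually uses, and the stiffness and contact radius are made-up values for illustration.

    import numpy as np

    STIFFNESS = 300.0      # N/m, made-up spring constant for illustration
    CONTACT_RADIUS = 0.01  # meters; the tool "touches" points closer than this

    def haptic_force(tool_pos, points):
        """Simple repulsive force from the nearest cloud point (NOT the paper's proxy method).

        tool_pos: (3,) position of the haptic interaction point, in meters
        points:   (N, 3) point cloud from the Kinect, in meters
        """
        offsets = points - tool_pos
        dists = np.linalg.norm(offsets, axis=1)
        idx = np.argmin(dists)
        d = dists[idx]
        if d >= CONTACT_RADIUS:
            return np.zeros(3)                    # no contact, no force
        direction = -offsets[idx] / (d + 1e-9)    # push the tool away from the point
        return STIFFNESS * (CONTACT_RADIUS - d) * direction

A real implementation would need to be much more careful: haptic devices typically expect force updates at around 1 kHz, so a brute-force nearest-neighbor search over the whole cloud every cycle would be too slow without better spatial data structures.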
While this doesn't directly relate to our project, I feel that it is a good example of the versatility of the Kinect, and of how it has a myriad of applications beyond video games, such as our project.