Saturday, May 10, 2008

Bernardin - Grasping HMMs

Bernardin, K., K. Ogawara, et al. (2005). "A sensor fusion approach for recognizing continuous human grasping sequences using hidden Markov models." IEEE Transactions on Robotics 21(1): 47-57.

Summary



I have a robot that I want to teach to grab things, and I can teach it by example. There are 14 different types of grasps that I use every day. I'll wear a glove with pressure sensors in it underneath a CyberGlove, grab something, and then let it go. All of this sensor data gets fed into HMMs built with the HTK speech recognition toolkit, and the HMMs tell me which grasp I am making with up to about 90% accuracy.
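To make the recognition idea concrete, here is a minimal sketch of per-grasp HMM classification. The paper trains its models with HTK; this sketch swaps in the hmmlearn library instead, and the feature layout (22 CyberGlove joint angles plus a handful of tactile pressure values) and the training/classification helpers are my own assumptions, not the authors' code.

import numpy as np
from hmmlearn.hmm import GaussianHMM

N_FEATURES = 22 + 6  # assumed: 22 glove joint angles + 6 tactile pressure readings

def train_grasp_models(sequences_by_grasp, n_states=5):
    """Train one HMM per grasp type.

    sequences_by_grasp: dict mapping grasp name -> list of (T_i, N_FEATURES)
    arrays, one array per recorded grasp-and-release demonstration.
    """
    models = {}
    for grasp, seqs in sequences_by_grasp.items():
        X = np.vstack(seqs)               # concatenate all demos for this grasp
        lengths = [len(s) for s in seqs]  # tell hmmlearn where each demo ends
        m = GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
        m.fit(X, lengths)
        models[grasp] = m
    return models

def classify_grasp(models, seq):
    """Label a new sensor sequence with the grasp whose HMM assigns it the
    highest log-likelihood."""
    return max(models, key=lambda g: models[g].score(seq))

With one model per grasp type, classification is just a maximum-likelihood comparison over the 14 models, which is the same basic recipe speech recognizers use for isolated words.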

Discussion



Pretty neat. If you know what grasp someone is making, you can do things like activity recognition. That's especially helpful when you start using smart rooms and offices, etc. Maybe even Information Oriented Programming (IOP)!

I think the pressure sensors really helped augment the CyberGlove, especially since there were so many grasp categories.
