Summary
I have a robot that I want to teach to grasp things, and I can teach it by example. There are 14 different types of grasps that I use every day. I'll put pressure sensors in a glove worn under the CyberGlove, grab an object, and then let it go. All of this data is fed into an HMM built with the HTK speech recognition toolkit, which tells me which grasp I am making with up to about 90% accuracy.
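To make the classification step concrete, here is a minimal sketch of the train-one-HMM-per-grasp idea, using Python's hmmlearn as a stand-in for HTK (the paper's actual toolkit). The feature dimension (joint angles plus pressure channels), the number of hidden states, and the synthetic trial data are all assumptions for illustration, not details from the paper.

```python
import numpy as np
from hmmlearn import hmm

N_CLASSES = 14      # grasp types, per the summary
N_FEATURES = 26     # assumed: 22 CyberGlove joint angles + 4 pressure channels
N_STATES = 5        # assumed number of hidden states per model

rng = np.random.default_rng(0)

def synthetic_sequences(label, n_seqs=20, length=40):
    """Stand-in for recorded grasp trials: one (length x features) array per trial."""
    base = rng.normal(size=N_FEATURES) * (label + 1)
    return [base + rng.normal(scale=0.5, size=(length, N_FEATURES))
            for _ in range(n_seqs)]

# Train one Gaussian HMM per grasp class on that class's trials.
models = []
for label in range(N_CLASSES):
    seqs = synthetic_sequences(label)
    X = np.concatenate(seqs)              # hmmlearn wants samples stacked...
    lengths = [len(s) for s in seqs]      # ...plus per-sequence lengths
    m = hmm.GaussianHMM(n_components=N_STATES, covariance_type="diag", n_iter=20)
    m.fit(X, lengths)
    models.append(m)

def classify(seq):
    """Label a sequence with the class whose HMM scores it highest (log-likelihood)."""
    scores = [m.score(seq) for m in models]
    return int(np.argmax(scores))

test = synthetic_sequences(3, n_seqs=1)[0]
print("predicted grasp class:", classify(test))
```

The one-model-per-class, argmax-likelihood setup is the standard way HMM recognizers are used for isolated gestures (and isolated words in speech), which is presumably why HTK fit this problem so naturally.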
Discussion
Pretty neat. If you know what someone is grasping, you can do things like activity recognition, which becomes especially helpful once you start instrumenting smart rooms, offices, etc. Maybe even Information Oriented Programming (IOP)!
I think the pressure sensors really helped augment the CyberGlove, especially with so many grasp categories to distinguish.