Summary
Look at a series of Tai-Chi gestures captured on video. Extract several feature sets from the gestures: plain (x,y,z) coordinates, velocities of those coordinates, polar coordinates, and polar velocities. Compute each of these with and without head data (hand data is always included). Plug each feature set into an HMM and see which one does best. Polar velocity without head data does the best, at about 95% accuracy overall; plain (x,y,z) does the worst, at about 34% overall.
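A minimal sketch of the comparison the paper describes: train one HMM per gesture class for a given feature set, classify test sequences by maximum log-likelihood, and repeat per feature set. The hmmlearn library, the synthetic data, and the feature dimensionalities below are my assumptions, not the paper's actual setup.

```python
# Sketch: compare feature sets by training one HMM per gesture class and
# classifying each test sequence by the model with the highest log-likelihood.
# Synthetic data stands in for the extracted Tai-Chi features.
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(0)

def make_sequences(n_seq, seq_len, dim, offset):
    """Generate toy feature sequences; `offset` crudely separates classes."""
    return [rng.normal(loc=offset, scale=1.0, size=(seq_len, dim))
            for _ in range(n_seq)]

def train_class_hmm(train_seqs, n_states=3):
    """Fit a single Gaussian HMM on all training sequences of one class."""
    X = np.concatenate(train_seqs)
    lengths = [len(s) for s in train_seqs]
    model = GaussianHMM(n_components=n_states, covariance_type="diag",
                        n_iter=50, random_state=0)
    model.fit(X, lengths)
    return model

def evaluate(feature_dim, n_classes=3):
    """Train per-class HMMs for one feature set and return test accuracy."""
    models, test = [], []
    for c in range(n_classes):
        seqs = make_sequences(10, 40, feature_dim, offset=2.0 * c)
        models.append(train_class_hmm(seqs[:8]))
        test.extend((s, c) for s in seqs[8:])
    correct = 0
    for seq, label in test:
        scores = [m.score(seq) for m in models]   # per-class log-likelihoods
        correct += int(np.argmax(scores) == label)
    return correct / len(test)

# Hypothetical feature-set dimensionalities (e.g. hands only vs. hands + head).
for name, dim in [("xyz", 6), ("xyz + velocity", 12), ("polar velocity", 6)]:
    print(f"{name:>15}: accuracy = {evaluate(dim):.2f}")
```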
Discussion
Just take all the features, string them together, and run a standard feature extraction/selection algorithm over the combined set. You'll probably end up with a feature set that outperforms every one of the hand-picked sets (see the sketch below).
Win.
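A rough sketch of that suggestion: concatenate every candidate feature and let a standard selection method pick the useful ones, instead of evaluating each set separately. SelectKBest with mutual information is just one choice of selector, and the data here is a toy stand-in; the paper does no feature selection at all.

```python
# Sketch: concatenate all candidate feature blocks and keep the k features
# most informative about the gesture label, rather than comparing fixed sets.
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif

rng = np.random.default_rng(0)

# Toy stand-in: per-frame vectors stacking xyz coordinates, velocities,
# polar coordinates, and polar velocities for hands (and optionally head).
n_frames, n_classes = 600, 3
all_features = rng.normal(size=(n_frames, 24))       # concatenated feature blocks
labels = rng.integers(0, n_classes, size=n_frames)   # gesture label per frame

# Score every feature against the labels and keep the top k.
selector = SelectKBest(score_func=mutual_info_classif, k=8)
selected = selector.fit_transform(all_features, labels)

print("kept feature indices:", selector.get_support(indices=True))
print("reduced shape:", selected.shape)
```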
This paper isn't really interesting: it just shotguns a bunch of feature sets into an HMM and sees which one wins.
Fail.