Thursday, December 13, 2007

Wais et al. -- Perception of Interface Elements

Wais, Paul, Aaron Wolin, and Christine Alvarado. "Designing a Sketch Recognition Front-End: User Perception of Interface Elements." EUROGRAPHICS Workshop on Sketch-Based Interfaces and Modeling (SBIM), 2007.

Summary



Wais, Wolin, and Alvarado perform a Wizard of Oz study to examine several interface elements of a sketch-based circuit diagram application. They wanted to examine three aspects of sketch recognition systems:

  1. When to perform recognition and how to provide feedback about the results

  2. How the user can indicate which parts of the sketch she wants recognized

  3. How recognition errors impact usability



The authors experimented with nine subjects, asking them to draw circuit diagrams with a pen-based input device (the subjects had experience with both the domain and the devices). Three sets of experiments tested the aspects above.

For the first aspect, the system used either a button press, a check-tap gesture, or a 4-second delay as the trigger for starting stroke recognition. Users preferred the button because it gave them reliable control (the gesture was difficult to recognize, working only about 1 time in 6), and they tended to use the button to recognize large chunks of the sketch all at once.

When recognition occurred, most users preferred the stroke lines to change color based on recognition status (recognized or not) rather than have text labels appear, since the labels cluttered the drawing (a quick code sketch of this coloring idea follows at the end of the summary). Even so, most users still hovered over the "recognized" portions (no actual recognition took place, this being a Wizard of Oz study) to check whether the label was correct.

The system also introduced different types of random errors into the sketch. Errors themselves didn't seem to bother users much, as long as they were predictable and lent themselves to learning and correction; the random errors provided neither and frustrated users. Finally, users wanted separate spaces, or separate ink colors, to indicate what should be recognized in a sketch and what should be left alone.
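The status-coloring feedback is easy to picture as a tiny state map. Here's a minimal Python sketch (my own illustration, not the authors' code; the colors, class, and method names are all assumptions) of strokes tinted by recognition status, with the label relegated to hover text:

    from dataclasses import dataclass
    from enum import Enum

    class RecognitionStatus(Enum):
        UNRECOGNIZED = "unrecognized"
        RECOGNIZED = "recognized"

    # Hypothetical color scheme: plain ink until recognized, green afterward.
    STATUS_COLORS = {
        RecognitionStatus.UNRECOGNIZED: "black",
        RecognitionStatus.RECOGNIZED: "green",
    }

    @dataclass
    class Stroke:
        points: list                                    # (x, y) pen samples
        status: RecognitionStatus = RecognitionStatus.UNRECOGNIZED
        label: str = ""                                 # e.g. "resistor", set on recognition

        def color(self) -> str:
            """Ink color signals recognition status without cluttering the drawing."""
            return STATUS_COLORS[self.status]

        def hover_text(self) -> str:
            """The label appears only on hover, so users can verify it was correct."""
            return self.label if self.status is RecognitionStatus.RECOGNIZED else ""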

Discussion



This was a neat paper. It was nice to see an interface paper report what people actually like and what actually works in a sketch recognition application. I'm not an interfaces person, so this was really the first time I had seen these questions addressed.

Regarding their pause trigger for recognition: it seems like the delay is something that could be learned rather than fixed. Say you take the mean time between the user's last pen stroke and when they press the 'recognize' button, and use that per-user average as the timeout. Of course, I think using a pause is a bad idea in general: users expressed a desire for control and predictability. Give it to them rather than relying on a magical timeout that they can neither see nor control.
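Still, if you wanted the adaptive version, here's a minimal sketch of what I mean in Python (the class and method names are mine, purely illustrative):

    from statistics import mean

    class AdaptiveRecognitionTimeout:
        """Hypothetical sketch: learn a per-user pause threshold.

        Each time the user presses the 'recognize' button, record how long
        they waited after their last pen-up; the running mean of those gaps
        becomes the auto-trigger delay (the paper used a fixed 4-second delay).
        """

        def __init__(self, initial_timeout: float = 4.0):
            self.timeout = initial_timeout      # start at the paper's fixed 4 s
            self.gaps: list[float] = []

        def on_recognize_pressed(self, last_pen_up: float, press_time: float) -> None:
            """Log the observed pause and update the learned timeout."""
            self.gaps.append(press_time - last_pen_up)
            self.timeout = mean(self.gaps)

        def should_auto_trigger(self, last_pen_up: float, now: float) -> bool:
            """Fire recognition once the user has paused longer than the learned mean."""
            return (now - last_pen_up) >= self.timeout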

I would have liked to have seen some interface issues concerning ways of correcting errors. Their method is just for the user to erase the problematic stroke and draw it again. Obviously they're just using a dummy application for their Wizard of Oz approach, but I think it would be nice to have a drop-down for an n-best list, one of the options being "None of the Above, Plan 9." This magical option would return the strokes to their original form and let the user group the things that should be recognized with a lasso or something. This does put a burden on the user, especially when the push is for systems that are fully automatic and uber-accurate. Well, maybe that's not realistic, at least not yet. Even an uber-accurate recognizer will still make mistakes. If the user can help it fix those mistakes, it can possibly learn from them, or at the very least the corrections make it easier for the user to clean up your program's boo boos.
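Roughly, the correction flow I have in mind looks like this (everything here is hypothetical; the paper's system only supports erase-and-redraw):

    NONE_OF_THE_ABOVE = "None of the Above, Plan 9"

    class RecognizedShape:
        """Hypothetical sketch of a recognized stroke group with an n-best menu."""

        def __init__(self, strokes, n_best):
            self.strokes = strokes        # the original ink
            self.n_best = n_best          # e.g. ["AND gate", "NAND gate", "OR gate"]
            self.label = n_best[0]        # recognizer's top guess is shown first
            self.is_raw_ink = False

        def menu_options(self):
            """Drop-down contents: the recognizer's candidates plus an escape hatch."""
            return self.n_best + [NONE_OF_THE_ABOVE]

        def apply_choice(self, choice):
            """Apply the user's pick from the drop-down."""
            if choice == NONE_OF_THE_ABOVE:
                # Revert to raw strokes so the user can lasso-group them
                # and resubmit just that group for recognition.
                self.label = None
                self.is_raw_ink = True
            else:
                # Accept the correction; a real system could also log this
                # as training feedback so the recognizer learns from mistakes.
                self.label = choice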

Also, I've nearly decided that gestures are horrible. We already have an application with imperfect sketch recognition, and we're throwing mega-imperfect gestures on top of it! I wonder if anyone knows of a good gesture toolkit out there, and I'm talking 99.99% accuracy, none of this 1-in-6 (one success for every five failures, about 17%) laaaaaame business. But this isn't the authors' fault; it's the gesture toolkit's fault.
