INIT Lab collaboration results in a new gesture recognizer to appear at MobileHCI’2018!

We are excited to announce that there is a new member of the $-family of gesture recognizers! A paper on a new super-quick recognizer optimized for today’s low-resource devices (e.g., wearable, embedded, and mobile devices), which I (Lisa) co-wrote with my long-time collaborators Radu-Daniel Vatavu and Jacob O. Wobbrock, will appear at the upcoming International Conference on Human-Computer Interaction with Mobile Devices and Services (MobileHCI’2018). The paper extends the current best-performing, most robust member of the $-family, $P, using some clever code optimizations that shortcut much of the computation $P undertakes. The result, which we call $Q, is blazingly fast and able to run in real time on low-power devices. Here is the abstract:

We introduce $Q, a super-quick, articulation-invariant point-cloud stroke-gesture recognizer for mobile, wearable, and embedded devices with low computing resources. $Q ran up to 142× faster than its predecessor $P in our benchmark evaluations on several mobile CPUs, and executed in less than 3% of $P’s computations without any accuracy loss. In our most extreme evaluation demanding over 99% user-independent recognition accuracy, $P required 9.4s to run a single classification, while $Q completed in just 191ms (a 49× speed-up) on a Cortex-A7, one of the most widespread CPUs on the mobile market. $Q was even faster on a low-end 600-MHz processor, on which it executed in only 0.7% of $P’s computations (a 142× speed-up), reducing classification time from two minutes to less than one second. $Q is the next major step for the “$-family” of gesture recognizers: articulation-invariant, extremely fast, accurate, and implementable on top of $P with just 30 extra lines of code.
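
For the curious, the gist of the speed-up is that $Q avoids finishing (or even starting) point-cloud comparisons that cannot possibly win. Below is a minimal, hypothetical Python sketch of a greedy point-cloud distance with that kind of early-abandoning shortcut. It is a simplified illustration only, not the paper’s code: the real $P and $Q recognizers also use point weighting, multiple starting points, and lower bounds computed from a lookup table, and the names cloud_distance and recognize here are ours, not from the paper.

```python
# Hypothetical, simplified sketch (not the authors' published code):
# a greedy point-cloud distance in the spirit of $P, with the kind of
# early-abandoning shortcut that lets hopeless comparisons stop early.
import math

def cloud_distance(points, template, min_so_far=math.inf):
    """Greedily match each input point to its nearest unmatched template point,
    summing Euclidean distances; abandon as soon as the partial sum can no
    longer beat the best distance found so far (min_so_far)."""
    n = len(points)
    matched = [False] * n
    total = 0.0
    for i in range(n):
        best_j, best_d = -1, math.inf
        for j in range(n):
            if not matched[j]:
                d = math.dist(points[i], template[j])
                if d < best_d:
                    best_j, best_d = j, d
        matched[best_j] = True
        total += best_d
        if total >= min_so_far:   # early abandoning: this template cannot win
            return math.inf
    return total

def recognize(points, templates):
    """Return the index of the closest template under the greedy cloud distance."""
    best_idx, best_dist = -1, math.inf
    for idx, template in enumerate(templates):
        d = cloud_distance(points, template, best_dist)
        if d < best_dist:
            best_idx, best_dist = idx, d
    return best_idx
```

Passing the best distance found so far into each comparison is what lets the inner loop bail out as soon as a template is provably worse, which is where the saved computation comes from in this sketch.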

Radu will be presenting this work in the fall in Barcelona. Check out the camera-ready version of our paper here.