$Q: A Super-Quick, Articulation-Invariant Stroke-Gesture Recognizer for Low-Resource Devices

Citation:

Radu-Daniel Vatavu, Lisa Anthony, and Jacob O. Wobbrock. 2018. $Q: A Super-Quick, Articulation-Invariant Stroke-Gesture Recognizer for Low-Resource Devices. In Proceedings of the International Conference on Human-Computer Interaction with Mobile Devices and Services (MobileHCI '18), Barcelona, Spain, September 3–6, 2018, Article No. 23.

Abstract:

We introduce $Q, a super-quick, articulation-invariant point-cloud stroke-gesture recognizer for mobile, wearable, and embedded devices with low computing resources. $Q was up to 142x faster than its predecessor $P in our benchmark evaluations on several mobile CPUs, and executed in less than 3% of $P’s computations without any accuracy loss. In our most extreme evaluation demanding over 99% user-independent recognition accuracy, $P required 9.4s to run a single classification, while $Q completed in just 191ms (a 49x speed-up) on a Cortex-A7, one of the most widespread CPUs on the mobile market. $Q was even faster on a low-end 600-MHz processor, on which it executed in only 0.7% of $P’s computations (a 142x speed-up), reducing classification time from two minutes to less than one second. $Q is the next major step for the “$-family” of gesture recognizers: articulation-invariant, extremely fast, accurate, and implementable on top of $P with just 30 extra lines of code.
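The abstract describes $Q as a point-cloud recognizer built on top of $P, whose core idea is matching gestures as unordered point clouds rather than ordered strokes (which is what makes it articulation-invariant). As an illustrative sketch only, not the authors' implementation, the following shows the flavor of $P-style greedy point-cloud matching that $Q accelerates; the specific parameters (32 resampled points, the 0.5 exponent controlling how many start alignments are tried) and all helper names are assumptions chosen for the example.

```python
import math

def resample(points, n=32):
    """Resample a stroke (list of (x, y) tuples) into n roughly equidistant points."""
    path_len = sum(math.dist(points[i - 1], points[i]) for i in range(1, len(points)))
    interval = path_len / (n - 1)
    pts = list(points)
    out = [pts[0]]
    d = 0.0
    i = 1
    while i < len(pts):
        seg = math.dist(pts[i - 1], pts[i])
        if seg > 0 and d + seg >= interval:
            t = (interval - d) / seg
            q = (pts[i - 1][0] + t * (pts[i][0] - pts[i - 1][0]),
                 pts[i - 1][1] + t * (pts[i][1] - pts[i - 1][1]))
            out.append(q)
            pts.insert(i, q)  # the new point becomes the next segment's start
            d = 0.0
        else:
            d += seg
        i += 1
    if len(out) < n:          # guard against floating-point shortfall
        out.append(pts[-1])
    return out[:n]

def normalize(points):
    """Scale to the unit bounding box and translate the centroid to the origin."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    scale = max(max(xs) - min(xs), max(ys) - min(ys)) or 1.0
    cx, cy = sum(xs) / len(xs), sum(ys) / len(ys)
    return [((x - cx) / scale, (y - cy) / scale) for x, y in points]

def cloud_distance(a, b, start):
    """Greedy alignment cost from cloud a to cloud b, beginning at index `start`.

    Each point of a is matched to its nearest not-yet-matched point of b,
    with earlier matches weighted more heavily.
    """
    n = len(a)
    matched = [False] * n
    total = 0.0
    i = start
    for step in range(n):
        best, best_j = float("inf"), -1
        for j in range(n):
            if not matched[j]:
                d = math.dist(a[i], b[j])
                if d < best:
                    best, best_j = d, j
        matched[best_j] = True
        total += (1 - step / n) * best  # weight: earlier matches count more
        i = (i + 1) % n
    return total

def greedy_cloud_match(a, b):
    """Minimum greedy cost over a subset of starting alignments, both directions."""
    n = len(a)
    step = max(1, int(n ** 0.5))  # try about sqrt(n) starting points
    best = float("inf")
    for start in range(0, n, step):
        best = min(best, cloud_distance(a, b, start), cloud_distance(b, a, start))
    return best
```

A candidate stroke would be resampled and normalized, then compared with `greedy_cloud_match` against every stored template, taking the template with the smallest cost. The abstract's point is that $Q keeps this result while pruning most of the work (hence "less than 3% of $P's computations"), e.g. by bounding and abandoning hopeless template comparisons early.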

File attachments:

Vatavu-et-al-MobileHCI2018