The cognitive development project, which is related to the Understanding Gestures project, is one of the projects I have been taking part in since joining the INIT Lab. As introduced in a previous post, the goal of the project is to investigate the relationship between children’s cognitive development and their interactions with touchscreens, and to use this understanding to improve recognition rates for children’s gestures. Further details can be found in that post.
As a first step to familiarize myself with the project, I implemented the $P Point-Cloud Recognizer [1], which is the recognizer we use in the project to compute recognition rates for children’s gestures. Based on the results, we plan to examine the relationship between children’s scores on the cognitive and motor assessments and the accuracy of their interactions in the touchscreen tasks. Implementing the $P recognizer [1] introduced me to the field of touchscreen gesture recognition. I learned how the recognizer represents gestures as unordered point clouds instead of ordered sequences of points (i.e., strokes), and how it matches those point clouds against stored templates.

I also learned two different ways of evaluating a gesture recognizer: the user-dependent scenario and the user-independent scenario. In the user-dependent scenario, recognition rates are computed individually for each user: all training samples come from a single writer’s gestures, and the test sample comes from that same writer. We conduct the user-dependent test to examine the performance of a recognizer when it is trained and tested on a specific user (best-case accuracy). In the user-independent scenario, which models the case where training data from new users are unavailable, gestures from a number of writers are used for training while a single additional writer is held out for testing. We conduct the user-independent test to examine how well a recognizer generalizes to users whose gestures are not included in the training set (a more realistic, off-the-shelf accuracy).
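To make this concrete, below is a minimal Python sketch of the core ideas from the $P paper [1]. It is my own simplification rather than the authors’ reference pseudocode: each gesture is resampled to a fixed number of points, scaled and translated into a common coordinate frame, and then compared to each template with a greedy point-cloud matching distance. Function names such as resample, normalize, and greedy_cloud_match are mine.

```python
import math

N = 32  # number of resampled points per gesture (the $P paper uses n = 32)

def resample(points, n=N):
    """Resample a gesture (a list of (x, y) tuples) into n evenly spaced points."""
    pts = list(points)
    interval = sum(math.dist(pts[i - 1], pts[i]) for i in range(1, len(pts))) / (n - 1)
    new_points, accumulated = [pts[0]], 0.0
    i = 1
    while i < len(pts):
        d = math.dist(pts[i - 1], pts[i])
        if accumulated + d >= interval:
            t = (interval - accumulated) / d
            q = (pts[i - 1][0] + t * (pts[i][0] - pts[i - 1][0]),
                 pts[i - 1][1] + t * (pts[i][1] - pts[i - 1][1]))
            new_points.append(q)
            pts.insert(i, q)      # q becomes the start of the next segment
            accumulated = 0.0
        else:
            accumulated += d
        i += 1
    while len(new_points) < n:    # guard against floating-point shortfall
        new_points.append(pts[-1])
    return new_points[:n]

def normalize(points):
    """Scale to a unit bounding box and translate the centroid to the origin."""
    xs, ys = zip(*points)
    scale = max(max(xs) - min(xs), max(ys) - min(ys)) or 1.0
    cx, cy = sum(xs) / len(xs), sum(ys) / len(ys)
    return [((x - cx) / scale, (y - cy) / scale) for x, y in points]

def cloud_distance(a, b, start, n):
    """Match each point in cloud a to its nearest unmatched point in cloud b,
    starting at index `start` and weighting earlier matches more heavily."""
    matched = [False] * n
    total, i = 0.0, start
    while True:
        best_j = min((j for j in range(n) if not matched[j]),
                     key=lambda j: math.dist(a[i], b[j]))
        matched[best_j] = True
        weight = 1 - ((i - start + n) % n) / n
        total += weight * math.dist(a[i], b[best_j])
        i = (i + 1) % n
        if i == start:
            return total

def greedy_cloud_match(a, b, n=N):
    """Try several starting indices in both directions; keep the smallest distance."""
    step = int(math.floor(math.sqrt(n)))
    return min(min(cloud_distance(a, b, s, n), cloud_distance(b, a, s, n))
               for s in range(0, n, step))

def recognize(candidate, templates):
    """Return the label of the closest template; `templates` is a list of
    (label, raw_points) pairs."""
    cand = normalize(resample(candidate))
    return min(((greedy_cloud_match(cand, normalize(resample(pts))), label)
                for label, pts in templates))[1]
```

The two evaluation scenarios can likewise be sketched as simple experiment loops. The following is only an illustration of the protocols under my own assumptions about the data layout (a dictionary mapping each writer to a list of (label, points) samples); it reuses the recognize function from the sketch above.

```python
import random
from collections import defaultdict

def split_by_class(samples, k):
    """Randomly pick k training samples per gesture class; the rest become test data."""
    by_class = defaultdict(list)
    for label, points in samples:
        by_class[label].append(points)
    train, test = [], []
    for label, gestures in by_class.items():
        random.shuffle(gestures)
        train += [(label, g) for g in gestures[:k]]
        test += [(label, g) for g in gestures[k:]]
    return train, test

def user_dependent_accuracy(gestures_by_user, k, trials=100):
    """Train and test on the same writer (best-case accuracy), averaged over writers."""
    per_user = []
    for samples in gestures_by_user.values():
        correct = 0
        for _ in range(trials):
            templates, held_out = split_by_class(samples, k)
            label, points = random.choice(held_out)
            correct += (recognize(points, templates) == label)
        per_user.append(correct / trials)
    return sum(per_user) / len(per_user)

def user_independent_accuracy(gestures_by_user, trials=100):
    """Train on all other writers' gestures and test on a held-out writer."""
    users = list(gestures_by_user)
    correct = 0
    for _ in range(trials):
        test_user = random.choice(users)
        templates = [s for u in users if u != test_user for s in gestures_by_user[u]]
        label, points = random.choice(gestures_by_user[test_user])
        correct += (recognize(points, templates) == label)
    return correct / trials
```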
The $P recognizer is a member of the larger $-family of recognizers, which includes the $1 [2], $N [3], and $Q [4] recognizers. These recognizers are low-cost, easy to understand and implement, and designed for rapid prototyping. They achieve recognition rates of 98–99% on adults’ gestures, but as low as 64% on gestures from 5-year-old children. Previous work in the project has focused on extending these algorithms to achieve better recognition rates for children’s gestures. In my future work, I hope to investigate deep learning-based approaches for recognizing children’s gestures.
While working on the project, I have been able to observe the process of running user studies with younger children. Compared with the user research I did in my User Experience Design class, where I only ran studies with adults, I have seen that it is more challenging to run user studies with children because they tend to get distracted, lose motivation, and forget instructions more often. Showing them the small prizes they will receive after completing the task keeps them engaged, and continually encouraging them during the session also helps. I recently started assisting my teammates, Alex Shaw and Ziyang Chen, in person, which has definitely enhanced my knowledge of conducting user studies in a challenging context.
I am a first-year Ph.D. student studying Computer Science at the University of Florida. Prior to joining the INIT Lab, I spent two years in Taiwan’s IC design industry working on a deep learning-based human detector. Working on the $P recognizer has introduced me to the field of touchscreen gesture recognition, and being part of the Understanding Gestures project and the HCC community has increased my knowledge of how to properly conduct research studies in Human-Centered Computing. This previous experience also inspires me to integrate deep learning algorithms into gesture recognizers designed especially for children, which will be one of my next steps.
REFERENCES
- R.-D. Vatavu, L. Anthony, and J. O. Wobbrock, “Gestures as point clouds: A $P recognizer for user interface prototypes,” Proc. ACM Int. Conf. on Multimodal Interaction (ICMI ’12), pp. 273–280, 2012.
- J. O. Wobbrock, A. D. Wilson, and Y. Li, “Gestures without libraries, toolkits or training: A $1 recognizer for user interface prototypes,” Proc. ACM Symp. on User Interface Software and Technology (UIST ’07), pp. 159–168, 2007.
- L. Anthony and J. O. Wobbrock, “A lightweight multistroke recognizer for user interface prototypes,” Proc. Graphics Interface (GI ’10), pp. 245–252, 2010.
- R.-D. Vatavu, L. Anthony, and J. O. Wobbrock, “$Q: A super-quick, articulation-invariant stroke-gesture recognizer for low-resource devices,” Proc. ACM Int. Conf. on Human-Computer Interaction with Mobile Devices and Services (MobileHCI ’18), 2018.