Author: Yu-Peng Chen

In my previous post, I mentioned that I had been able to observe the process of running user studies with younger children and had started assisting in person with my teammates, Alex Shaw and Ziyang Chen. After observing and assisting with these user studies more than ten times, I ran the study twice myself as the secondary experimenter, who reads some of the instructions to the children and helps guide them through the experiment. Actually carrying out the user study was quite different from simply observing and taking notes. In this post, I will go into detail about the lessons I have learned from running the user studies.

The following are my take-away messages, derived from the notes I took during my observations and from my experience running user studies with young children.

Prior to the user study, we follow the INIT Lab study protocol to ensure good experiment reliability and data quality. The guide includes the following items:

  1. Confirm all documents, including the experiment script, are organized in the experiment binder and that there are enough copies.
  2. Confirm the study equipment is fully charged and functional.
  3. Confirm backup equipment, batteries, and/or chargers are ready for use if needed.
  4. Pre-determine the study order using participants’ random identifiers.
  5. Confirm the study area is properly set up for running the session.

In addition to the above checkpoints, I find it very helpful to memorize the main steps of the study and to prepare a personal checklist of to-do items. For the personal checklist, my suggestion is to break down the general checkpoints above in whatever way suits the experimenter; my own checklist for the study is provided as an example.

Moreover, when preparing for the user study, I usually go through the tasks again to make sure that I am familiar with the process. Saying the instructions out loud to myself, instead of just reading them in my mind, helps me notice which parts of the on-screen instructions are not easy for me to deliver smoothly. These preparations are tedious but necessary for me to feel confident when conducting the study.

During the user study, extra care is needed because children sometimes do not express their feelings in a way that is easy to interpret, and unnoticed fatigue or frustration can lead to inaccurate experiment results. For example, there were times when children said that they did not need a break but showed signs of fatigue or frustration in the following task, such as placing their hands on their cheeks with a bored expression or moving around in their seats. When fatigue or frustration comes into play, children may rush through the task without paying attention, such as drawing the gestures carelessly or sorting the cards without careful thought in the card sort task from the NIH Toolbox®, which would affect the quality of our study data. Therefore, in addition to presenting the instructions clearly and smoothly, we also have to pay attention to the children’s moods by observing their facial expressions and changes in their behavior, and by asking how they are feeling.

Also, based on my observations, children’s behavior is relatively unpredictable compared with adults’. In the studies I have been part of, I learned that some children can be very talkative while others are silent most of the time. Some were excited about the small prizes we showed them, while others showed little interest in the prizes but liked to chat about their day before starting; still others just needed time to adjust to the new environment. I also noticed that children tend to forget the instructions more often than adults do, so the experimenter’s patience plays an important role in repeating the instructions while keeping a friendly atmosphere. This unpredictability in user studies with young children requires the experimenter to be patient, flexible, and responsive to keep the children motivated and engaged in the tasks.

I am a first-year Ph.D. student studying Computer Science at the University of Florida. The challenging but rewarding experience of running user studies with children has motivated me to interact with children more. Recently I started volunteering on weekends to teach children ages 5 to 11 basic Chinese. Since children have always been a special population of users for new interaction technology, my goal is to improve my skills in running experiments with children by gaining more hands-on experience interacting with this special population of users.


The cognitive development project, which is related to the Understanding Gestures project, is one of the projects in which I have been taking part since joining the INIT Lab. As introduced in my previous post, the goal of the project is to investigate the relationship between children’s cognitive development and their interactions with touchscreens, and to use this understanding to improve recognition rates for children’s gestures; further details can be found in that post.

As a first step to familiarize myself with the project, I implemented the $P Point-Cloud Recognizer [1], the recognizer we use in the project to compute recognition rates for children’s gestures. Based on the results, we plan to examine the relationship between children’s scores on the cognitive and motor assessments and the accuracy of their interactions in touchscreen tasks. Implementing the $P recognizer [1] introduced me to the field of touchscreen gesture recognition: I learned how the recognizer represents gestures as unordered point clouds instead of ordered points (i.e., strokes), and how it matches those point clouds.

I also learned two different ways of evaluating a gesture recognizer: the user-dependent scenario and the user-independent scenario. In the user-dependent scenario, recognition rates are computed individually for each user, meaning all training samples are taken from a single writer’s gestures and the test sample is taken from that same writer. We run user-dependent tests to examine how a recognizer performs when trained and tested on a specific user (best-case accuracy). In the user-independent scenario, which models the case where training data for new users are unavailable, gestures from a number of writers are used for training while a single additional writer is held out for testing. We run user-independent tests to examine how well a recognizer generalizes to gestures from users whose gestures are not included in the training set (a more realistic off-the-shelf accuracy).
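To make the point-cloud idea concrete, here is a minimal sketch in Python of the $P pipeline described in the paper [1]: resample each gesture to a fixed number of points, scale and translate it to a common frame, and compare two clouds with a greedy point-matching distance. This is a simplified single-stroke illustration with function names of my own choosing, not the lab's actual implementation.

```python
import math

N = 32  # number of points per resampled cloud (the paper uses n = 32)

def resample(points, n=N):
    """Resample a gesture path to n roughly evenly spaced points."""
    pts = list(points)
    path_len = sum(math.dist(pts[i - 1], pts[i]) for i in range(1, len(pts)))
    interval = path_len / (n - 1)
    new_pts, acc, i = [pts[0]], 0.0, 1
    while i < len(pts):
        d = math.dist(pts[i - 1], pts[i])
        if d > 0 and acc + d >= interval:
            # Interpolate a new point at distance `interval` along the path.
            t = (interval - acc) / d
            q = (pts[i - 1][0] + t * (pts[i][0] - pts[i - 1][0]),
                 pts[i - 1][1] + t * (pts[i][1] - pts[i - 1][1]))
            new_pts.append(q)
            pts.insert(i, q)  # continue measuring from the new point
            acc = 0.0
        else:
            acc += d
        i += 1
    while len(new_pts) < n:          # pad if floating point left us short
        new_pts.append(pts[-1])
    return new_pts[:n]

def normalize(points, n=N):
    """Resample, scale to a unit box, and translate the centroid to the origin."""
    pts = resample(points, n)
    xs, ys = [p[0] for p in pts], [p[1] for p in pts]
    scale = max(max(xs) - min(xs), max(ys) - min(ys)) or 1.0
    pts = [((x - min(xs)) / scale, (y - min(ys)) / scale) for x, y in pts]
    cx, cy = sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n
    return [(x - cx, y - cy) for x, y in pts]

def cloud_distance(a, b, start):
    """Greedily match each point of cloud a to the nearest unmatched point of b."""
    n = len(a)
    matched, total, i = [False] * n, 0.0, start
    for _ in range(n):
        best_d, best_j = float("inf"), -1
        for j in range(n):
            if not matched[j]:
                d = math.dist(a[i], b[j])
                if d < best_d:
                    best_d, best_j = d, j
        matched[best_j] = True
        weight = 1 - ((i - start + n) % n) / n  # early matches count more
        total += weight * best_d
        i = (i + 1) % n
    return total

def greedy_cloud_match(a, b):
    """Try several start points in both directions; keep the smallest distance."""
    n = len(a)
    step = max(1, int(n ** 0.5))
    return min(min(cloud_distance(a, b, s), cloud_distance(b, a, s))
               for s in range(0, n, step))

def recognize(candidate, templates):
    """templates: list of (label, raw_points); returns the closest label."""
    c = normalize(candidate)
    return min(templates, key=lambda t: greedy_cloud_match(c, normalize(t[1])))[0]
```

Because the clouds are unordered, the same code handles multistroke gestures simply by concatenating the strokes' points before resampling, which is the property that distinguishes $P from stroke-order-sensitive recognizers like $1.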

The $P recognizer is a member of the larger $-family of recognizers, which also includes the $1 [2], $N [3], and $Q [4] recognizers. These are low-cost recognizers, easy to understand and implement, designed for rapid prototyping. They achieve 98–99% recognition rates for adults, but as low as 64% for 5-year-old children. Previous work in the project has focused on extending these algorithms to achieve better recognition rates for children’s gestures. In my future work, I hope to investigate deep learning-based approaches for recognizing children’s gestures.

While working on the project, I have been able to observe the process of running user studies with younger children. Compared with the user research I did in my User Experience Design class, where I only ran user studies with adults, I have seen that it is more challenging to run user studies with children because they tend to get distracted, lose motivation, and forget the instructions more often. Showing them the small prizes they will receive after completing the task makes them more engaged, and continuing to encourage them during the process also helps. I recently started assisting in person with my teammates, Alex Shaw and Ziyang Chen, which has definitely enhanced my knowledge of conducting a user study in a challenging context.

I am a first-year Ph.D. student studying Computer Science at the University of Florida. Prior to joining the INIT Lab, I spent two years in Taiwan’s IC design industry working on a deep learning-based human detector. Working on the $P recognizer has introduced me to the field of touchscreen gesture recognition, and being part of the Understanding Gestures project and the HCC community has increased my knowledge of how to properly conduct research studies in Human-Centered Computing. This previous experience also inspires me to integrate deep learning algorithms into gesture recognizers designed especially for children, which will be one of my next steps.

 

REFERENCES

  1. R.-D. Vatavu, L. Anthony, and J. O. Wobbrock, “Gestures as point clouds: A $P recognizer for user interface prototypes,” Proc. ACM Int. Conf. on Multimodal Interaction (ICMI ’12), pp. 273–280, 2012.
  2. J. O. Wobbrock, A. D. Wilson, and Y. Li, “Gestures without libraries, toolkits or training: A $1 recognizer for user interface prototypes,” Proc. ACM Symp. on User Interface Software and Technology (UIST ’07), pp. 159–168, 2007.
  3. L. Anthony and J. O. Wobbrock, “A lightweight multistroke recognizer for user interface prototypes,” Proc. Graphics Interface (GI ’10), pp. 245–252, 2010.
  4. R.-D. Vatavu and J. O. Wobbrock, “$Q: A super-quick, articulation-invariant stroke-gesture recognizer for low-resource devices,” Proc. Int. Conf. on Human-Computer Interaction with Mobile Devices and Services (MobileHCI ’18), 2018.