In previous posts, we noted that the goal of the POSE project is to understand how children perform motions (i.e., their motion qualities) so that we can improve automatic recognition of children’s motions. To that end, we began by investigating the variations in how users move their body parts while performing a motion. In our previous post, we presented the design and evaluation of the filterJoint method, which facilitated this investigation: it selects the key body parts that users are actively moving when performing a motion, revealing new insights about how users perform motions (see the previous post for more details). Recently, we have been focusing on quantifying the differences between children’s and adults’ motions using articulation features. These features quantitatively describe both how a user performs motions on their own and how their performance compares to that of other users in the same or a different age group (i.e., child vs. adult). Our paper presenting the results of analyzing children’s and adults’ motions with these features, titled “Characterizing Children’s Motion Qualities: Implications for the Design of Motion Applications for Children,” was accepted to the 2021 ACM International Conference on Multimodal Interaction (ICMI)! The paper details what we found about the differences between children’s and adults’ motion performances. Here is the abstract:
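To give a feel for what selecting "actively moving" body parts means in practice, here is a minimal sketch of the general idea (not the published filterJoint algorithm; the function name, the path-length criterion, and the `keep_ratio` threshold are all illustrative assumptions): given a motion recorded as a sequence of frames of 3D joint positions, keep the joints that travel the most.

```python
import numpy as np

def select_active_joints(frames, keep_ratio=0.5):
    """Illustrative joint selection (NOT the published filterJoint method):
    keep joints whose total 3D path length is at least keep_ratio of the
    most-moved joint's path length.

    frames: array of shape (num_frames, num_joints, 3).
    Returns the indices of the selected joints.
    """
    frames = np.asarray(frames, dtype=float)
    # Frame-to-frame displacement of each joint, summed over time.
    path_lengths = np.linalg.norm(np.diff(frames, axis=0), axis=2).sum(axis=0)
    threshold = keep_ratio * path_lengths.max()
    return np.flatnonzero(path_lengths >= threshold)

# Toy motion: 3 joints; one waves back and forth, one drifts slightly,
# one stays still. Only the waving joint should be selected.
t = np.linspace(0, 1, 50)
frames = np.zeros((50, 3, 3))
frames[:, 0, 0] = np.sin(2 * np.pi * t)   # active joint
frames[:, 1, 1] = 0.05 * t                # barely moving
print(select_active_joints(frames))        # -> [0]
```

The real method is described in our earlier post and paper; this toy version only conveys why filtering matters: downstream analysis is restricted to the joints that carry the motion, rather than being diluted by near-stationary ones.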
The goal of this paper is to understand differences between children’s and adults’ motions in order to improve future motion recognition algorithms for children. Motion-based applications (e.g., games) are becoming increasingly popular among children. These applications often rely on accurate recognition of users’ motions to create meaningful interactive experiences. Motion recognition systems are usually trained on adults’ motions; however, prior work has shown that children move differently from adults, so these systems will likely perform poorly on children’s motions, negatively impacting their interactive experiences. Although prior work has established that there are perceivable differences between child and adult motion, these differences have yet to be quantified. If we can quantify these differences, then we can gain new insights about how children perform motions (i.e., their motion qualities). We present 24 articulation features (11 of which we newly developed) that describe motions quantitatively, and we evaluate them on a subset of child and adult motions from the publicly available Kinder-Gator dataset to reveal differences. Motions in this dataset are represented as sequences of postures, each defined by the 3D positions of 20 joints tracked by a Kinect at a specific time instance. Our results showed that children perform motions that are quantifiably faster, more intense, less smooth, and less coordinated than adults’. Based on our results, we propose guidelines for improving motion recognition algorithms and designing motion applications for children.
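As a rough illustration of what an articulation feature looks like when computed over such posture sequences, the sketch below implements two simple example features over (num_frames, num_joints, 3) joint-position arrays: mean joint speed (related to "faster") and mean jerk magnitude (higher jerk means less smooth motion). These are generic signal-processing definitions for illustration only, not the 24 features defined in the paper; the function names and the `fps` parameter are assumptions.

```python
import numpy as np

def mean_speed(frames, fps=30.0):
    """Illustrative 'speed' feature: average magnitude of joint velocity
    across all joints and frames. frames: (num_frames, num_joints, 3)."""
    frames = np.asarray(frames, dtype=float)
    velocities = np.diff(frames, axis=0) * fps   # position units per second
    return np.linalg.norm(velocities, axis=2).mean()

def mean_jerk(frames, fps=30.0):
    """Illustrative (un)smoothness feature: average magnitude of jerk,
    the third derivative of position. Higher values = less smooth."""
    frames = np.asarray(frames, dtype=float)
    jerk = np.diff(frames, n=3, axis=0) * fps**3  # finite-difference jerk
    return np.linalg.norm(jerk, axis=2).mean()

# Toy check: a single joint moving at a constant 1 unit/frame at 30 fps
# has mean speed 30 units/s and zero jerk (perfectly smooth).
frames = np.zeros((10, 1, 3))
frames[:, 0, 0] = np.arange(10)
print(mean_speed(frames))  # -> 30.0
print(mean_jerk(frames))   # -> 0.0
```

Once features like these are computed per motion sample, comparing their distributions between the child and adult groups is what lets differences in motion qualities be quantified rather than merely perceived.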
Interested readers can find the camera-ready version (preprint) available here. The ICMI conference will be a hybrid event because of the coronavirus pandemic; it will be held from October 18th to October 22nd in Montreal, Canada, for those who want to attend in person.
The work presented in this paper concludes my dissertation work and my PhD journey. Throughout my PhD, I have been fortunate to have the support of my advisor, Dr. Lisa Anthony, my supervisory committee members, my lab mates, my peers in the department, and my family. I am excited to start my next journey as a User Experience (UX) Researcher at Facebook. I will be attending the ICMI conference virtually and look forward to discussing my paper at this premier venue, known for showcasing state-of-the-art research in multimodal interaction and for offering opportunities to network with both junior and senior researchers in the field.