Category: Pose Project

In previous posts, we have discussed our ongoing work on understanding the differences between child and adult motions to improve recognition of children’s motions. My paper, “Tailoring Motion Recognition Systems to Children’s Motions”, was accepted to the 2019 International Conference on Multimodal Interaction (ICMI) Doctoral Consortium! The doctoral consortium gives PhD students who are at the dissertation proposal stage an opportunity to share their plans with outside researchers and receive feedback. The paper describes my ongoing work and future research plans for my doctoral dissertation. Here is the abstract:

Motion-based applications are becoming increasingly popular among children and require accurate motion recognition to ensure meaningful interactive experiences. However, motion recognizers are usually trained on adults’ motions. Children and adults differ in terms of their body proportions and stages of development of their neuromuscular systems, so children and adults will likely perform motions differently. Therefore, motion recognizers tailored to adults will likely perform poorly for children. My PhD thesis will focus on identifying features that characterize children’s and adults’ motions. This set of features will provide a model that can be used to understand children’s natural motion qualities and will serve as the first step in tailoring recognizers to children’s motions. This paper describes my past and ongoing work toward this end and outlines the next steps in my PhD work.

We will post the camera-ready version as soon as it is available. The ICMI 2019 conference will be held in Suzhou, China, in October.

I am a fourth-year PhD student at UF. Participating in the ICMI doctoral consortium will allow me to present my research ideas, receive feedback from researchers with varied experience and expertise, and network with peers and mentors who can support my development, both academically and professionally.

Read More

In a previous post from a few years ago, we mentioned that our findings on the Pose project established that there are perceivable differences between child and adult motion. Our next step was to quantify what these differences actually are. As a first step toward quantifying these characteristics, we concentrated on temporal and spatial features commonly used in gait analysis (i.e., the analysis of walking and running) to analyze the walking (walk in place, walk in place as fast as you can) and running (run in place, run in place as fast as you can) actions in our Kinder-Gator dataset. Our paper presenting this analysis, titled “Quantifying Differences between Child and Adult Motion using Gait Features,” was accepted as an invited paper to HCII 2019: the International Conference on Human-Computer Interaction. The paper details our analysis of nine features with respect to age group (child vs. adult) and action (walk, walk fast, run, and run fast), and the implications of our results for the design of whole-body interaction prompts and the improvement of recognizers for whole-body motions. Here is the abstract:


Previous work has shown that motion performed by children is perceivably different from that performed by adults. What exactly is being perceived has not been identified: what are the quantifiable differences between child and adult motion for different actions? In this paper, we used data captured with the Microsoft Kinect from 10 children (ages 5 to 9) and 10 adults performing four dynamic actions (walk in place, walk in place as fast as you can, run in place, run in place as fast as you can). We computed spatial and temporal features of these motions from gait analysis, and found that temporal features such as step time, cycle time, cycle frequency, and cadence are different in the motion of children compared to that of adults. Children moved faster and completed more steps in the same time as adults. We discuss implications of our results for improving whole-body interaction experiences for children.
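To make these temporal features concrete, here is a minimal sketch of how quantities like step time and cadence could be estimated from a Kinect ankle trajectory. This is not the analysis code from the paper: the 30 Hz frame rate, the peak-based step detection, and the minimum step spacing are assumptions made purely for illustration.

```python
# Minimal sketch (not the paper's code): estimating temporal gait features
# from the vertical trajectory of one ankle joint recorded by the Kinect.
import numpy as np
from scipy.signal import find_peaks

FPS = 30.0  # assumed Kinect v1 frame rate

def temporal_gait_features(ankle_y, fps=FPS):
    """ankle_y: 1-D array of an ankle's vertical position over time."""
    # Treat each peak in ankle height as one step of an in-place walk/run;
    # require peaks to be at least 0.2 s apart to suppress jitter (assumed).
    peaks, _ = find_peaks(ankle_y, distance=int(0.2 * fps))
    if len(peaks) < 2:
        raise ValueError("need at least two detected steps")
    step_times = np.diff(peaks) / fps          # seconds between steps
    step_time = step_times.mean()
    cycle_time = 2.0 * step_time               # one gait cycle = two steps
    return {
        "step_time": step_time,                # s
        "cycle_time": cycle_time,              # s
        "cycle_frequency": 1.0 / cycle_time,   # cycles/s
        "cadence": 60.0 / step_time,           # steps/min
    }
```

Under this toy model, the finding that children move faster would show up as shorter step and cycle times and a higher cadence in child recordings than in adult recordings of the same action.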


The camera-ready version (preprint) is available here. The HCII 2019 conference will take place in Orlando, Florida, from July 26 to July 31, where I will be presenting the paper.

Working on this paper deepened my knowledge of gait analysis and improved my understanding of human movement in both children and adults. I am looking forward to presenting the paper at the conference, which will be a valuable opportunity to get feedback from the audience on our conclusions.

Read More

In our previous post, we mentioned that we published the Kinder-Gator dataset, which contains the motions of 10 children and 10 adults performing motions in front of the Kinect. Currently, we are exploring recognition of whole-body motions in the dataset. Since we are focusing on whole-body motions, we want to concentrate on motions in which the movement involves one or more limbs. Hence, we use only a subset of the motions in Kinder-Gator, since the dataset also includes motions that involve just the hand, as well as static body poses. To test the performance of their $1 unistroke gesture recognition algorithm, an algorithm designed to help incorporate stroke gestures into games and UI prototypes, Wobbrock et al. [1] defined a representative set of 16 unistroke gestures (e.g., a triangle, an X) that are useful for these applications. Similarly, we want to define a representative set of motions that encompasses the unique combinations of upper and lower limb movements in our dataset. This representative set will be used to evaluate the performance of the recognition algorithms we are currently exploring for whole-body motions. In this blog post, we discuss the steps we are taking to define the representative subset of motions in Kinder-Gator:

  1. EXCLUSION: We are excluding the motions in Kinder-Gator that involve drawing shapes and symbols in the air, as well as those that involve making shapes and symbols with the body. We exclude the drawing motions because they usually involve movement of only the hand or wrist while the rest of the body remains static, so they are not good representatives of whole-body motion. We also exclude both kinds of motions because they are intended for recognizing the shape or symbol being produced, rather than recognizing the motion in its entirety. Examples of motions from the Kinder-Gator dataset that fall within this category include “Draw the letter A in the air” and “Make the letter X with your body”.
  2. CHARACTERIZATION: As mentioned earlier, we want our representative subset to be unique in terms of upper and lower limb movement. Hence, in this step, we characterize motions in terms of the dimensions of movement of the upper and lower limbs, and we exclude motions that are too similar in their dimensions of movement, to avoid redundancy. To accomplish this, we first exclude motions that are mirrors of other motions, since the motion being performed is the same, just with the opposite limb. For example, the motion ‘wave your other hand’ is a mirror of the motion ‘wave your hand’, so we exclude the mirror. Next, we characterize the movement of joints in the upper limb (hand and shoulder) and lower limb (knee and foot) along the horizontal x, vertical y, and depth dimensions (a simplified sketch of this characterization appears after this list). By doing this, we expect to identify motions that are similar in their upper and lower limb movement for the next step.
  3. SELECTION: Finally, to identify the final representative subset, we group motions that are similar based on the characterization in the previous step, that is, motions that are similar in terms of their upper and lower limb movement. This grouping resulted in 16 groups, each containing a unique combination of upper and lower limb movement. Afterward, we will choose one motion from each group to form the representative subset of motions. We will then use these motions to test the performance of existing recognition algorithms, and later of adaptations or new algorithms as well.
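To make steps 2 and 3 more concrete, here is a simplified sketch of the characterization and grouping idea. The joint list, the movement threshold, and the data layout are all assumptions for illustration; this is not our actual analysis code.

```python
# Simplified sketch (not our analysis code): characterize each motion by
# which upper/lower limb joints move along x (horizontal), y (vertical),
# and z (depth), then group motions that share the same signature.
from collections import defaultdict
import numpy as np

JOINTS = ["hand", "shoulder", "knee", "foot"]  # upper and lower limb joints
AXES = ["x", "y", "z"]

def movement_signature(motion, threshold=0.05):
    """motion: dict mapping joint name -> (frames, 3) array of positions.

    A joint is considered to move along an axis if its positional range on
    that axis exceeds `threshold` (an arbitrary value for this sketch).
    """
    sig = []
    for joint in JOINTS:
        traj = np.asarray(motion[joint])
        for a, axis in enumerate(AXES):
            moves = (traj[:, a].max() - traj[:, a].min()) > threshold
            sig.append((joint, axis, moves))
    return tuple(sig)  # hashable, so it can serve as a group key

def group_by_signature(motions):
    """motions: dict mapping motion name -> per-joint trajectories."""
    groups = defaultdict(list)
    for name, motion in motions.items():
        groups[movement_signature(motion)].append(name)
    return groups  # choose one representative motion per group
```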

Working on the POSE project has been very interesting and has allowed me to gain a better understanding of recognition algorithms. I look forward to gaining more knowledge as I progress further in the project.

REFERENCES

  1. Wobbrock, J. O., Wilson, A. D., & Li, Y. (2007, October). Gestures without libraries, toolkits or training: A $1 recognizer for user interface prototypes. In Proceedings of the 20th Annual ACM Symposium on User Interface Software and Technology. ACM.
  2. Anthony, L., & Wobbrock, J. O. (2010, May). A lightweight multistroke recognizer for user interface prototypes. In Proceedings of Graphics Interface 2010 (pp. 245-252). Canadian Information Processing Society.
Read More

In a previous post, we discussed conducting a study in which we used the Kinect to track the motions of ten children and ten adults performing whole-body gestures, for example, waving a hand and doing jumping jacks. From this study, we created a dataset of these whole-body gestures. Our paper, titled “Kinder-Gator: The UF Kinect Dataset of Child and Adult Motions,” was accepted as a short paper to the Eurographics 2018 conference, a premier conference that showcases innovative research in computer graphics. The paper details the gestures in the dataset, the data collection process, and example applications of the dataset in animation, recognition, and the study of human motion characteristics. Here is the abstract:

Research has suggested that children’s whole-body motions are different from those of adults. However, research on children’s motions, and how these motions differ from those of adults, is limited. One possible reason for this limited research is that there are few motion capture (mocap) datasets for children, with most datasets focusing on adults instead. There are even fewer datasets that have both children’s and adults’ motions to allow for comparison between them. To address these problems, we present Kinder-Gator, a new dataset of ten children and ten adults performing whole-body motions in front of the Kinect v1.0. The data contains RGB and 3D joint positions for 58 motions, such as wave, walk in place, kick, and point, which have been manually labeled according to the category of the participant (child vs. adult), and the motion being performed. We believe this dataset will be useful in supporting research and applications in animation and whole-body motion recognition and interaction.
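As a rough illustration of how such a dataset can be used, the snippet below filters a hypothetical flat-file export of the joint data. The filename and column names here are invented for the example; the actual Kinder-Gator distribution format may differ.

```python
# Hypothetical usage sketch: the file name and columns (participant_id,
# group, motion, frame, joint, x, y, z) are assumptions for illustration.
import pandas as pd

df = pd.read_csv("kinder_gator_joints.csv")

# Pull out every child performing "walk in place" to compare with adults.
child_walks = df[(df["group"] == "child") & (df["motion"] == "walk in place")]
print(child_walks.groupby("participant_id")["frame"].max())  # frames per child
```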

The camera-ready version of the paper (preprint) is available here. I presented the paper at the conference, which took place in Delft, the Netherlands.

This was my first time presenting at any conference, and at first I felt very nervous about the idea of giving a conference talk. However, after practicing repeatedly in front of my advisors and peers, I became more confident and gave a successful talk at the conference. It was also my first time attending the Eurographics conference and visiting Delft, the Netherlands. Overall, I really enjoyed interacting with other researchers and listening to talks about state-of-the-art research in the graphics community. I also loved that the conference included a city tour, in which I got to learn about the history of Delft and the role it played in the history of the Netherlands.

Read More

The INIT Lab Kids Pose project has had a poster paper accepted to the upcoming Eurographics conference, to be held in Delft, The Netherlands, in April 2018. This project is a collaboration with the Jain Lab, directed by Dr. Eakta Jain, also at UF CISE, and focuses on understanding the differences and similarities between child and adult motion. In this poster, we demonstrate the results of a technique to translate adult motion into “child-like” motion for use in animation and graphics applications. Our abstract is below:

Animated child characters are increasingly important for educational and entertainment content geared towards younger users. While motion capture technology creates realistic and believable motion for adult characters, the same type of data is hard to collect for young children. We aim to algorithmically transform adult motion capture data to look child-like. We implemented a warping based style translation algorithm, and show the results when this algorithm is applied to adult to child transformation.
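To give a flavor of the general idea, here is a toy sketch that time-warps and amplitude-scales a single adult joint-angle curve. This is not the algorithm from the poster; the scale factors are arbitrary placeholder values chosen only to illustrate what a warping-based transformation does.

```python
# Toy sketch (not the poster's algorithm): warp an adult joint-angle curve
# by compressing its timing and exaggerating its amplitude.
import numpy as np

def warp_trajectory(adult_traj, time_scale=0.8, amp_scale=1.3):
    """adult_traj: 1-D array of joint angles over time.

    time_scale < 1 speeds the motion up; amp_scale > 1 exaggerates it.
    Both defaults are illustrative guesses, not fitted parameters.
    """
    adult_traj = np.asarray(adult_traj, dtype=float)
    n = len(adult_traj)
    # Uniform time warp: resample the curve onto a compressed time axis.
    warped_len = max(2, int(n * time_scale))
    t_old = np.linspace(0.0, 1.0, n)
    t_new = np.linspace(0.0, 1.0, warped_len)
    resampled = np.interp(t_new, t_old, adult_traj)
    # Amplitude warp: scale deviations from the mean pose.
    mean_pose = resampled.mean()
    return mean_pose + amp_scale * (resampled - mean_pose)
```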

Check out our camera-ready version and the project page with further information here!

Read More

In our previous post, we mentioned that our paper “Is the Motion of a Child Perceivably Different from the Motion of an Adult?” will be published in the ACM Transactions on Applied Perception (TAP) journal. The paper investigated whether naïve viewers can tell the difference between adult and child motion through a two-alternative forced-choice survey. We found that naïve viewers can identify whether a motion belongs to a child or an adult at rates significantly above chance. Furthermore, we found that the type of action being performed (e.g., jumping jacks, walking, running) affects the accuracy of people’s perceptions, possibly due to coordination or other cues. Building on these findings, we want to investigate the quantifiable characteristics of the motions that can explain the perceived differences between child and adult motion.
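As a toy illustration of what “significantly above chance” means in a two-alternative forced-choice design, the snippet below runs a one-sided binomial test against the 50% chance level. The trial counts are made up for the example and are not numbers from the paper.

```python
# Toy example (invented numbers): is 2AFC accuracy above the 50% chance level?
from scipy.stats import binomtest

n_trials = 200   # hypothetical number of child-vs-adult judgments
n_correct = 130  # hypothetical number of correct judgments

result = binomtest(n_correct, n_trials, p=0.5, alternative="greater")
print(f"accuracy = {n_correct / n_trials:.2f}, p = {result.pvalue:.4g}")
```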

Working on the POSE project has been an exciting and informative experience for me in the first year of my PhD program. It has introduced me to the rich field of whole-body interaction, which has become the focus of my research. I plan to apply this knowledge as I dig deeper into whole-body interaction and movement-based games for my thesis work.

Read More

In our last post, we shared the news that our POSE project paper “Is the Motion of a Child Perceivably Different from the Motion of an Adult?” was accepted to the journal ACM Transactions on Applied Perception (TAP). We had actually submitted the paper to the ACM Symposium on Applied Perception (SAP), and ours was one of the four top papers selected and recommended for publication in TAP.

We also got to present the paper at the symposium itself. I have recently returned from SAP, and I very much enjoyed my first SIGGRAPH-related event! There were a lot of papers on virtual and augmented reality and on understanding human perceptual abilities and limitations so that we can design better future interactions and graphics output. Our paper was one of the only ones on whole-body interaction. For a Storify of my tweets during the SAP event, click here.

Click here for a link to our paper, our presentation slides, and the poster we made for display at SIGGRAPH. For more information about our project, check out the project page here!

Read More

In a previous post, we talked about a project in which we were using the Kinect to track the motion of children and adults. We took the motion we captured and conducted an applied perception study, which we are pleased to announce has been accepted for publication in ACM Transactions on Applied Perception. Our paper, “Is the Motion of a Child Perceivably Different from the Motion of an Adult?”, reports that participants can indeed successfully identify motion as belonging to a child or an adult, and discusses some possible cues participants may be using. This paper includes our project team: me (Dr. Lisa Anthony), first author Dr. Eakta Jain, and several of our current and former students: Aishat Aloba, Amanda Castonguay, Isabella Cuba, Alex Shaw, and Julia Woodward. The abstract of the paper is as follows:

Artists and animators have observed that children’s movements are quite different from adults performing the same action. Previous computer graphics research on human motion has primarily focused on adult motion. There are open questions as to how different child motion actually is, and whether the differences will actually impact animation and interaction. We report the first explicit study of the perception of child motion (ages 5 to 9 years old), compared to analogous adult motion. We used markerless motion capture to collect an exploratory corpus of child and adult motion, and conducted a perceptual study with point light displays to discover whether naive viewers could identify a motion as belonging to a child or an adult. We find that people are generally successful at this task. This work has implications for creating more engaging and realistic avatars for games, online social media, and animated videos and movies.

The camera-ready version of the paper is available here. We will present the paper at the upcoming ACM Symposium on Applied Perception, which will be held in Anaheim, CA, in late July, co-located with ACM SIGGRAPH. Stay tuned for our presentation slides to be posted after the conference. See more information at our project website.

Read More

One of the projects our lab has been working on is the POSE project. This project grew out of the FunFitTech project, in which we were more interested in creating a proof-of-concept prototype that would motivate kids to exercise through play. The POSE project focuses on looking for differences in how children and adults perform gestures. We are currently conducting a study with adults and children in which we use the Kinect to track the motion of participants performing gestures such as walking and running. This project is a collaboration with Dr. Eakta Jain’s group (jainlab.cise.ufl.edu).

I am a first-year PhD student who joined the INIT Lab this semester. My research interests include child-centered computing, machine learning, human-centered computing, and app development. The POSE project has strengthened my interest in working with kids and in human-centered computing. I have learned that running studies with children is challenging because one needs to find the right approach to keep the children motivated during the study. It has been a great experience working on this project, and I am looking forward to continuing it.

Read More