Category: Understanding Gestures

In my previous post, I mentioned that I had been able to observe the process of running user studies with younger children and had started assisting my teammates, Alex Shaw and Ziyang Chen, in person. After observing and assisting with these studies more than ten times, I ran the study twice as the secondary experimenter, reading some of the instructions to the children and helping guide them through the experiment process. Actually carrying out the user study was quite different from simply observing and taking notes. In this post, I will go into detail about the lessons I have learned from running the user studies.

The following are my take-away messages derived from the notes I took during my observations and the experience of running user studies with young children.

Prior to the user study, we follow the INIT Lab study protocol to ensure good experiment reliability and data quality. The guide includes the following items:

  1. Confirm all documents are organized in the experiment binder, including the experiment script, and there are enough copies.
  2. Confirm study equipment is fully charged and functional.
  3. Confirm backup equipment, batteries, and/or chargers are ready for use if needed.
  4. Pre-determine study order with participants’ random identifiers.
  5. Confirm study area is properly set up for running the session.

In addition to the above checkpoints, I find it very helpful to memorize the main steps of the study and to prepare a personal checklist of to-do items. My suggestion is to break down the general checkpoints above in whatever way suits the experimenter. My own checklist for this study is provided as an example:

Moreover, when preparing for the user study, I usually go through the tasks again to make sure I am familiar with the process. Saying the instructions out loud to myself, instead of just reading them in my head, helps me notice which parts of the on-screen instructions are hard for me to present smoothly. These preparations are tedious but necessary for me to feel confident when conducting the study.

During the user study, extra care is needed because children sometimes do not express their feelings in an easily interpretable way, and unnoticed fatigue or frustration can lead to inaccurate experiment results. For example, there were times when children said they did not need a break but showed signs of fatigue or frustration in the following task, such as propping their cheeks on their hands with a bored expression or fidgeting in their seats. When fatigue or frustration comes into play, children might rush through a task without paying attention, for example drawing the gestures carelessly or sorting without careful thought in the card sort task from the NIH Toolbox®, which would affect the quality of our study data. Therefore, in addition to presenting the instructions clearly and smoothly, we also have to pay attention to the children's mood by observing their facial expressions and changes in behavior and by asking how they are feeling.

Also, based on my observations, children's behavior is less predictable than adults'. In the studies I have been part of, some children were very talkative while others were silent most of the time. Some were excited about the small prizes we showed them, while others showed little interest in the prizes but liked to chat about their day before starting. Still others simply needed time to adjust to the new environment. I also noticed that children tend to forget the instructions more often than adults do, so the experimenter's patience plays an important role in repeating the instructions while keeping a friendly atmosphere. This unpredictability requires the experimenter to be patient, flexible, and responsive to keep the children motivated and engaged in the tasks.

I am a first-year Ph.D. student in Computer Science at the University of Florida. The challenging but rewarding experience of running user studies with children has motivated me to interact with children more. I recently started volunteering on weekends to teach basic Chinese to children ages 5 to 11. Since children have always been a special population of users for new interaction technology, my goal is to improve my skills in running experiments with children by gaining more hands-on experience with this special population of users.

Read More

The INIT lab is proud to share that PhD student Alex Shaw defended his dissertation work “Automatic Recognition of Children’s Touchscreen Stroke Gestures” earlier today! Due to the ongoing COVID-19 restrictions, we held the defense virtually. Alex’s committee members were myself (chair), Dr. Jaime Ruiz, Dr. Eakta Jain, Dr. Damon Woodard, and Dr. Pavlo Antonenko (UF College of Education). Pending final comments from one committee member who will have to catch up via video due to scheduling challenges, we’d like to say congratulations, Dr. Shaw 😀

Here are two screenshots I took of the defense to document the occasion:

 

Read More

Since my last post about our study, the Cognitive Development and Touchscreen Interaction project has gone through several rounds of recruiting and study sessions. We have recruited our participants from many interested local families with children aged 4 to 7 years old. In the meantime, I have been working on some high-level analysis of the data we have collected.

In my last post, I mentioned that we often receive questions regarding the relationship between children's cognitive development and their touchscreen interaction. Therefore, as a way to discover whether such a relationship exists, we decided to administer tasks from the NIH Toolbox® to provide assessment metrics beyond age. We employed two tasks: (1) the Dimensional Change Card Sort to assess our participants' executive function, and (2) the 9-Hole Pegboard Test to assess their fine motor skills. Completion accuracy and time are taken into consideration, and four different types of scores are returned based on the participant's demographic information: the raw score, the uncorrected score, the age-corrected score, and the fully corrected score. The raw score looks only at the participant's performance in terms of completion time or accuracy, while the uncorrected score converts the raw score into a comparable, normally distributed standard score. The age-corrected and fully corrected scores evaluate the participant's performance while accounting for age and for all of the basic demographic information (e.g., education level, ethnicity, race, etc.), respectively. We will link participants' cognitive development scores back to their touchscreen interactions once we can come to a conclusion with more confidence.

To study the participants' touchscreen interactions, we used apps like the ones in our lab's published papers [1,2] to measure the participants' gesture and target interactions. We also wanted to calculate some of the simple articulation features mentioned in our lab's previous publication [3]. Simple features of a gesture include its number of strokes, total path length, line similarity, total angle, sharpness, and other geometric and temporal measurements. Employing these features and looking at how each gesture is structured may help us understand how a participant's cognitive development level relates to their gesture behaviors.
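For illustration, a minimal sketch (not the lab's actual analysis code) of how a few of these simple features could be computed might look like the following, assuming each gesture is stored as a list of strokes and each stroke as a list of (x, y) points:

```python
import math

def path_length(stroke):
    """Total Euclidean length of one stroke (a list of (x, y) points)."""
    return sum(math.dist(stroke[i - 1], stroke[i]) for i in range(1, len(stroke)))

def total_path_length(gesture):
    """Total amount of 'ink' across all strokes of a gesture."""
    return sum(path_length(stroke) for stroke in gesture)

def line_similarity(stroke):
    """How close a stroke is to a straight line: endpoint distance divided by path length."""
    length = path_length(stroke)
    return math.dist(stroke[0], stroke[-1]) / length if length > 0 else 1.0

def total_turning_angle(stroke):
    """Sum of absolute turning angles (in radians) along a stroke."""
    total = 0.0
    for i in range(1, len(stroke) - 1):
        a1 = math.atan2(stroke[i][1] - stroke[i - 1][1], stroke[i][0] - stroke[i - 1][0])
        a2 = math.atan2(stroke[i + 1][1] - stroke[i][1], stroke[i + 1][0] - stroke[i][0])
        total += abs((a2 - a1 + math.pi) % (2 * math.pi) - math.pi)
    return total

# Example: a two-stroke "T" gesture
gesture = [[(0, 0), (10, 0)], [(5, 0), (5, -10)]]
print(len(gesture), total_path_length(gesture), line_similarity(gesture[0]))
```

The number of strokes is simply the length of the outer list, and the turning-angle sum gives a rough proxy for how sharp or wobbly the gesture is.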

It has been a little over a year since I joined the lab and started working on this project. At first, I went through background reading and ramped up during the early stage of my research. Now, I am more comfortable and confident in performing user studies than before and can better guide myself through challenges. At this point, we are continuing to receive interest from University of Florida faculty whose children may participate in our study. Once we have reached our target number of participants, we will perform a more detailed data analysis. Our team is excited to see the outcome of this study.

References

[1] Julia Woodward, Alex Shaw, Annie Luc, Brittany Craig, Juthika Das, Phillip Hall, Akshay Hollay, Germaine Irwin, Danielle Sikich, Quincy Brown, and Lisa Anthony. 2016. Characterizing How Interface Complexity Affects Children’s Touchscreen Interactions. Proceedings of the ACM International Conference on Human Factors in Computing Systems (CHI ’16), ACM Press, 1921–1933.

[2] Lisa Anthony, Quincy Brown, Jaye Nias, Berthel Tate, and Shreya Mohan. 2012. Interaction and Recognition Challenges in Interpreting Children’s Touch and Gesture Input on Mobile Devices. Proceedings of the ACM International Conference on Interactive Tabletops and Surfaces, ACM Press, 225–234. http://doi.org/10.1145/2396636.2396671.

[3] Alex Shaw and Lisa Anthony. 2016. Analyzing the articulation features of children’s touchscreen gestures. In Proceedings of the 18th ACM International Conference on Multimodal Interaction (ICMI ’16). Association for Computing Machinery, New York, NY, USA, 333–340. DOI:https://doi.org/10.1145/2993148.2993179.

Read More

The cognitive development project, which is related to the Understanding Gestures project, is one of the projects I have been taking part in since joining the INIT Lab. As introduced in the previous post, the goal of the project is to investigate the relationship between children's cognitive development and their interactions with touchscreens, and to use this understanding to improve recognition rates for children's gestures; further details can be found in that post.

As the first step to familiarize myself with the project, I implemented the $P Point-Cloud Recognizer [1], which is the recognizer we use in the project to compute the recognition rates of children's gestures. Based on the results, we plan to examine the relationship between children's scores on the cognitive and motor assessments and the accuracy of their interactions in touchscreen tasks. Implementing the $P recognizer [1] introduced me to the field of touchscreen gesture recognition. I learned how the recognizer represents gestures as unordered point clouds instead of ordered sequences of points (i.e., strokes), and how it matches those point clouds. I also learned two different ways of evaluating a gesture recognizer: the user-dependent scenario and the user-independent scenario. In the user-dependent scenario, recognition rates are computed individually for each user, which means all training samples are taken from a single writer's gestures and the test sample is taken from that same writer. We conduct user-dependent tests to examine the performance of a recognizer when trained and tested on a specific user (best-case accuracy). In the user-independent scenario, which represents the case where training data for new users are unavailable, gestures from a number of writers are used for training while a single additional writer is held out for testing. We conduct user-independent tests to examine how well a recognizer generalizes to users whose gestures are not included in the training set (a more realistic, off-the-shelf accuracy).
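As a simplified sketch of these two protocols, assuming a hypothetical recognize(templates, candidate_points) function that returns a predicted label and a dataset grouped by writer, the two evaluation loops might look like this (the published $-family experiments also vary the number of training templates per class and repeat the random sampling many times):

```python
import random

def user_dependent_accuracy(gestures_by_writer, recognize, trials=100):
    """All templates and the test sample come from one writer (best-case accuracy)."""
    correct = total = 0
    for samples in gestures_by_writer.values():        # samples: list of (label, points)
        for _ in range(trials):
            test_label, test_points = random.choice(samples)
            templates = [s for s in samples if s[1] is not test_points]
            correct += recognize(templates, test_points) == test_label
            total += 1
    return correct / total

def user_independent_accuracy(gestures_by_writer, recognize, trials=100):
    """Templates come from other writers; the test writer is held out (off-the-shelf accuracy)."""
    correct = total = 0
    writers = list(gestures_by_writer)
    for _ in range(trials):
        held_out = random.choice(writers)
        templates = [s for w in writers if w != held_out for s in gestures_by_writer[w]]
        test_label, test_points = random.choice(gestures_by_writer[held_out])
        correct += recognize(templates, test_points) == test_label
        total += 1
    return correct / total
```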

The $P recognizer is a member of the larger $-family of recognizers, which includes the $1 [2], $N [3], and $Q [4] recognizers. These are low-cost recognizers that are easy to understand and implement, designed for rapid prototyping. They achieve 98-99% recognition rates on adults' gestures, but as low as 64% on 5-year-old children's gestures. Previous work in the project has focused on extending these algorithms to achieve better recognition rates for children's gestures. In my future work, I hope to investigate deep learning-based approaches for recognizing children's gestures.

While working on the project, I have been able to observe the process of running user studies with younger children. Compared with the user research I did in my User Experience Design class, where I only ran user studies with adults, I have seen that it is more challenging to run user studies with children because they tend to get distracted, lose motivation, and forget the instructions more often. Showing them the small prizes they will receive after completing the tasks makes them more engaged, and continuing to encourage them throughout the process also helps. I recently started assisting in person with my teammates, Alex Shaw and Ziyang Chen, which has definitely deepened my knowledge of conducting user studies in a challenging context.

I am a first-year Ph.D. student in Computer Science at the University of Florida. Prior to joining the INIT Lab, I spent two years in Taiwan's IC design industry working on a deep learning-based human detector. Working on the $P recognizer has introduced me to the field of touchscreen gesture recognition, and being part of the Understanding Gestures project and the HCC community has increased my knowledge of how to properly conduct research studies in Human-Centered Computing. This previous experience also inspires me to integrate deep learning algorithms into gesture recognizers designed especially for children, which will be one of my next steps.

 

REFERENCES

  1. R.-D. Vatavu, L. Anthony, and J. O. Wobbrock, “Gestures as point clouds: A $P recognizer for user interface prototypes,” Proc. ACM Int. Conf. Multimodal Interaction (ICMI ’12), pp. 273–280, 2012.
  2. J. O. Wobbrock, A. D. Wilson, and Y. Li, “Gestures without libraries, toolkits or training: A $1 recognizer for user interface prototypes,” Proc. Annu. ACM Symp. User Interface Software and Technology (UIST ’07), pp. 159–168, 2007.
  3. L. Anthony and J. O. Wobbrock, “A lightweight multistroke recognizer for user interface prototypes,” Proc. Graphics Interface (GI ’10), pp. 245–252, 2010.
  4. R.-D. Vatavu, L. Anthony, and J. O. Wobbrock, “$Q: A super-quick, articulation-invariant stroke-gesture recognizer for low-resource devices,” Proc. ACM Int. Conf. Human-Computer Interaction with Mobile Devices and Services (MobileHCI ’18), 2018.
Read More

Over the course of the last few weeks, I had my first experience running a user study with younger children, specifically children ages 6 to 7 from the PK Yonge Blue Wave After School program. My PhD mentor, Alex Shaw, and I went through a week of recruitment and a week of running the study at the PK Yonge facility. To me, the recruiting process was quite interesting. At first, I wasn't exactly sure how to approach potential participants' parents and give a concise introduction to our study. Knowing that I had to spark parents' interest in our study without being overly aggressive, I observed how my mentor, Alex, carried out the recruiting process and adapted his techniques. I felt that highlighting the potential benefits, emphasizing the low-risk nature of the research study, and pointing out the timeliness of our study were the three factors that most effectively encouraged parents to allow their children to participate. Another thing I learned is that recruiting is a lengthy process. I was a little disappointed at first to receive only a few responses, but being patient and keeping a positive attitude during recruitment eventually gave parents enough time to return the consent forms.

On the other hand, running the study was quite challenging for me at first. Prior to the actual user study, Alex and I ran two pilot studies with other members of the INIT Lab, but the real deal was a little different. I was nervous during the first session, mumbled my way through the script, and made a mistake on data entry: I stopped the timer before the participant finished and submitted the wrong time. Luckily, we had the session recorded, and I was able to go through the timestamps and recover the data manually. I became much more comfortable, and the study process was much smoother in the later sessions.

A few things I learned from this study: since we are running the study with younger children, it is important to present the instructions clearly and make sure the participants fully understand them. I realized that during the fine motor skills assessment from the NIH Toolbox®, participants tended to use both hands or the wrong hand even though the instructions said not to, so I made sure to emphasize those parts of the instructions to keep the data as accurate as possible. I also learned that younger children get bored easily: since the study lasts around 30 minutes and involves many repetitive tasks, there are certainly times when fatigue comes into play and children want to quit. To avoid those situations as much as possible, I found that spending a few minutes while walking the children to our study room asking how their day is going, asking a few questions to keep the conversation going, and showing them the small prizes we will give out keeps them engaged and excited to participate. Encouraging breaks between study tasks and keeping a friendly atmosphere during the study also help.

Overall, I felt the recruiting and study-running process was challenging at first, but it became much easier after the first few sessions, and I actually enjoyed it. Looking at the data we've collected also gives me a sense of accomplishment. Our next step is to analyze the data to answer our study's research questions, which we will be able to talk about soon. We are also planning to conduct study sessions with younger children at the Baby Gator daycare facility. I'm excited and looking forward to the process.

Read More

Since I joined the INIT lab, I have been working on preparing a study related to the Understanding Gestures project. The goal of the project is to examine the relationship between previous findings about children’s touch and gesture interactions and their cognitive development. Our lab’s previous work has shown that children’s gestures are not recognized as accurately as adults’ gestures and that there are significant differences in articulation features related to gesture production time and geometry. We have received inquiries from readers of our prior publications regarding the cognitive development of the children we collected data from, which led us to pursue this project on understanding how children’s cognitive development is related to the way they interact with touchscreen devices. We believe having this new information will help us gain a more comprehensive understanding of children’s touchscreen interactions.

Cognitive development is a field of study in neuroscience and psychology focusing on children's development in terms of information processing, problem solving, and decision making [1]. In our Understanding Gestures project, we are mainly concerned with children's fine motor skills and executive function, both of which vary across the early ages of childhood and between genders. Fine motor skill measures the coordination of small muscles such as those in the fingers and hands [2]. Executive function measures the ability to focus attention and execute tasks [3]. We plan to measure these two aspects using the NIH Toolbox®, a “comprehensive set of neuro-behavioral measurements that quickly assesses cognitive, emotional, sensory, and motor functions” [4]. The creators of the app, the National Institutes of Health, maintain a representative database for comparing children's performance on the tasks based on their demographic information (e.g., age, gender, etc.). We are excited to be collaborating on this project with Dr. Pavlo Antonenko from the College of Education. We are looking forward to drawing connections between children's touchscreen interactions and their cognitive development in this study.

I am a third-year undergraduate student majoring in Computer Science, and this is my first full semester in the INIT Lab. The process of preparing a study has been challenging but very interesting. I have always wanted to learn how to run a study and have been curious about the work that goes into a research paper. As we prepare for the study, I have performed in-depth independent research on potential topics of exploration regarding children's cognitive development. I have gained a great sense of accomplishment from playing a role in building the study from scratch, and I am looking forward to continuing my work on it.

 

REFERENCES

1. Schacter, Daniel L. (2009). Psychology. Catherine Woods. p. 429. ISBN 978-1-4292-3719-2.

2. Ali, Ajmol & Pigou, Deborah & Clarke, Linda & McLachlan, Claire. (2017). Review on Motor Skill and Physical Activity in Preschool Children in New Zealand. Advances in Physical Education. 7. 10-26. 10.4236/ape.2017.71002.

3. Team, Understood. “Understanding Executive Functioning Issues.” Understood.org, www.understood.org/en/learning-attention-issues/child-learning-disabilities/executive-functioning-issues/understanding-executive-functioning-issues.

4. Weintraub, Sandra et al. “Cognition assessment using the NIH Toolbox.” Neurology vol. 80,11 Suppl 3 (2013): S54-64. doi:10.1212/WNL.0b013e3182872ded

Read More

Over the past months, I have continued my work on the Understanding Gestures project by developing a set of new articulation features based on how children make touchscreen gestures. Our prior work has shown that children's gestures are not recognized as well as adults' gestures, which led us to investigate further how children's gestures differ from those of adults. In one of our studies, we computed the values of 22 existing articulation features to improve our understanding. An articulation feature is a quantitative measure of some aspect of the way the user creates the gesture. These features are generally either geometric (such as the total amount of ink used or the area of the bounding box surrounding the gesture) or temporal (such as the total time taken to produce the gesture or the average speed). In that paper, we showed there was a significant effect of age on the values of many of the features, illustrating differences between children's and adults' gestures.
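As a rough illustration (not the code used in the paper), a few of these geometric and temporal features could be computed as in the sketch below, assuming each sampled point carries a timestamp:

```python
import math

def bounding_box_area(gesture):
    """Area of the axis-aligned box enclosing every point of a (possibly multistroke) gesture."""
    xs = [x for stroke in gesture for (x, y, t) in stroke]
    ys = [y for stroke in gesture for (x, y, t) in stroke]
    return (max(xs) - min(xs)) * (max(ys) - min(ys))

def production_time(gesture):
    """Elapsed time from the first to the last sampled point (in the timestamps' units)."""
    ts = [t for stroke in gesture for (x, y, t) in stroke]
    return max(ts) - min(ts)

def average_speed(gesture):
    """Total ink (path length over all strokes) divided by production time."""
    ink = sum(math.dist(s[i - 1][:2], s[i][:2]) for s in gesture for i in range(1, len(s)))
    duration = production_time(gesture)
    return ink / duration if duration > 0 else 0.0
```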

Though we found many differences between children's and adults' gestures, I noticed several behaviors that were often present in children's gestures but were not captured by the features we had used. For example, children often do not connect the endpoints of their strokes as well as adults do, as shown in the following “Q” gesture produced by a 5-year-old in one of our studies:

I developed a list of several behaviors like this one that I wanted to capture as new articulation features. For this blog post, I’ll focus on the feature measuring the distance between endpoints of strokes that should be connected, which I’ll call joining error. Using the “D” gesture as an example, the value we would want to compute is the total distance indicated by the orange line below:


To compute this feature, my first idea was to develop an algorithm to detect which ends of strokes should be joined and then measure the distance between them. However, even though we know what the gestures should look like, creating an algorithm to measure this feature would be a difficult computer vision problem. I thought I could look at the distance between endpoints and assume that, if the distance between two of them was less than a threshold, they should be joined. However, this doesn't work in all cases: what if some endpoints are closer than the threshold but not supposed to be joined? Many of the features we wanted to compute posed similar challenges, making them difficult to design algorithms for.
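For illustration, the naive threshold heuristic described above might look like the following sketch (the threshold value is made up); its weakness is visible right in the last line: any two endpoints that happen to lie within the threshold get paired, whether or not they were meant to be joined.

```python
import math
from itertools import combinations

def naive_joining_pairs(gesture, threshold=15.0):
    """Pair up stroke endpoints that lie within `threshold` pixels of each other.
    The flaw: proximity alone does not mean two endpoints were meant to be joined."""
    endpoints = []
    for stroke in gesture:                      # gesture: list of strokes of (x, y) points
        endpoints.append(stroke[0])
        endpoints.append(stroke[-1])
    return [(a, b) for a, b in combinations(endpoints, 2) if math.dist(a, b) < threshold]
```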

Despite this difficulty, I realized that I could easily look at any gesture myself, see the joining errors, and mark the distance I wanted to measure. Therefore, I decided to manually annotate all the gestures to calculate the new features. Because there were more than 20,000 gestures, I needed to develop a tool to help me complete the annotations in a timely manner.

I created a tool that plots all the gestures in a given set and allows me to click to mark the features I'm interested in. The program detects the screen size and determines how many gestures to display at once. Then I can click each pair of endpoints that should be joined to measure joining error, and my software automatically logs the distance between the points as well as information about the gesture. The following shows a mockup of the program I developed:


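As a rough illustration of the click-to-measure idea (a simplified sketch rather than the actual annotation tool), the core of such a program could be as small as plotting one gesture and logging the distance between each pair of clicked points using matplotlib's event handling:

```python
import math
import matplotlib.pyplot as plt

def annotate_joining_error(gesture, gesture_id):
    """Plot one gesture and log the distance between each pair of clicked points."""
    fig, ax = plt.subplots()
    for stroke in gesture:                      # gesture: list of strokes of (x, y) points
        xs, ys = zip(*stroke)
        ax.plot(xs, ys, marker="o", markersize=2)
    ax.set_title(f"Gesture {gesture_id}: click endpoint pairs to measure joining error")
    ax.invert_yaxis()                           # touchscreen coordinates: y grows downward
    clicks, log = [], []

    def on_click(event):
        if event.xdata is None:                 # ignore clicks outside the axes
            return
        clicks.append((event.xdata, event.ydata))
        if len(clicks) % 2 == 0:                # every second click completes a pair
            distance = math.dist(clicks[-2], clicks[-1])
            log.append((gesture_id, distance))
            print(f"joining error for gesture {gesture_id}: {distance:.1f}")

    fig.canvas.mpl_connect("button_press_event", on_click)
    plt.show()
    return log
```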
I was able to annotate five different features of over 20,000 gestures using my tool in a few weeks, whereas if I had manually examined each gesture individually, it would have probably taken several months. Furthermore, since I was visually inspecting each gesture, I had confidence that I was measuring exactly the quantity I wanted. Working on this project has helped me learn how important it can be to create tools for streamlining work requiring repeated manual intervention.

Read More

This summer, I have been working on a project related to the $-family of gesture recognizers. The $-family is a series of simple, fast, and accurate gesture recognizers designed to be accessible to novice programmers. $1 [1] was created by Wobbrock and colleagues, and INIT Lab director Lisa Anthony contributed to later algorithms, including $N [2] and $P [3]. My goal this summer was to implement my own versions of the $-family algorithms, and then to try them out on a new dataset that was collected from adults and children in a different context than previous datasets collected by the INIT lab.

The first step of my work on this project was to understand how the different algorithms of the $-family work. I examined the advantages and limitations of each recognizer by reading the related research papers and playing around with existing implementations. After studying the recognizers, I created my own implementations of $1 and $P in JavaScript as a web application. I faced several challenges when implementing these algorithms. My first challenge was to decide in what form the gestures to be recognized should be taken as input: predefined point arrays, or a canvas where user-defined gestures can be drawn. Using this $1 implementation as a reference, I normalized each of the gestures, computed the distance between gestures, and performed recognition of user-defined gestures drawn on a canvas. While implementing the algorithms, I followed a step-by-step approach so that I could verify that each function was working before moving on to recognition. In the process, I learned how much more efficiently a debugger can pinpoint errors in my code than trying to find the problems manually.
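Setting aside the rotation search that the full $1 recognizer performs, the core recognition step reduces to a nearest-template comparison. A minimal sketch (in Python, for consistency with the later sections), assuming the gestures have already been resampled and normalized to the same number of points, could look like this:

```python
import math

def path_distance(points_a, points_b):
    """Average distance between corresponding points of two gestures that have
    already been resampled and normalized to the same number of points."""
    return sum(math.dist(a, b) for a, b in zip(points_a, points_b)) / len(points_a)

def classify(templates, candidate):
    """Return the label of the nearest template; templates is a list of (label, points)."""
    return min(templates, key=lambda t: path_distance(t[1], candidate))[0]
```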

After completing the web applications, my next task was to recognize gestures from a dataset provided as XML files. I created another implementation of the $1 recognizer in Python to learn and explore another programming language. I was initially unsure how to read in the gesture data from the XML files, so I had to learn how to parse them. I used the pseudocode presented in the original $1 paper [1] as a guide to implement the algorithm. Resampling the points of each gesture before recognition was challenging, because every gesture needs to have the same number of resampled points. To solve the issues I encountered while preprocessing the gestures, I plotted them using Python's matplotlib library. Not only did visualizing the gestures help in that context, it also helped me understand why some gestures were wrongly recognized: they looked more like other gesture classes than the ones they were meant to be. Solving these errors and getting a correct implementation gave me a great sense of achievement. After implementing the recognition algorithms, I learned how to run user-independent recognition experiments in which I systematically varied the number of participants included in the training set, and then ran those experiments to measure the accuracy of the algorithms I implemented. Now, I am working on analyzing articulation features [4, 5] of a new set of gestures to help quantitatively investigate the differences between adults' and children's gestures in a new context.
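For illustration, a minimal version of the loading and resampling steps might look like the sketch below; the XML element and attribute names are assumptions about the log format, and the resampling follows the pseudocode in the $1 paper [1]:

```python
import math
import xml.etree.ElementTree as ET

def load_gesture(xml_path):
    """Read a gesture logged as <Point X="..." Y="..."/> elements (format assumed)."""
    root = ET.parse(xml_path).getroot()
    return [(float(p.get("X")), float(p.get("Y"))) for p in root.iter("Point")]

def resample(points, n=64):
    """Resample a stroke to n points spaced evenly along its path ($1-style)."""
    points = list(points)
    total = sum(math.dist(points[i - 1], points[i]) for i in range(1, len(points)))
    interval = total / (n - 1)
    new_points, accumulated = [points[0]], 0.0
    i = 1
    while i < len(points):
        d = math.dist(points[i - 1], points[i])
        if d > 0 and accumulated + d >= interval:
            t = (interval - accumulated) / d
            q = (points[i - 1][0] + t * (points[i][0] - points[i - 1][0]),
                 points[i - 1][1] + t * (points[i][1] - points[i - 1][1]))
            new_points.append(q)
            points.insert(i, q)        # q becomes the new "previous" point
            accumulated = 0.0
        else:
            accumulated += d
        i += 1
    if len(new_points) == n - 1:       # rounding can leave the list one point short
        new_points.append(points[-1])
    return new_points
```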

I am a final-year undergraduate computer science student from MIT, Pune, India, working with the INIT Lab as an REU student this summer as part of the UF CISE IMHCI REU program. I have greatly enjoyed my time working in the INIT Lab. One thing I have really enjoyed while I've been here is related to another project I worked on: interacting with an ocean temperature application on the PufferSphere, a large interactive spherical display. Through my experience in the INIT Lab, I have been able to closely follow the different stages of the research process. I've added to my technical knowledge through an improved understanding of gesture recognizers, and I've also learned the importance of being clear and concise in scientific writing. I am looking forward to continuing my work on this project and finding new ways to improve children's gesture interaction experiences.

References:

[1] Wobbrock, Jacob O., Andrew D. Wilson, and Yang Li. “Gestures without libraries, toolkits or training: a $1 recognizer for user interface prototypes.” Proceedings of the 20th annual ACM symposium on User interface software and technology. ACM, 2007.

[2] Anthony, Lisa, and Jacob O. Wobbrock. “A lightweight multistroke recognizer for user interface prototypes.” Proceedings of Graphics Interface 2010. Canadian Information Processing Society, 2010.

[3] Vatavu, Radu-Daniel, Lisa Anthony, and Jacob O. Wobbrock. “Gestures as point clouds: a $P recognizer for user interface prototypes.” Proceedings of the 14th ACM international conference on Multimodal interaction. ACM, 2012.

[4] Anthony, Lisa, Radu-Daniel Vatavu, and Jacob O. Wobbrock. “Understanding the consistency of users’ pen and finger stroke gesture articulation.” Proceedings of Graphics Interface 2013. Canadian Information Processing Society, 2013.

[5] Shaw, Alex, and Lisa Anthony. “Analyzing the articulation features of children’s touchscreen gestures.” Proceedings of the 18th ACM International Conference on Multimodal Interaction. ACM, 2016.

Read More

In our last post, we shared that we had a paper accepted to the ACM International Conference on Multimodal Interaction (ICMI) 2017, to be held in Glasgow, Scotland, UK. The paper was titled “Comparing Human and Machine Recognition of Children’s Touchscreen Gestures.” We just came back from the conference and are proud to announce that Alex Shaw, the INIT Lab PhD student who first-authored the paper, won Best Student Paper at the conference! Alex is co-advised by Dr. Lisa Anthony from the INIT lab and UF CISE professor Dr. Jaime Ruiz.

Congratulations, Alex!

Read More

In a previous post, we discussed our ongoing work on studying children's gestures. To get a better idea of the target accuracy for continuing work in gesture recognition, we ran a study comparing humans' ability to recognize children's gestures to machine recognition. Our paper, “Comparing Human and Machine Recognition of Children's Touchscreen Gestures,” quantifies how well children's gestures were recognized by human viewers and by an automated recognition algorithm. The paper was authored by our project team: me (Alex Shaw), Dr. Jaime Ruiz, and Dr. Lisa Anthony. The abstract of the paper is as follows:

Children’s touchscreen stroke gestures are poorly recognized by existing recognition algorithms, especially compared to adults’ gestures. It seems clear that improved recognition is necessary, but how much is realistic? Human recognition rates may be a good starting point, but no prior work exists establishing an empirical threshold for a target accuracy in recognizing children’s gestures based on human recognition. To this end, we present a crowdsourcing study in which naïve adult viewers recruited via Amazon Mechanical Turk were asked to classify gestures produced by 5- to 10-year-old children. We found a significant difference between human (90.60%) and machine (84.14%) recognition accuracy, over all ages. We also found significant differences between human and machine recognition of gestures of different types: humans perform much better than machines do on letters and numbers versus symbols and shapes. We provide an empirical measure of the accuracy that future machine recognition should aim for, as well as a guide for which categories of gestures have the most room for improvement in automated recognition. Our findings will inform future work on recognition of children’s gestures and improving applications for children.

The camera-ready version of the paper is available here. We will present the paper at the upcoming ACM International Conference on Multimodal Interaction in Glasgow, Scotland. We will post our presentation slides after the conference. See more information at our project website.

Read More