For five years, the INIT Lab (and our past and present collaborators!) was engaged in an NSF-funded research project studying the physical dimensions of children’s touchscreen interaction, e.g., what happens when they try to acquire onscreen targets or make onscreen gestures. The project, called “Mobile Touch and Gesture Interaction for Children,” or “MTAGIC” (“magic”) for short, ended in August 2017. As PI of the project, I have recently published a retrospective article that synthesizes our findings across the six studies we ran for this project and identifies elements that were consistent or varied across contexts. The article is now available online at the International Journal of Human-Computer Studies (IJHCS). The full article title is “Physical Dimensions of Children’s Touchscreen Interactions: Lessons from Five Years of Study on the MTAGIC Project.” One part I am particularly keen to see addressed in work extending ours is the set of three open areas of research I identify in Section 5: (1) children’s interaction with emerging technologies like bendable displays and spherical displays; (2) support for children with disabilities; and (3) children’s interactions in multiple simultaneous modalities, like speech and gesture together. Here is the abstract of the paper:
Touchscreen interaction is nearly ubiquitous in today’s computing environments. Children have always been a special population of users for new interaction technology: significantly different from adults in their needs, expectations, and abilities, but rarely tailored to in new contexts and on new platforms. Studies of children’s touchscreen interaction have been conducted that focus on individual variables that may affect the interaction, but as yet no synthesis of studies replicating similar methodologies in different contexts has been presented. This paper reports the results across five years of focused study in one project aiming to characterize the differences between children’s and adults’ physical touchscreen interaction behaviors. Six studies were conducted with over 180 people (116 children) to understand how children touch targets and make onscreen gestures. A set of design recommendations that summarizes the findings across the six studies is presented for reference. This paper makes the entire set available for reference in one place and highlights where the findings are generalizable across platforms. These recommendations can inform the design of future touchscreen interfaces for children based on their physical capabilities. Also, this paper outlines the future challenges and open questions that remain for understanding child-computer interaction on touchscreens.
Download the preprint here, or check out the journal’s definitive version. For those interested in this space, the cumulative set of 24 design recommendations from the five years of the MTAGIC project is also available for download here.
We are proud to share that our lab has had a paper accepted to the upcoming 2018 ACM Conference on Computer-Supported Cooperative Work & Social Computing (CSCW 2018)! This paper presents an analysis of children interacting around a large touchscreen tabletop display, in particular examining some previously proposed design recommendations for how to support effective collaboration in this context. INIT Lab former student Julia Woodward conducted this study with her co-authors while she was an undergraduate research assistant in our lab. She is now a Human-Centered Computing (HCC) PhD student working with Dr. Jaime Ruiz in the Ruiz HCI Lab.
Here is the abstract:
Prior work has shown that children exhibit negative collaborative behaviors, such as blocking others’ access to objects, when collaborating on interactive tabletop computers. We implemented previous design recommendations, namely separate physical territories and activity roles, which had been recommended to decrease these negative collaborative behaviors. We developed a multi-touch “I-Spy” picture searching application with separate territory partitions and activity roles. We conducted a deep qualitative analysis of how six pairs of children, ages 6 to 10, interacted with the application. Our analysis revealed that the collaboration styles differed for each pair, both in regards to the interaction with the task and with each other. Several pairs exhibited negative physical and verbal collaborative behaviors, such as nudging each other out of the way. Based on our analysis, we suggest that it is important for a collaborative task to offer equal opportunities for interaction, but it may not be necessary to strive for complete equity of collaboration. We examine the applicability of prior design guidelines and suggest open questions for future research to inform the design of tabletop applications to support collaboration for children.
You can download the camera-ready PDF here. Julia will be presenting the paper in November in Jersey City, NJ.
And now for something a little different! The INIT Lab has long been conducting research on how children’s physical capabilities (e.g., motor skill development) affect their interactions with touchscreen devices like iPads and smartphones. Other researchers, like Alexis Hiniker and her former advisor Julie A. Kientz, both at the University of Washington in the DUB group, have been examining how children’s cognitive development impacts those interactions. We teamed up to write an article for UXPA Magazine (the magazine of the User Experience Professionals Association) to help get our research findings into the hands of practitioners! We are excited to announce that the article is now live on the UXPA site.
The abstract (first paragraph) is here:
Practicing designers can tell you that designing mobile touchscreen apps for children is different than for adult users. But what does science tell us about what interface differences are critical to remember? We are engaged in the science of child-computer interaction. Our empirical research has focused on capturing how the cognitive and physical traits of young children under age 10 affect the success of their interactions with touchscreen interfaces. We, and others, have produced research-driven design recommendations to consider. We share here our top seven guidelines for designing for children under age 10 and the evidence that led to them.
The article link can be found here. We are also releasing a two-page supplemental bibliography with this article as a separate download. The UXPA article format is magazine-style and does not allow references, but we wanted to make sure that interested readers could find the exciting research we synthesized for this article. Check it out here. If you’re a practitioner and you’ve found this article useful in your work designing technology for kids, we’d love to hear from you! Contact me.
We are pleased to share that our paper “Using Co-Design to Examine How Children Conceptualize Intelligent Interfaces” has been accepted to the upcoming ACM SIGCHI 2018 conference, to be held in April in Montreal, Canada! The first author is our own former undergraduate star, Julia Woodward, who is now a PhD student in Human-Centered Computing (HCC) working with Dr. Jaime Ruiz at UF. This paper was a collaboration with the University of Washington “Kidsteam” and its director, Dr. Jason C. Yip.
Here is the abstract:
Prior work has shown that intelligent user interfaces (IUIs) that use modalities such as speech, gesture, and writing pose challenges for children due to their developing cognitive and motor skills. Research has focused on improving recognition and accuracy by accommodating children’s specific interaction behaviors. Understanding children’s expectations of IUIs is also important to decrease the impact of recognition errors that occur. To understand children’s conceptual model of IUIs, we completed four consecutive participatory design sessions on designing IUIs with an emphasis on error detection and correction. We found that, while children think of interactive systems in terms of both user input and behavior and system output and behavior, they also propose ideas that require advanced system intelligence, e.g., context and conversation. Our work contributes new understanding of how children conceptualize IUIs and new methods for error detection and correction, and will inform the design of future IUIs for children to improve their experience.
Interested readers can find the camera-ready version of the paper (preprint) available here. See you in Montreal!
In our previous post on this project, we discussed getting design input from children for designing intelligent interfaces such as speech, gesture, and touch. We are collaborating with Jason Yip from the University of Washington on this project. Jason is the director of KidsTeam UW, where he co-designs new technologies with children and families.
Last year, we had trouble recruiting children at the University of Florida because our initial study design required a six-week commitment, and recruitment was made more difficult because UF does not have an already established co-design program. We therefore decided to run four co-design sessions with KidsTeam UW in Seattle, Washington, over a two-week period. Because the children at KidsTeam already have experience with the different design techniques and with collaborating with one another, we did not need the full six weeks, nor did we have to plan any sessions to teach the children the design techniques.
I traveled to Seattle for a couple of weeks in early March to help run the sessions with Jason Yip and his KidsTeam UW project team. A total of seven children (ages 7 to 12) from KidsTeam UW participated in the co-design sessions. The sessions went great! The kids had a lot of fun designing with different Participatory Design techniques: Bags of Stuff, Big Paper, Likes Dislikes and Design Ideas, and Scenario-Based Design [1,2]. For Likes Dislikes and Design Ideas, the kids had a blast playing with speech interfaces like the Amazon Echo, Apple Siri, Microsoft Cortana, and Google Assistant. One kid even accidentally ordered Pokémon cards on the Amazon Echo! Now that we have completed the co-design sessions, we will start looking through the data and qualitatively coding the videos for themes that will help us understand how to design intelligent interfaces for kids.
This was a great experience for me; it was a lot of fun designing with the children! Spending time with KidsTeam UW also gave me more experience in running co-design studies and qualitative analysis, which is valuable since I am graduating this summer and going on to pursue my Ph.D. in Human-Centered Computing. I am looking forward to going through the data and seeing what we find.
1. Greg Walsh, Elizabeth Foss, Jason Yip, and Allison Druin. 2013. FACIT PD: a framework for analysis and creation of intergenerational techniques for participatory design. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’13). ACM, New York, NY, USA, 2893-2902.
2. Greg Walsh, Allison Druin, Mona Leigh Guha, Elizabeth Foss, Evan Golub, Leshell Hatley, Elizabeth Bonsignore, and Sonia Franckel. 2010. Layered elaboration: a new technique for co-design with children. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’10). ACM, New York, NY, USA, 1237-1240.
In a previous post on the MTAGIC project, we presented results of a study finding that interface complexity (a simple, abstract interface vs. a complex interface) affected children’s performance of some touch interactions but did not affect gesture interactions on smartphone devices. Recently, we have been extending this project to identify differences in how children perform target-touching and gesture-drawing tasks when interacting with different input devices. So far, we have conducted a user study and have been analyzing and documenting our findings. We are particularly interested in how similar or different our results are to the previous results from the MTAGIC project. We are currently writing a paper that provides detailed results and discussion of our findings.
This project started as a class project for a research methods class. Working on it has been an informative and challenging learning experience. It broadened my understanding of topics taught in the class, such as descriptive statistics and experimental designs, and it has enhanced my skills in performing statistical analysis on data from different types of experimental designs. A major challenge I faced while analyzing the data was figuring out the right ANOVA syntax in R for our mixed repeated-measures experimental design; working through problems like this has pushed me to challenge myself. I plan to apply the knowledge gained to statistical analysis in future research projects.
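For readers facing the same analysis hurdle: a mixed design has a between-subjects factor (e.g., age group) and a within-subjects factor (e.g., input device), and before running the ANOVA the trial-level data must be collapsed to one score per participant per within-subjects condition. The sketch below shows that aggregation step in plain Python; the data, factor names, and column layout are entirely hypothetical, not our study’s actual variables.

```python
from statistics import mean

# Hypothetical long-format trial data: (participant, age_group, input_device, time_ms).
# age_group is between-subjects; input_device is within-subjects.
trials = [
    ("p1", "child", "phone", 820), ("p1", "child", "tablet", 760),
    ("p2", "child", "phone", 910), ("p2", "child", "tablet", 880),
    ("p3", "adult", "phone", 540), ("p3", "adult", "tablet", 500),
    ("p4", "adult", "phone", 600), ("p4", "adult", "tablet", 560),
]

# Collapse to one mean per participant per within-subjects condition --
# the unit of analysis a repeated-measures ANOVA expects.
cells = {}
for pid, group, device, t in trials:
    cells.setdefault((pid, group, device), []).append(t)
per_participant = {k: mean(v) for k, v in cells.items()}

def cell_mean(group, device):
    """Mean of the per-participant scores in one group x device cell."""
    vals = [m for (pid, g, d), m in per_participant.items()
            if g == group and d == device]
    return mean(vals)

print(cell_mean("child", "phone"))
print(cell_mean("adult", "tablet"))
```

In R itself, the part that usually trips people up is the within-subjects error term; for a design like this, a formula of the general shape `aov(time ~ group * device + Error(participant/device))` is the usual pattern (again with hypothetical variable names).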
We are currently extending our previous research [1,2,3] on children’s touch and gesture interaction patterns to interactive tabletop computers, as well as examining collaboration between children on the multi-touch tabletop. We are looking at how to scaffold positive collaboration on the tabletop for children ages 5 to 10.
To research collaboration, we created I Spy games for the children to play, built using CSS and the Creative Markup Language (CML), an XML-based open standard for defining interactions within a multi-user, multi-touch environment such as an interactive tabletop. We are currently recruiting and running studies!
I am currently leading this project, and it has been an immense learning experience in time and project management! Recruiting and running studies has been a challenge because each session needs two children in a similar age group to participate at the same time. However, this has led us to look at alternate ways of recruiting, which will benefit us in other studies.
Stay tuned for our results!
1. Woodward, J., Shaw, A., Luc, A., Craig, B., Das, J., Hall Jr., P., Holla, A., Irwin, G., Sikich, D., Brown, Q., Anthony, L. 2016. Characterizing How Interface Complexity Affects Children’s Touchscreen Interactions. ACM Conference on Human Factors in Computing Systems (CHI’2016), San Jose, CA, 7 May 2016, p.1921-1933. [Pdf]
2. Shaw, A. and Anthony, L. 2016. Toward a Systematic Understanding of Children’s Touchscreen Gestures. Extended Abstracts of the ACM Conference on Human Factors in Computing Systems (CHI’2016), San Jose, CA, 7 May 2016, p.1752-1759. [Pdf and Poster]
3. Anthony, L., Brown, Q., Nias, J., Tate, B., and Mohan, S. 2012. Interaction and Recognition Challenges in Interpreting Children’s Touch and Gesture Input on Mobile Devices. Proceedings of the ACM International Conference on Interactive Tabletops and Surfaces (ITS’2012), Cambridge, MA, 14 Nov 2012, p.225-234. [Pdf]
As seen in our previous research [1, 2, 3], recognition of children’s gesture input is not as accurate as it is for adults, and children have more difficulty with touch interactions. These findings show that intelligent user interfaces such as touch, gesture, and speech can pose challenges for children because the system is not always able to understand what the children meant to do. We are exploring the idea of getting design input from children for designing intelligent interfaces to help overcome these challenges.
We will be getting direct input, ideas, and designs from children by using Cooperative Inquiry. Cooperative Inquiry (or co-design) is a framework of Participatory Design [4,6] – a method in which the users are a part of the design process – specifically intended for designing children’s technology with children as partners [8]. Cooperative Inquiry consists of adults and children working together as design partners on technology design. Cooperative Inquiry was defined by Allison Druin [5], who pioneered co-designing with children with KidsTeam at the University of Maryland. We will use different Participatory Design techniques such as low-tech prototyping with Bags of Stuff and Layered Elaboration [6,7]. Our current plan is for the children to take part in the co-design sessions over a six-week period, and we have been creating and iterating over our plans for the design activities for each session.
Our overall goal is to design technology that is tailored towards children to optimize their interactions, and to gain knowledge on how children conceptualize and interact with intelligent interfaces.
This project has been a learning experience for me because it is the first project I have designed. It has been a long process from taking the idea of co-designing intelligent interfaces with children to developing a full plan for each session. Recruiting for the study has also been a challenge because it requires a six-week commitment, but it has helped me better understand the entire process of running a project. I am excited to run this project, and designing it has made me want to pursue research and a graduate degree.
1. Julia Woodward, Alex Shaw, Annie Luc, Brittany Craig, Juthika Das, Phillip Hall, Jr., Akshay Holla, Germaine Irwin, Danielle Sikich, Quincy Brown, and Lisa Anthony. 2016. Characterizing How Interface Complexity Affects Children’s Touchscreen Interactions. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (CHI ’16). ACM, New York, NY, USA, 1921-1933.
2. Alex Shaw and Lisa Anthony. 2016. Toward a Systematic Understanding of Children’s Touchscreen Gestures. In Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems (CHI EA ’16). ACM, New York, NY, USA, 1752-1759.
3. Lisa Anthony, Quincy Brown, Jaye Nias, Berthel Tate, and Shreya Mohan. 2012. Interaction and recognition challenges in interpreting children’s touch and gesture input on mobile devices. In Proceedings of the 2012 ACM international conference on Interactive tabletops and surfaces (ITS ’12). ACM, New York, NY, USA, 225-234.
4. Greenbaum, J. (1993). A design of one’s own: Toward participatory design in the United States. In D. Schuler & A. Namioka (Eds.), Participatory design: Principles and practices (pp. 27-37). Hillsdale, NJ: Lawrence Erlbaum.
5. Allison Druin. 1999. Cooperative inquiry: developing new technologies for children with children. In Proceedings of the SIGCHI conference on Human Factors in Computing Systems (CHI ’99). ACM, New York, NY, USA, 592-599.
6. Greg Walsh, Elizabeth Foss, Jason Yip, and Allison Druin. 2013. FACIT PD: a framework for analysis and creation of intergenerational techniques for participatory design. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’13). ACM, New York, NY, USA, 2893-2902.
7. Greg Walsh, Allison Druin, Mona Leigh Guha, Elizabeth Foss, Evan Golub, Leshell Hatley, Elizabeth Bonsignore, and Sonia Franckel. 2010. Layered elaboration: a new technique for co-design with children. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’10). ACM, New York, NY, USA, 1237-1240.
8. Druin, A. (2002). The Role of Children in the Design of New Technology. Behaviour and Information Technology, 21(1), pp. 1–25.
In the last post, we mentioned that our paper on the MTAGIC study – “Characterizing How Interface Complexity Affects Children’s Touchscreen Interactions” – was accepted to CHI 2016, a top conference for Human-Computer Interaction! The paper focused on whether interface complexity had an effect on touch and gesture interactions for children and adults. We found that interface complexity affected some touch interactions primarily related to visual salience, such as response time and the number of holdovers, and that it did not affect gesture recognition. Our design recommendations are:
1) Provide salient visual feedback of accepted input to prevent holdovers.
2) Avoid small targets at screen edge, especially for complex interfaces.
3) Consider trade-off between visual saliency and response time.
4) Train gesture recognizers for younger children with more examples.
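To make recommendation 4 concrete: many touchscreen systems classify gestures with template matchers in the spirit of the $-family recognizers, where each stored example is a template and adding more templates per gesture class covers more of the variability in how young children draw the same gesture. The sketch below is a simplified, hypothetical illustration of that idea – not the recognizers evaluated in our studies – with made-up gesture data.

```python
import math

N = 32  # points per stroke after resampling

def resample(points, n=N):
    """Resample a stroke to n evenly spaced points along its path."""
    pts = [tuple(p) for p in points]
    path_len = sum(math.dist(pts[i - 1], pts[i]) for i in range(1, len(pts)))
    interval = path_len / (n - 1)
    new_pts = [pts[0]]
    d = 0.0
    i = 1
    while i < len(pts):
        seg = math.dist(pts[i - 1], pts[i])
        if seg > 0 and d + seg >= interval:
            t = (interval - d) / seg
            q = (pts[i - 1][0] + t * (pts[i][0] - pts[i - 1][0]),
                 pts[i - 1][1] + t * (pts[i][1] - pts[i - 1][1]))
            new_pts.append(q)
            pts.insert(i, q)  # resume measuring from the interpolated point
            d = 0.0
        else:
            d += seg
        i += 1
    while len(new_pts) < n:  # guard against floating-point shortfall
        new_pts.append(pts[-1])
    return new_pts[:n]

def normalize(points):
    """Translate the centroid to the origin and scale uniformly so the
    larger bounding-box side becomes 1 (preserves aspect ratio)."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    scale = max(max(xs) - min(xs), max(ys) - min(ys)) or 1.0
    cx, cy = sum(xs) / len(xs), sum(ys) / len(ys)
    return [((x - cx) / scale, (y - cy) / scale) for x, y in points]

def path_distance(a, b):
    """Average point-to-point distance between two equal-length strokes."""
    return sum(math.dist(p, q) for p, q in zip(a, b)) / len(a)

def recognize(stroke, templates):
    """templates maps each label to a LIST of example strokes; storing
    more examples per label is what recommendation 4 suggests."""
    probe = normalize(resample(stroke))
    best_label, best_dist = None, float("inf")
    for label, examples in templates.items():
        for example in examples:
            dist = path_distance(probe, normalize(resample(example)))
            if dist < best_dist:
                best_label, best_dist = label, dist
    return best_label

# Hypothetical templates: a straight line and a caret ("^") gesture.
templates = {
    "line":  [[(0, 0), (10, 0)]],
    "caret": [[(0, 0), (5, 10), (10, 0)]],
}
print(recognize([(0, 1), (10, 2)], templates))  # a sloppy near-horizontal stroke
```

Because classification picks the nearest template overall, adding a few wobbly child-drawn examples under each label directly widens what the recognizer accepts, without retraining anything.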
Our future work includes running the study on a larger touchscreen display, performing a more in-depth analysis of children’s gestures to support better recognition, and getting direct input from children in designing intelligent interfaces. Here is a link to our paper, our CHI Video Preview, and our presentation slides from CHI!
In the last post, we had submitted our paper on the MTAGIC study findings and were waiting to find out if it was accepted. Our paper, “Characterizing How Interface Complexity Affects Children’s Touchscreen Interactions”, was accepted to CHI 2016, a top conference for Human-Computer Interaction! The paper focused on whether interface complexity had an effect on touch and gesture interactions. Here is the abstract:
Most touchscreen devices are not designed specifically with children in mind, and their interfaces often do not optimize interaction for children. Prior work on children and touchscreen interaction has found important patterns, but has only focused on simplified, isolated interactions, whereas most interfaces are more visually complex. We examine how interface complexity might impact children’s touchscreen interactions. We collected touch and gesture data from 30 adults and 30 children (ages 5 to 10) to look for similarities, differences, and effects of interface complexity. Interface complexity affected some touch interactions, primarily related to visual salience, and it did not affect gesture recognition. We also report general differences between children and adults. We provide design recommendations that support the design of touchscreen interfaces specifically tailored towards children of this age.
You can see the camera-ready version of the paper here and the CHI Video Preview for our paper here. The conference will be held in San Jose, California, and Alex Shaw and I will be presenting the paper there! We will post the talk slides when available.