Author: Lisa Anthony

The INIT Lab is happy to announce that PhD student (and full-time UF CISE lecturer) Jeremiah Blanchard’s work has been accepted for publication at the upcoming VL/HCC conference: the IEEE Symposium on Visual Languages & Human-Centric Computing. The conference will be held in Memphis, TN, in October. The paper presents findings on how middle school students learning computer programming in hybrid blocks-plus-text environments perceive their experience: does learning in hybrid environments help alleviate perceptions of inauthenticity while still making programming accessible to novice learners?

Here is the abstract:

Text languages are perceived by many computer science students as difficult, intimidating, and/or tedious in nature. Conversely, blocks-based environments are perceived as approachable, but many students see them as inauthentic. Bidirectional hybrid environments provide textual and blocks-based representations of the same code, thereby offering students the opportunity to seamlessly transition between representations to build a conceptual bridge between blocks and text. However, it is not known how use of hybrid environments impacts perceptions of programming. To investigate, we conducted a study in a public middle school with six classes (n=129). We found that students who used hybrid environments perceived text more positively than those who moved directly from blocks to text. The results of this research suggest that hybrid programming environments can help to transition students from blocks to text-based programming while minimizing negative perceptions of programming.

The camera-ready preprint of the paper is available here. If you’ll be at VL/HCC, come meet Jeremiah and see his presentation about our work!

PS: Jeremiah also successfully proposed his dissertation at the end of May!



At the recent Interaction Design & Children (IDC) 2019 conference, INIT Lab Director Dr. Lisa Anthony gave a crash course in quantitative research methods, and how to apply and adapt them to child-computer interaction. Topics covered included: the types of research questions that can be answered with quantitative methods, experiment design, data logging, data analysis, and simple statistical techniques, as well as important considerations for conducting quantitative work with young children, especially attentional issues that may affect data quality. The course notes and sample data used during the course are now available for download! If you find the course materials useful, feel free to let us know!


The INIT Lab has its first paper on usable privacy and security! In a collaboration with UF FICS (Florida Institute for Cybersecurity Research) faculty member Dr. Patrick Traynor, INIT Lab director Dr. Lisa Anthony contributed to a paper investigating the reasons that security measures at pay-at-the-pump gas station terminals fail. Lisa helped with the analysis and reporting of a large dataset that included four years of real-world skimmer reports at gas stations around Florida. The paper is titled, “Kiss from a Rogue: Evaluating Detectability of Pay-at-the-Pump Card Skimmers,” and here is the abstract:

Credit and debit cards enable financial transactions at unattended “pay-at-the-pump” gas station terminals across North America. Attackers discreetly open these pumps and install skimmers, which copy sensitive card data. While EMV (“chip-and-PIN”) has made substantial inroads in traditional retailers, such systems have virtually no deployment at pay-at-the-pump terminals due to dramatically higher costs and logistical/regulatory constraints, leaving consumers vulnerable in these contexts. In an effort to improve security, station owners have deployed security indicators such as low-cost tamper-evident seals, and technologists have developed skimmer detection apps for mobile phones. Not only do these solutions put the onus on consumers to notice and react to security concerns at the pump, but the efficacy of these solutions has not been measured. In this paper, we evaluate the indicators available to consumers to detect skimmers. We perform a comprehensive teardown of all known skimmer detection apps for iOS and Android devices, and then conduct a forensic analysis of real-world gas pump skimmer hardware recovered by multiple law enforcement agencies. Finally, we analyze anti-skimmer mechanisms deployed by pump owners/operators, and augment this investigation with an analysis of skimmer reports and accompanying security measures collected by the Florida Department of Agriculture and Consumer Services over four years, making this the most comprehensive long-term study of such devices. Our results show that common gas pump security indicators are not only ineffective at empowering consumers to detect tampering, but may be providing a false sense of security. Accordingly, stronger, reliable, inexpensive measures must be developed to protect consumers and merchants from fraud.

Dr. Traynor’s PhD student Nolen Scaife led the work and recently presented it at the IEEE 2019 Symposium on Security and Privacy (aka, “Oakland”). Nolen has just graduated from UF CISE and will be joining CU Boulder as a faculty member in the fall. Download the camera-ready version of our paper here.


For five years, the INIT Lab (and our past and present collaborators!) was engaged in an NSF-funded research project to study the physical dimensions of children’s touchscreen interaction, e.g., what happens when they try to acquire onscreen targets or make onscreen gestures. The project, called “Mobile Touch and Gesture Interaction for Children,” or “MTAGIC” (magic) for short, ended in August 2017. Recently, as PI of the project, I published a retrospective article that synthesizes our findings across the six studies we ran for this project and identifies elements that were consistent or varied across contexts. The article is now available online at the International Journal of Human-Computer Studies (IJHCS). The full article title is “Physical Dimensions of Children’s Touchscreen Interactions: Lessons from Five Years of Study on the MTAGIC Project.” I am particularly keen to see future work extend the three open areas of research that I identify in Section 5: (1) children’s interaction with emerging technologies like bendable displays and spherical displays; (2) support for children with disabilities; and (3) children’s interactions in multiple simultaneous modalities like speech and gesture together. Here is the abstract of the paper:

Touchscreen interaction is nearly ubiquitous in today’s computing environments. Children have always been a special population of users for new interaction technology: significantly different from adults in their needs, expectations, and abilities, but rarely tailored to in new contexts and on new platforms. Studies of children’s touchscreen interaction have been conducted that focus on individual variables that may affect the interaction, but as yet no synthesis of studies replicating similar methodologies in different contexts has been presented. This paper reports the results across five years of focused study in one project aiming to characterize the differences between children’s and adults’ physical touchscreen interaction behaviors. Six studies were conducted with over 180 people (116 children) to understand how children touch targets and make onscreen gestures. A set of design recommendations that summarizes the findings across the six studies is presented for reference. This paper makes the entire set available for reference in one place and highlights where the findings are generalizable across platforms. These recommendations can inform the design of future touchscreen interfaces for children based on their physical capabilities. Also, this paper outlines the future challenges and open questions that remain for understanding child-computer interaction on touchscreens.

Download the preprint here, or check out the journal’s definitive version. For those interested in this space, the cumulative set of 24 design recommendations from the five years of the MTAGIC project are available for download here.


As part of the ACM SIGCHI 2018 conference, INIT Lab director Lisa Anthony helped co-organize a ‘special interest group’ (SIG) session on child-computer interaction. This SIG is organized by members of the child-computer interaction research community every year. This year, the topic was “Ubiquity and Big Data”: how do we design technology for children in an era of “big data” in which their online activities from an extremely early age may be monitored, archived, evaluated, and judged? The issue is complex, since parents, schools, and other stakeholders may find beneficial reasons for monitoring and tracking their children’s activities, especially in cases of bullying, self-harm, or risky behaviors; but what are the lasting impacts of such technologies when the children grow up and already have a digital footprint not of their own making? How do we empower children to own their own online identities but still provide a safe space for growth and learning?

As a result of this SIG, many of the attendees of the event decided to write up a summary of the topics of discussion and submit it to the ACM interactions magazine. It has just recently appeared in the November-December 2018 issue, in the magazine’s forum on “Universal Interactions”. Check out the full article here (available in PDF or HTML format). The article presents the topics of discussion and some insights the SIG attendees came up with, especially the fact that education and transparency are critical values to keep in mind when pushing forward into this space. It is our hope that the article will launch further discussion and awareness of these topics among researchers, educators, designers, and parents.


Recently, we posted about a paper that Lisa co-authored with long-time collaborators, Radu-Daniel Vatavu and Jacob O. Wobbrock, that appeared at MobileHCI’2018. The paper presented some optimizations for our well-known $P gesture recognition algorithm to make it feasible to run on low-resource devices. The new algorithm is called $Q. For more on the paper, see our project page and online demo here, or the camera-ready version of the paper here.

In the meantime, MobileHCI’2018 was held in Barcelona, Spain, in September. During the conference, we learned that our paper received an Honorable Mention for Best Paper award! Check it out on the ACM Digital Library here.

Congratulations to both of our co-authors for a great paper and a great acknowledgment by the community!


We are proud to say that our lab has had a paper accepted to the upcoming ACM Conference on Computer-Supported Cooperative Work & Social Computing (CSCW) 2018! This paper presents an analysis of children interacting around a large touchscreen tabletop display, in particular examining some previously proposed design recommendations for how to support effective collaboration in this context. Former INIT Lab student Julia Woodward conducted this study with her co-authors while she was an undergraduate research assistant in our lab. She is now a Human-Centered Computing (HCC) PhD student working with Dr. Jaime Ruiz in the Ruiz HCI Lab.

Here is the abstract:

Prior work has shown that children exhibit negative collaborative behaviors, such as blocking others’ access to objects, when collaborating on interactive tabletop computers. We implemented previous design recommendations, namely separate physical territories and activity roles, which had been recommended to decrease these negative collaborative behaviors. We developed a multi-touch “I-Spy” picture searching application with separate territory partitions and activity roles. We conducted a deep qualitative analysis of how six pairs of children, ages 6 to 10, interacted with the application. Our analysis revealed that the collaboration styles differed for each pair, both in regards to the interaction with the task and with each other. Several pairs exhibited negative physical and verbal collaborative behaviors, such as nudging each other out of the way. Based on our analysis, we suggest that it is important for a collaborative task to offer equal opportunities for interaction, but it may not be necessary to strive for complete equity of collaboration. We examine the applicability of prior design guidelines and suggest open questions for future research to inform the design of tabletop applications to support collaboration for children.

You can download the camera-ready PDF here. Julia will be presenting the paper in November in Jersey City, NJ.


And now for something a little different! The INIT Lab has long been conducting research on how children’s physical capabilities (e.g., motor skills development) affect their interactions with touchscreen devices like iPads and smartphones. Other researchers, like Alexis Hiniker and her former advisor Julie A. Kientz, both at the University of Washington in the DUB group, have been examining how children’s cognitive development impacts those interactions. We teamed up to write a magazine article for the UXPA Magazine (User Experience Professionals Association) to help get our research findings in the hands of practitioners! We are excited to announce that the article is now live on the UXPA site.

The abstract (first paragraph) is here:

Practicing designers can tell you that designing mobile touchscreen apps for children is different than for adult users. But what does science tell us about what interface differences are critical to remember? We are engaged in the science of child-computer interaction. Our empirical research has focused on capturing how the cognitive and physical traits of young children under age 10 affect the success of their interactions with touchscreen interfaces. We, and others, have produced research-driven design recommendations to consider. We share here our top seven guidelines for designing for children under age 10 and the evidence that led to them.

The article link can be found here. We are also releasing a two-page supplemental bibliography with this article as a separate download. The UXPA article format is more magazine-style and does not allow references, but we wanted to make sure that people could find the exciting research that we synthesized for this article if they were interested. Check it out here. If you’re a practitioner and you’ve found this article useful in your work to design technology for kids, we’d love to hear from you! Contact me.


We are excited to announce that there is a new member of the $-family of gesture recognizers! A paper on a new super-quick recognizer optimized for today’s low-resource devices (e.g., wearable, embedded, and mobile devices) that I (Lisa) co-wrote with my long-time collaborators, Radu-Daniel Vatavu and Jacob O. Wobbrock, will appear at the upcoming International Conference on Human-Computer Interaction with Mobile Devices and Services (MobileHCI’2018). The paper extends the current best-performing, most robust member of the $-family, $P, using some clever code optimizations to short-cut much of the computation $P undertakes, and makes this recognizer, which we call $Q, blazingly fast and able to work in real-time on low-power devices. Here is the abstract:

We introduce $Q, a super-quick, articulation-invariant point-cloud stroke-gesture recognizer for mobile, wearable, and embedded devices with low computing resources. $Q ran up to 142× faster than its predecessor $P in our benchmark evaluations on several mobile CPUs, and executed in less than 3% of $P’s computations without any accuracy loss. In our most extreme evaluation demanding over 99% user-independent recognition accuracy, $P required 9.4s to run a single classification, while $Q completed in just 191ms (a 49× speed-up) on a Cortex-A7, one of the most widespread CPUs on the mobile market. $Q was even faster on a low-end 600-MHz processor, on which it executed in only 0.7% of $P’s computations (a 142× speed-up), reducing classification time from two minutes to less than one second. $Q is the next major step for the “$-family” of gesture recognizers: articulation-invariant, extremely fast, accurate, and implementable on top of $P with just 30 extra lines of code.
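One of the shortcut ideas mentioned above is easy to illustrate: when a candidate gesture is compared against many templates, a partial cloud-distance sum that already exceeds the best match found so far can never win, so that comparison can be abandoned early. Below is a minimal, illustrative Python sketch of this early-abandoning idea on a simplified greedy point-cloud distance in the spirit of $P. The function and variable names are our own, the matching here is a simplified one-directional greedy pass, and the published $Q implementation includes further optimizations (such as precomputed lookup tables) not shown; see the paper for the real algorithm.

```python
import math

def cloud_distance(points, template, best_so_far=math.inf):
    """Simplified greedy point-cloud distance (in the spirit of $P).

    Assumes both clouds are already resampled to the same number of
    points and normalized. Abandons early (returns infinity) as soon
    as the running sum exceeds the best match found so far -- this is
    the illustrative shortcut, not the full published optimization.
    """
    n = len(points)
    matched = [False] * n
    total = 0.0
    for step, (px, py) in enumerate(points):
        # Greedily match this point to the closest unmatched template point.
        best, index = math.inf, -1
        for j, (tx, ty) in enumerate(template):
            if not matched[j]:
                d = math.hypot(px - tx, py - ty)
                if d < best:
                    best, index = d, j
        matched[index] = True
        total += (1 - step / n) * best  # later matches weighted less
        if total >= best_so_far:        # early abandoning
            return math.inf
    return total

def recognize(points, templates):
    """Return the name of the closest template under the cloud distance."""
    best_name, best_score = None, math.inf
    for name, tmpl in templates:
        d = cloud_distance(points, tmpl, best_score)
        if d < best_score:
            best_name, best_score = name, d
    return best_name
```

Because `best_score` shrinks as better templates are found, later comparisons bail out sooner, which is where the speed-up comes from when the template set is large.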

Radu will be presenting this work in the fall in Barcelona. Check out the camera-ready version of our paper here.


I am pleased to say that I was recently honored with the UF Herbert Wertheim College of Engineering Faculty Advising/Mentor of the Year Award for 2017-2018. This award focuses on undergraduate research and mentoring, an activity which I prioritize heavily in my research lab and other activities as a professor at UF. As a former undergraduate research student myself, I know the power of getting involved in research early. Before that opportunity came along, I really didn’t know what research was, or what career paths were available in this direction. After getting a taste of cutting-edge computer science research, I knew I wanted to remain part of the forward-thinking group of scientists who are helping push technology ahead. In my research lab, I have worked with many undergraduates, most of whom stay for multiple semesters and eventually lead their own research projects. For me, the best part of this award was getting to read the letters that students wrote to describe how they felt being involved in my lab and how the mentorship I provided helped them in their careers. Working with students, showing them the opportunities in research, and training the next generation of scientists is what this is all about, to me. Here’s a photo of the president of the University of Florida, Dr. Fuchs, presenting the award to me at the College awards ceremony, taken by a University photographer. Thank you for the honor!
