
Check out our remote video presentation for our CHI’2020 paper on the TIDESS website!

Read More

Check out our recent blog post on the TIDESS website!

Read More

Our paper presenting what we can learn from children and PE teachers in the formative design of exergames, titled “Toward Exploratory Design with Stakeholders for Understanding Exergame Design,” was accepted as a Late-Breaking Work at CHI 2020: the SIGCHI Conference on Human Factors in Computing Systems. In a previous post from a few years ago, we mentioned that we were analyzing themes that emerged from focus group sessions we had run with children and PE teachers to understand what we could learn from them to aid in the design of exergames. In addition to the focus group sessions, we conducted an in-depth qualitative interview with one PE teacher. The goal of the interview was to elicit design ideas regarding how exergames can be used to motivate children and induce exertion at targeted intensity levels. We compared the themes from the children’s focus group sessions and the PE teacher’s interview to identify overlaps and non-overlaps in their perspectives. In this CHI ’20 Late-Breaking Work, we detail the focus group sessions with children and the interview session with the PE teacher, the overlaps and non-overlaps between the children’s and the PE teacher’s perspectives, and the implications of our findings for improving the efficacy of exergames for children. Here is the abstract:

Prior work has explored improving the efficacy of exergames through participatory design with children. Children are not necessarily able to make informed decisions about their fitness, so their perspectives form only half the picture. Adults who are invested in the problem of children’s fitness (e.g., PE teachers) are a valuable missing perspective. As a first step to understanding what we can learn from these stakeholders to aid the design of exergames, we conducted one in-depth interview with a PE teacher and several focus groups with children. Our findings showed that, although both children and the PE teacher like similar game elements, children viewed the elements through the lens of fun while the PE teacher viewed the elements through the lens of effectiveness. Our preliminary findings establish the importance of including such stakeholders in the formative design of exergames.

Interested readers can find the camera-ready version (preprint) here. Initially, I planned to present this paper as a poster at the upcoming CHI 2020 conference taking place in Honolulu, Hawaii from April 25 – April 30. However, due to the coronavirus pandemic, the conference was canceled. As an alternative, I will include a link to the poster around the time of the conference. Please feel free to contact me (aoaloba@ufl.edu) if you have any questions, feedback, or comments on the poster and the paper.

I am a 5th-year PhD student in the INIT lab majoring in Human-Centered Computing (HCC). The FunFitTech project was instrumental in my development as a PhD student. Through this project, I gained experience in how to design focus group and interview questions that elicit effective responses from participants. I also learned the process of analyzing open-ended responses from participants.

Read More

As I was not able to present my paper (Dual-Modality Instruction and Learning: A Case Study in CS1) at SIGCSE 2020 due to travel restrictions and social distancing directives, I prepared a video presentation which can be watched at the link:

https://youtu.be/xns1I0Wymm4

I’ll continue to follow up on this work and hope to see everyone at SIGCSE 2021!

Read More

As a member of the INIT lab, I’m writing today’s blog post about the qualitative coding of research data. Depending on the type of data, research data can be analyzed either quantitatively or qualitatively. Researchers use quantitative approaches (e.g., means, standard deviations, statistical tests) to quantify certain patterns in numeric data. On the other hand, researchers apply qualitative approaches to text-based data (e.g., open-ended survey questions or interview responses) when they want to extract emergent themes that shed light on users’ motivations and mental models. A common approach for analyzing qualitative data is called “coding.” In this blog post, I will discuss the steps for applying qualitative coding to open-ended survey questions.

Qualitative coding is used to determine the categories, relationships, and assumptions that inform the respondents’ view of the world in general, and of the topic in particular [1]. A code is the basic unit of this analysis: a word or short phrase that describes a participant’s response. The ultimate aim of qualitative coding is to identify themes and patterns that emerge from the gathered data, which can then be used to enable further interpretation [2]. One of the most important aspects of using this method is developing and designing a codebook. A good codebook includes all the themes the researchers are coding for, their definitions, and an example sentence from the data that can be characterized by each theme. For example, a code/theme can be a simple word or phrase such as “ease of use,” with the definition “response expresses the idea that the task is easy to do,” and an example this code would apply to would be: “This method was really easy for me to use; it took me no effort.” Designing a codebook involves several steps. The first step is to identify the dimensions, which are the encompassing themes that capture the ideas expressed in the data. For example, each open-ended question can be used to define a codebook dimension. The next step is to organize the responses pertaining to each dimension. Organizing these responses is an important step in qualitative coding because it makes the data set easier to access and understand.
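To make the structure of a codebook concrete, here is a minimal sketch in C# (the language our lab’s tools are written in) of what one codebook entry might look like, using the “ease of use” example above. The type and field names are illustrative only, not from an actual lab tool:

```csharp
// One codebook entry: the code itself, its definition, and an example
// response it applies to. Names here are illustrative only.
public record CodebookEntry(string Code, string Definition, string Example);

var easeOfUse = new CodebookEntry(
    Code: "ease of use",
    Definition: "response expresses the idea that the task is easy to do",
    Example: "This method was really easy for me to use; it took me no effort.");
```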

Once the data has been organized, the next step is to develop the codes for each dimension. As I mentioned above, a code is a word or short phrase that captures the main concept or theme of a certain data point. There are many different procedures one can take in the code creation process. For instance, in our lab, we often have each researcher (coder) independently look through 10% of the data and then come together to agree on the final set of codes. Our codebooks always include corresponding definitions and examples to help us understand when each code should be applied. After the codes for all dimensions have been added to the codebook, our lab usually has each coder independently code a different 10% of participants’ responses across all dimensions, and the inter-rater reliability (IRR) is calculated for this subset of the data. IRR measures how consistently the different researchers coded the same material. Once an acceptable IRR is achieved, the last step of the coding process is for the coders to divide up the remaining data and code it independently.
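The post above does not prescribe a particular IRR statistic; Cohen’s kappa is one common choice for two coders. Below is a minimal C# sketch of computing kappa, assuming each coder assigned exactly one code per response (the function and variable names are my own, not from any lab tool):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

static class Irr
{
    // Cohen's kappa for two coders who each assigned one code per response.
    public static double CohensKappa(IList<string> coder1, IList<string> coder2)
    {
        if (coder1.Count != coder2.Count || coder1.Count == 0)
            throw new ArgumentException("Both coders must label the same responses.");

        int n = coder1.Count;

        // Observed agreement: fraction of responses where the two codes match.
        double po = coder1.Zip(coder2, (a, b) => a == b ? 1.0 : 0.0).Sum() / n;

        // Expected chance agreement, from each coder's code frequencies.
        double pe = coder1.Concat(coder2).Distinct().Sum(code =>
            (coder1.Count(c => c == code) / (double)n) *
            (coder2.Count(c => c == code) / (double)n));

        return (po - pe) / (1 - pe); // undefined if pe == 1 (all labels identical)
    }
}
```

For example, if two coders agree on 9 of 10 responses, kappa discounts that raw 90% agreement by how often the coders would have matched by chance alone.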

Once the codebook is refined and clearly understood by the coders, they should be able to apply the codes to the data easily, even if the responses are more complex. For example, imagine we have a codebook with two codes “ease of use” and “uncomfortable”, in which ease of use is defined as “the task is easy to do” and uncomfortable is defined as “the task makes the participant feel uncomfortable to perform.” Therefore, if a participant’s response was “This task took no effort, but it made me feel awkward and uncomfortable,” the first half would be categorized as “ease of use,” and the latter half would be coded “uncomfortable.” Therefore, applying codes to data can be quick and easy if the coders’ codebook is comprehensive and clear. After the coding process, the researchers will discuss common or frequent themes in the data, and try to understand the big picture of what the qualitative analysis is saying about the original research questions.

Participating in a recent lab research project has exposed me to the field of human-computer interaction. This new field has become a topic of interest for me because I find the research intriguing. Moreover, being a third-year Neuroscience major, I have found many relationships between cognitive abilities and individuals’ responses. For example, in my Cognitive Psychology class, Professor Brian Cahill made it clear in his lecture that, according to cognitive psychologists, many individuals do not fully understand why they make decisions; they usually base their decisions on instinctual tendencies or past experiences. In our study, we are asking participants to explain the reasoning behind some of their interaction choices. I think this facet of cognitive science could be why many of the participants in the study responded with statements like: “It felt natural.” This phenomenon has contributed greatly to my interest in the field. This research lab has allowed me to learn new skills such as qualitative coding, advanced statistics, and computer programs such as Excel. Overall, this experience so far has been rewarding and exciting!


References

[1] G. McCracken, The Long Interview. Newbury Park, CA: Sage, 1988. [Online]. Available: https://books.google.com/books?hl=en&lr=&id=3N01cl2gtoMC&oi=fnd&pg=PA5&dq=McCracken,+G.+(1988).+The+Long+Interview+(Sage+University+Paper+Series+on+Qualitative+Research+Methods,+No.+13).+Newbury+Park,+Calif.:+Sage.&ots=RBzNdslYZu&sig=-QIil8Az3wKfMJ5c-oRqZNckWGQ#v=onepage&q&f=false

[2] T. N. Basit, “Manual or Electronic? The Role of Coding in Qualitative Data Analysis,” Educational Research, vol. 45, no. 2, pp. 143–154, Aug. 2003, doi: 10.1080/0013188032000133548.


Read More

In a previous post, I mentioned that we were working on selecting a representative set of motions from our Kinder-Gator dataset [1] to help us test recognition of whole-body motions. We have since designed a toolkit to visualize this representative set of motions while we explored different representations and how they affected recognition accuracy. We designed the toolkit in C# using Windows Presentation Foundation (WPF), a UI framework that uses the Extensible Application Markup Language (XAML) to create desktop client applications [2]. In this blog post, I describe the main components of the toolkit and the challenges I faced in designing it:

Designing the Interface: The Kinder-Gator dataset [1] includes examples of children and adults performing actions while facing the Kinect. Within this dataset, 11 of the actions comprise the representative set of actions, selected using the methods discussed in the previous post. Since the toolkit interface is the window users interact with, it is important that it includes controls that enable users to select their preferences. These controls depend on the information the Kinder-Gator dataset format specifies: the population (child vs. adult) and the actions (i.e., the representative set of actions). Consequently, I included the following controls in the interface (Figure 1 shows the full interface of our toolkit):

  1.  A button that opens a file browser for users to upload the Kinder-Gator dataset (or other datasets with similar formats), and a text box that indicates whether the dataset was loaded successfully.
  2.  Radio buttons for each level of the population in the Kinder-Gator dataset (i.e., child and adult) so that users can visualize actions from a specific population. Kinder-Gator includes the motions of both children and adults, and each file is named in the format “POSE-PID-ACTIONNAME-TIMESTAMP.CSV”. Each participant in the dataset has a unique participant ID (PID). A file belongs to the child or adult category if its PID belongs to a child or adult participant, respectively (the mapping between population and PIDs is included in the Kinder-Gator dataset README file). Once a radio button (i.e., population) is selected, only files with PIDs corresponding to the selected population are considered for visualization (see the sketch after Figure 1).
  3. Checkboxes for actions in the Kinder-Gator dataset, through which users can select one or more actions to visualize. Checkboxes are added to the interface dynamically, based on the elements in a gesture array. The gesture array is composed of a hard-coded set of gesture names, representing the space of gestures that can be visualized. By default, the list of actions shown on the interface corresponds to the representative set of actions we chose from the Kinder-Gator dataset. That is, the gesture array is pre-populated with the names of all the actions in the representative set. The gesture array can be modified to accommodate a different set of actions.
  4. Other controls in the interface include tab controls for the two supported visualization methods (AverageFrame and Per Joint), which we will discuss next, and buttons to trigger the visualization process (“Submit”) and clear user options (“Refresh”).


Figure 1: Toolkit Interface.
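As a hedged aside, here is a minimal C# sketch of how files might be filtered by population, based on the “POSE-PID-ACTIONNAME-TIMESTAMP.CSV” naming scheme described above. The childPids set is a stand-in for the PID mapping in the dataset’s README, and all names are illustrative rather than the toolkit’s actual code:

```csharp
using System.Collections.Generic;
using System.IO;
using System.Linq;

static class DatasetFiles
{
    // Hypothetical child PIDs; the real mapping comes from the README.
    static readonly HashSet<string> ChildPids = new HashSet<string> { "P01", "P02" };

    // Return the .csv files for the selected population (child vs. adult).
    public static IEnumerable<string> ForPopulation(string folder, bool wantChildren) =>
        Directory.EnumerateFiles(folder, "*.csv").Where(path =>
        {
            // File name fields: POSE-PID-ACTIONNAME-TIMESTAMP
            var pid = Path.GetFileNameWithoutExtension(path).Split('-')[1];
            return ChildPids.Contains(pid) == wantChildren;
        });
}
```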

Visualizing Kinect Motions: The Microsoft Kinect tracks the movement of 20 joints in 3-dimensional space over time, so an action in the dataset includes the paths travelled by 20 joints in 3D space. Therefore, actions in the Kinder-Gator dataset [1] can be visualized by showing the motion paths of each joint over the duration of the motion (Per Joint method). To visualize the motion paths, I included checkboxes to represent each of the joints, which enables users to select one or more joints to be visualized for an action-category pair. Figure 2a shows the interface when the right wrist joint (“WristRight”) is selected for the action “Raise your hand” and the Age group “Adult”. Figure 2b shows the corresponding visualization (i.e., the motion paths of the “WristRight” joint across all “Adult” participants for the action “Raise your hand”).

Figure 2a: Toolkit Interface after the Per Joint method has been selected.

Figure 2b: Per Joint Visualization for the “Raise your Hand Motion” for all ten adults in the Kinder-Gator dataset when the right-hand wrist joint is selected. Each visualization is represented in 2D along the X-Y dimension.
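To give a flavor of how a Per Joint visualization like Figure 2b can be rendered in WPF, here is a minimal sketch that draws one joint’s 2D path as a Polyline on a Canvas. The position data and the scale factor are placeholders, not the toolkit’s actual rendering code:

```csharp
using System.Collections.Generic;
using System.Windows;
using System.Windows.Controls;
using System.Windows.Media;
using System.Windows.Shapes;

static class PathDrawing
{
    // Draw one joint's (x, y) positions over time as a connected line.
    public static void DrawJointPath(Canvas canvas, IEnumerable<(double X, double Y)> positions)
    {
        var line = new Polyline { Stroke = Brushes.SteelBlue, StrokeThickness = 2 };
        foreach (var (x, y) in positions)
        {
            // Flip Y because screen coordinates grow downward; 100 is an
            // arbitrary scale from Kinect coordinates to pixels.
            line.Points.Add(new Point(x * 100, canvas.Height - y * 100));
        }
        canvas.Children.Add(line);
    }
}
```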

In addition, an action comprises a set of poses. A pose shows the spatial arrangement of the joints (i.e., the human skeleton) at a given point in time. Therefore, I included an alternate type of visualization (AverageFrame) that depicts the action in terms of the average pose over the entire duration of the action (Figure 3). The visualizations are shown in a new window after the “Submit” button has been clicked.

Figure 3: AverageFrame Visualization for the “Raise your Hand Motion” for all ten adults in the Kinder-Gator dataset. Each visualization is represented in 2D along the X-Y dimension.
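The averaging itself is straightforward: for each of the Kinect’s 20 tracked joints, average that joint’s position across every frame of the action. A minimal C# sketch follows, with illustrative types rather than the toolkit’s real data structures:

```csharp
using System.Collections.Generic;
using System.Linq;

public record Point2D(double X, double Y);

static class AverageFrameMethod
{
    // frames[f][j] = the 2D position of joint j at frame f.
    public static Point2D[] AveragePose(IReadOnlyList<Point2D[]> frames, int jointCount = 20)
    {
        var avg = new Point2D[jointCount];
        for (int j = 0; j < jointCount; j++)
        {
            avg[j] = new Point2D(
                frames.Average(frame => frame[j].X),   // mean x over all frames
                frames.Average(frame => frame[j].Y));  // mean y over all frames
        }
        return avg;
    }
}
```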

Challenges
As a novice in the design of WPF applications, the first challenge I faced was setting up checkboxes to match each motion type in the dataset when designing the interface. Through research, I found that I could declare a container control to hold a list (an ItemsControl) when designing the interface in XAML. Then, in the C# code connected to the XAML design, I created a list of checkbox controls for the motion types and bound the checkboxes to the ItemsControl (a minimal sketch of this approach follows below). Another challenge I faced was making sure that the visualizations shown accurately represent the preferences the user selected in the interface window. This challenge resulted from the different options users could select from (action, population, visualization, and joints), which meant that user preferences could span many permutations. To solve this challenge, I had to make sure I used efficient data structures to keep track of users’ preferences throughout their interaction with the interface.
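Here is a minimal sketch of that binding approach, assuming an ItemsControl named GestureList declared in XAML; the gesture names and control names are illustrative, not the toolkit’s actual identifiers:

```csharp
// XAML (in the window's markup): <ItemsControl x:Name="GestureList" />
// The C# below lives in the window's code-behind.
using System.Linq;
using System.Windows.Controls;

// Populate the ItemsControl with one checkbox per gesture name.
string[] gestureArray = { "Raise your hand", "Wave", "Kick" }; // example names
foreach (var gesture in gestureArray)
{
    GestureList.Items.Add(new CheckBox { Content = gesture });
}

// Later, read back which gestures the user checked.
var selectedGestures = GestureList.Items.OfType<CheckBox>()
    .Where(cb => cb.IsChecked == true)
    .Select(cb => (string)cb.Content)
    .ToList();
```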

I am a fifth-year Ph.D. student working in the INIT lab. Working on this toolkit has allowed me to study the visualizations of different actions to gain a deeper understanding of the joint motion paths. Specifically, I have been able to understand how the motion paths of joints in an action differ based on joint movements, the action being performed, and the participant performing the action. For example, since the visualizations show the paths of a joint across all participants within a category (Figure 2b), I can study these paths to identify what variations exist in how different users move that joint when performing similar actions. My next steps will focus on using this toolkit to visualize the joint paths of different motions and studying these paths to identify features that can characterize joint movements. Stay tuned for a future post in which we will release the toolkit for other researchers to use!

REFERENCES

  1. Aishat Aloba, Gianne Flores, Julia Woodward, Alex Shaw, Amanda Castonguay, Isabella Cuba, Yuzhu Dong, Eakta Jain, and Lisa Anthony. 2018. Kinder-Gator: The UF Kinect database of child and adult motion. In EG 2018 – Short Papers, 13–16. https://doi.org/10.2312/egs.20181033
  2. Getting started with WPF. https://docs.microsoft.com/en-us/visualstudio/designers/getting-started-with-wpf?view=vs-2019
Read More

Both formal and informal educational venues, such as classrooms and public science centers, are increasingly using touchscreen interfaces of differing sizes and form factors, such as tablets, multi-touch flatscreen tabletops, and interactive spherical displays, for learning purposes [1,2]. With this shift toward more direct-interaction-based learning come new research opportunities for designing touch-based gestural interactions that are natural and intuitive to use for learners of all ages (i.e., both adults and children). As a graduate research assistant in the INIT lab, I have led two research projects that lie at the intersection of human-computer interaction (HCI) and learning sciences (LS) research: the TIDRC project and the TIDESS project. The overarching goal of the TIDRC project is to explore the extent to which research-based interaction design guidelines for children’s touch-based interfaces are employed in practice by app developers who build the educational apps children use regularly. In the TIDESS project, our main aim is to understand how adults and children interact with large touchscreen interfaces such as multi-touch tabletops and spherical displays in order to build effective educational technology that adapts to users’ natural interactions.

In summer 2019, we wrote a workshop paper, titled “HCI Methodologies for Designing Natural User Interactions that Do Not Interfere with Learning,” that reports what we have learned on both projects regarding the types of HCI methodologies that can be adopted to design effective educational technology. The paper was written for the Making the Learning Sciences Count: Impacting Association for Computing Machinery Communities in Human-Computer Interaction workshop, held in Lyon, France, at the International Conference on Computer-Supported Collaborative Learning (CSCL ’19). I traveled to the workshop to present our paper and participate in the discussion. The abstract of the paper is as follows:

“As emerging touchscreen technologies continue to become more prevalent in learning environments such as science museums and schools, there is a need to understand both principles of interaction design and of learning sciences to create effective educational technology. In this paper, we describe how the HCI approaches we employ in our work can be used to design more effective learning experiences, specifically for interactive touchscreen platforms. As members of an interdisciplinary community, we are exploring the interplay between interaction design research and learning. For example, how can we make sure that users’ touch interactions with educational interfaces on these platforms are intuitive, discoverable, and do not interfere with learning outcomes? The “Making the Learning Sciences Count” workshop at CSCL is an ideal setting to share and discuss our evolving understanding at the intersection of interaction design research and learning sciences.”

Interested readers can read the workshop paper here. Participating in the workshop discussion sessions along with other researchers and students working at the juncture of HCI and LS was very helpful and informative for me. I learned about similarities and differences in the publication process followed in the HCI and LS academic communities and how to publish research in both communities. In addition, attending the workshop gave me a chance to get feedback on my thesis research idea from interdisciplinary research community members. I look forward to contributing to this interdisciplinary community and attending a similar workshop in the future.

References

[1] Tom Geller. 2006. Interactive Tabletop Exhibits in Museums and Galleries. IEEE Computer Graphics and Applications 26, 5: 6–11.

[2] Kate Haley Goldman, Cheryl Kessler, and Elizabeth Danter. 2010. Science On a Sphere®. Retrieved December 31, 2018 from https://sos.noaa.gov/What_is_SOS/

Read More

Check out our recent blog post on the TIDESS website!

Read More

As part of the ACM SIGCHI 2018 conference, INIT Lab director Lisa Anthony helped co-organize a ‘special interest group’ (SIG) session on child-computer interaction. This SIG is organized by members of the child-computer interaction research community every year. This year, the topic was “Ubiquity and Big Data”: how do we design technology for children in an era of “big data,” in which their online activities from an extremely early age may be monitored, archived, evaluated, and judged? The issue is complex, since parents, schools, and other stakeholders may find beneficial reasons for monitoring and tracking their children’s activities, especially in cases of bullying, self-harm, or risky behaviors; but what are the lasting impacts of such technologies when the children grow up and already have a digital footprint not of their own making? How do we empower children to own their own online identities but still provide a safe space for growth and learning?

As a result of this SIG, many of the attendees of the event decided to write up a summary of the topics of discussion and submit it to the ACM interactions magazine. It has just recently appeared in the November-December 2018 issue, in the magazine’s forum on “Universal Interactions”. Check out the full article here (available in PDF or HTML format). The article presents the topics of discussion and some insights the SIG attendees came up with, especially the fact that education and transparency are critical values to keep in mind when pushing forward into this space. It is our hope that the article will launch further discussion and awareness of these topics among researchers, educators, designers, and parents.

Read More

On May 29, I completed and passed my PhD dissertation proposal defense. The proposal defense process can vary widely among institutions and even among departments in the same institution, so in this post I outline the process I followed in the CISE department at UF.

The first step I followed was to create a document outlining my proposed work to help my committee understand my plans. There was no prescribed length or format for the document, but mine was around 60 pages. The document contained information about all the work I’ve done up to this point as a PhD student, as well as an outline of all the work I plan to do before graduating. Preparing the document requires a significant amount of work, so I would recommend planning on spending several months working on it before submitting. The document is a crucial part of the proposal process since your committee will use it as a guide to understand the details of your work that you don’t have time to cover in your presentation.

After completing the document, I sent it to my committee. The committee then had several weeks to review the document while I prepared for the next step: a 45-minute presentation about my prior work and my plans for my dissertation work, with 15 additional minutes for public questions.

The proposal defense itself was divided into four stages. In the first stage, I gave my 45-minute presentation to my committee as well as members of the public who were interested in attending. In the second stage, which lasted around 15 minutes, both the public audience and my committee members asked questions. In the third stage, the public audience was asked to leave and my committee asked questions in private. This stage lasted around 30 minutes. I found that the questions my committee asked in this stage were more difficult and thorough, since my committee wanted to be sure they understood my proposed work. For example, my committee asked not only what I planned to do but how I planned to implement specific parts of my dissertation work. In the final stage of the proposal defense, I was asked to leave the room while the committee deliberated on whether I had passed. The time taken by the committee to deliberate can vary, but for me it was in the range of 20 to 30 minutes. After my committee finished their discussion, I was brought back into the room and was very excited (and relieved) to learn that I had passed my proposal! My committee offered suggestions and feedback on ways to improve my proposed work. For example, some of my committee members suggested specific algorithms that I had not considered that may be useful for my work.

I am entering my fifth year in the PhD program at UF. Now that I’ve defended my proposal, my next major milestone will be my final dissertation defense, which I plan to complete in December 2019. The proposal process was long and difficult, but it provided me a valuable opportunity to crystallize my plans for my dissertation work. Preparing for my proposal forced me to take a more active role in generating ideas for future directions of my research, and now that I’ve passed my proposal I am expected to take more ownership of my work with less involvement from my advisor.

Based on my experience, here are some tips for preparing for your proposal:

* Read proposal documents from students who have already passed their proposal in your department and/or research area to help get an idea of the scope and formatting to use. I used proposals from previous students working in a similar area to mine as a model for my document.

* Give as many practice talks as you can with different people. Consider recruiting people outside of your own lab to make sure the talk is understandable to a more general audience. Even your committee will have diverse backgrounds and may not be familiar with some concepts related to your research. Practice talks are also a great time to get a feel for the types of questions you’re likely to get. When I prepared for my presentation, I gave practice talks to friends in other engineering departments to help evaluate how well I was able to explain my work.

* Prepare backup slides to help you answer questions you think you are likely to get.

* Ask your friends and labmates to attend your talk. It helps to see familiar faces and to know you have a lot of support while you’re giving your presentation.

* Bring food and/or coffee for the audience, especially your committee.

* Try not to get too stressed out during your presentation. Ultimately, everyone wants to see you succeed.

If you’re about to propose your dissertation, good luck!

Read More