Category: TIDESS

As part of our ongoing studies with the PufferSphere spherical interface, the TIDESS team has decided to create a prototype that implements many of the same features present in our tabletop prototype, which we’ve discussed previously. This will allow us to directly compare the two platforms and see how the spherical nature of the interface affects how users interact with and learn from the device.

One of the primary additions we had to build for this prototype is a new gesture library. A gesture library is a set of gestures that our sphere will recognize and that will trigger relevant actions. Our tabletop prototype was developed using the built-in GestureWorks gesture library to allow users to manipulate objects in the prototype. However, the PufferSphere PufferPrime API does not currently have an existing gesture library for us to use. This means that, in its default state, the sphere does not allow objects to be dragged and does not support other basic gestures (zooming, swiping, long-tap). These gestures were all supported by the tabletop version of the prototype, so we chose these four as the core of our gesture library.

To support the same set of gestures available on the tabletop, we defined each gesture in the following way:
• Drag – user moves an object while maintaining contact the entire time
• Swipe – user moves an object, and after they release contact the object continues to move (see the inertia sketch after this list)
• Zoom – user enlarges an area by pinching outwards (from two contact points)
• Long-Tap – user holds their finger(s) in one place for an extended period
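
To make the swipe definition concrete, below is a minimal sketch of that "keeps moving after release" inertia, assuming a per-frame update loop; the friction constant, thresholds, and object structure are illustrative assumptions, not the actual prototype code.

```python
import math

FRICTION = 0.95   # fraction of velocity kept each frame (illustrative value)
MIN_SPEED = 0.01  # speed below which the object is considered stopped

class SwipedObject:
    def __init__(self, x=0.0, y=0.0):
        self.pos = [x, y]       # e.g., longitude/latitude on the sphere
        self.vel = [0.0, 0.0]

    def on_release(self, vx, vy):
        # Touch-up: seed the inertia with the velocity measured from
        # the last few frames of the drag.
        self.vel = [vx, vy]

    def update(self):
        # Called once per frame: keep moving, then decay the velocity.
        self.pos[0] += self.vel[0]
        self.pos[1] += self.vel[1]
        self.vel = [v * FRICTION for v in self.vel]
        if math.hypot(*self.vel) < MIN_SPEED:
            self.vel = [0.0, 0.0]
```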

To create this gesture library, we looked at existing gesture libraries for flatscreen interfaces (including the one we used for our tabletop prototype) and determined how we could implement our own versions of these gestures on the spherical display. For example, to implement the long-tap gesture, we added a timer to determine how long a user had kept their finger in one area and a radius to define this area. If the timer reached a certain threshold and the user had not moved their finger outside the radius, then we considered that a long-tap.
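
As a concrete illustration of that timer-and-radius check, here is a minimal sketch of a long-tap detector; the class name, callback structure, and threshold values are hypothetical, not the actual PufferPrime API or our prototype's tuned values.

```python
import math
import time

# Hypothetical thresholds; in practice these would be tuned for the sphere.
LONG_TAP_DURATION = 0.8   # seconds the finger must stay down
LONG_TAP_RADIUS = 20.0    # max movement (touch-surface units) still "one place"

class LongTapDetector:
    """Tracks a single touch and reports a long-tap once the touch has
    stayed within LONG_TAP_RADIUS of its start for LONG_TAP_DURATION."""

    def __init__(self):
        self.start_pos = None
        self.start_time = None
        self.cancelled = False

    def on_touch_down(self, x, y):
        self.start_pos = (x, y)
        self.start_time = time.monotonic()
        self.cancelled = False

    def on_touch_move(self, x, y):
        # Moving outside the radius cancels the long-tap.
        if self.start_pos is None or self.cancelled:
            return
        if math.hypot(x - self.start_pos[0], y - self.start_pos[1]) > LONG_TAP_RADIUS:
            self.cancelled = True

    def is_long_tap(self):
        # Checked each frame (or on touch-up) to see if the threshold was met.
        if self.start_pos is None or self.cancelled:
            return False
        return time.monotonic() - self.start_time >= LONG_TAP_DURATION
```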

We tested this prototype at the Florida Museum of Natural History as part of an unstructured study, in which we observed people naturally interacting with the sphere. The sphere was deployed for a week, during which we captured participants’ interactions through audio, video, and touch logs.

As a third-year undergraduate at the University of Florida, I have been able to work with many new and innovative technologies during my involvement on the TIDESS project, such as the PufferSphere spherical display. Developing applications for the sphere has allowed me to further improve my programming skills, especially my ability to work in a large, existing codebase. When we analyze the data, I look forward to seeing how users interacted with the prototype that we developed!

The TIDESS team has been investigating the natural and intuitive ways users interact with spherical displays by continuing to prototype on the PufferSphere. In my understanding as a new member of the team, the TIDESS project primarily aims to investigate ways in which children and adults interact with large touchscreen interfaces, such as spherical displays, to help us design interactive touchscreen exhibits for public science learning. More specifically, the project focuses on studying new types of touch interactions, or gestures, that allow users to manipulate and explore content displayed on the sphere.

To allow us to study how users interact with the PufferSphere and investigate natural user gestures, our team is in the process of developing a PufferSphere prototype to support future studies. I have been primarily responsible for developing this prototype as part of the project this summer. In developing the prototype, I have faced issues with distortion of content shown on the sphere, or, more generally, with displaying 2D images and videos on a three-dimensional surface. A simple solution I have found is to limit videos and images to small windows on the sphere’s surface, which reduces the effect of the distortion. I also learned that, when prototyping, especially on a new platform such as the PufferSphere, it is good practice to start your program with basic components and slowly add new ones. I found this approach helpful because objects frequently appear differently on the sphere than intended, and incremental development allows for quick correction of errors while preserving the components that already work.
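
As a rough sketch of the small-window idea, the helper below clamps a content window's angular size before it is placed on the sphere; the coordinate convention, helper name, and 40-degree cap are assumptions for illustration, not values taken from our prototype.

```python
# Cap the angular extent of images/videos so 2D-to-sphere distortion stays
# manageable. The 40-degree cap is an illustrative value, not a measured one.
MAX_WINDOW_DEGREES = 40.0

def content_window(center_lat, center_lon, width_deg, height_deg):
    """Return (min_lat, min_lon, max_lat, max_lon) for a content window,
    clamped so the window stays small enough to limit distortion."""
    w = min(width_deg, MAX_WINDOW_DEGREES)
    h = min(height_deg, MAX_WINDOW_DEGREES)
    return (center_lat - h / 2, center_lon - w / 2,
            center_lat + h / 2, center_lon + w / 2)

# e.g., place a video near the sphere's "equator"; width is clamped to 40 degrees
print(content_window(0.0, 90.0, 60.0, 30.0))
```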

Currently, I have been adding elements to the prototype that make it more visually appealing and that motivate participants to complete the study (e.g., “gamification” elements as popularized by Brewer et al., IDC 2013 [1]). I have also been working on logging touch data for the prototype. The PufferSphere already reports touch information such as the start and end of a touch, as well as the longitude and latitude of the touch on the sphere. We want to record these pieces of data, along with the time they occurred, in a CSV file so that we can later compute measures such as the speed of a swipe/drag gesture or how long a touch lasted. Logging touch data allows us to calculate this information after the study for analysis across users. Other improvements to the prototype will be made after running pilot studies.
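
To give a sense of what this logging could look like, here is a minimal sketch that writes touch events to a CSV file and derives a rough swipe speed from two logged events; the field names, example values, and speed formula are illustrative assumptions, not our actual logging code.

```python
import csv
import math

# Hypothetical log layout: one CSV row per touch event, combining the data
# the PufferSphere reports (touch id, event type, latitude/longitude) with
# a timestamp we record ourselves.
FIELDS = ["timestamp", "touch_id", "event", "latitude", "longitude"]

def log_touch_event(writer, timestamp, touch_id, event, latitude, longitude):
    """Append one touch event as a CSV row."""
    writer.writerow({"timestamp": timestamp, "touch_id": touch_id,
                     "event": event, "latitude": latitude,
                     "longitude": longitude})

def swipe_speed(start, end):
    """Rough angular speed (degrees/second) between two logged events,
    treating latitude/longitude differences as planar distances."""
    dt = end["timestamp"] - start["timestamp"]
    dist = math.hypot(end["latitude"] - start["latitude"],
                      end["longitude"] - start["longitude"])
    return dist / dt if dt > 0 else 0.0

down = {"timestamp": 12.50, "touch_id": 3, "event": "touch_down",
        "latitude": 29.6, "longitude": -82.3}
up = {"timestamp": 12.90, "touch_id": 3, "event": "touch_up",
      "latitude": 31.1, "longitude": -80.9}

with open("touch_log.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    writer.writeheader()
    log_touch_event(writer, **down)
    log_touch_event(writer, **up)

print(swipe_speed(down, up))  # degrees/second between the two events
```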

As a student in the IMHCI REU Program in the INIT Lab at the University of Florida, I have had many learning opportunities this summer. First and foremost, I have become more proficient in task management. This was a necessity in my first research experience, as I quickly saw that some of my assigned tasks had higher priority or tighter deadlines than others, while at other times I had independent time to read research papers relevant to the project. At the beginning of the summer, I was also hesitant to collaborate with others. Now, I find it natural to ask peers and mentors questions and even to co-produce work on a variety of assignments. I am currently a third-year student at Elon University majoring in Computer Science (BS) and Mathematics (BS) with a minor in Physics. As the summer continues, I look forward to seeing how children will interact with the sphere during our study sessions.


References

  1. Brewer, R., Anthony, L., Brown, Q., Irwin, G., Nias, J., and Tate, B. Using gamification to motivate children to complete empirical studies in lab environments. Proc. IDC 2013, ACM Press (2013), 388–391.

The TIDESS team has begun qualitative analysis of the data we collected from the tabletop user study to try to characterize how people learn from data visualizations on interactive tabletop displays. During the study sessions, we collected audio and video recordings, as well as logs of all the touch interactions (gestures) participants performed with the prototype.

To start, the team transcribed the user study session videos so that the transcriptions could be coded, or labeled with the interesting behaviors and spoken utterances that occurred. We then timestamped the transcripts so that participants’ words could be matched with the gesture data logged by the application on the display. The team constructed a codebook for the utterances by reviewing prior literature in the learning sciences and collaborative learning, along with insights from our own past work and the goals of this study. The codes in the codebook allow the team to characterize group dynamics, collaborative work, and group meaning-making. To refine the codebook, all of the team members coded sample transcripts, and we discussed any disagreements during our team meetings until we agreed on an initial coding procedure.
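
As an illustration of the timestamp-matching step, the sketch below pairs each timestamped utterance with the touch events logged within a small time window around it; the field names and the two-second tolerance are assumptions for illustration, not the team's actual analysis scripts.

```python
TOLERANCE = 2.0  # seconds within which an utterance and a gesture co-occur

def align(utterances, touches, tolerance=TOLERANCE):
    """Pair each utterance with all touch events within `tolerance` seconds.
    Both inputs are lists of dicts keyed by a shared 'time' field."""
    pairs = []
    for u in utterances:
        nearby = [t for t in touches if abs(t["time"] - u["time"]) <= tolerance]
        pairs.append((u, nearby))
    return pairs

utterances = [{"time": 10.2, "text": "look, the water is warmer here"}]
touches = [{"time": 9.5, "gesture": "drag"}, {"time": 30.0, "gesture": "zoom"}]
print(align(utterances, touches))  # the drag co-occurs; the zoom does not
```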

We are using MaxQDA, a qualitative data analysis program, to facilitate the coding. MaxQDA allows us to take all of the codes from the codebook and use them to classify the speech and actions in the transcripts. We’ve already begun coding the transcripts; next, we will analyze the coded transcripts to see where the interactions with the prototype helped or hindered group learning.

I am a third-year Computer Science student at Brooklyn College, working in the INIT Lab for the summer. It has been really interesting to see how people interacted with the interactive touchscreen tabletop display. Working with this team to help analyze the data has been fun, and I look forward to continuing the analysis of the data.

For more project information, please visit the TIDESS website.
