Citation:
Morrison-Smith, S., Aloba, A., Lu, H., Benda, B., Esmaeili, S., Flores, G., Smith, J., Soni, N., Wang, I., Joy, R., Woodard, D. L., Ruiz, J., and Anthony, L. 2020. MMGatorAuth: A Novel Multimodal Dataset for Authentication Interactions in Gesture and Voice. Proceedings of the 2020 International Conference on Multimodal Interaction (ICMI '20), October 25–29, 2020, Virtual Event, Netherlands. ACM, New York, NY, USA, 8 pages, to appear. [PDF]
Abstract:
“The future of smart environments is likely to involve both passive and active interactions on the part of users. Depending on what sensors are available in the space, users may make use of multimodal interaction modalities such as hand gestures or voice commands. There is a shortage of robust yet controlled multimodal interaction datasets for smart environment applications. One application domain of interest based on current state-of-the-art is authentication for sensitive or private tasks, such as banking and email. We present a novel, large multimodal dataset for authentication interactions in both gesture and voice, collected from 106 volunteers who each performed 10 examples of each of a set of hand gesture and spoken voice commands chosen from prior literature (10,600 gesture samples and 13,780 voice samples). We present the data collection method, raw data and common features extracted, and a case study illustrating how this dataset could be useful to researchers. Our goal is to provide a benchmark dataset for testing future multimodal authentication solutions, enabling comparison across approaches.”
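To make the intended benchmark use concrete, here is a minimal sketch of a per-user enrollment/verification split, the kind of protocol an authentication evaluation on this dataset would need. The directory layout, file naming convention (`u042_cmd03_rep07.wav`), and `DATA_ROOT` path below are hypothetical assumptions for illustration, not the dataset's actual format.

```python
# Hypothetical sketch: split each user's samples into enrollment and
# verification sets, as an authentication benchmark protocol would.
# Directory layout and file naming are assumptions, not the real format.
import os
import random
from collections import defaultdict

DATA_ROOT = "mmgatorauth/voice"  # assumed location of per-user sample files


def load_samples(root):
    """Group sample file paths by user ID, assuming names like 'u042_cmd03_rep07.wav'."""
    by_user = defaultdict(list)
    for name in os.listdir(root):
        user_id = name.split("_")[0]  # assumed naming convention
        by_user[user_id].append(os.path.join(root, name))
    return by_user


def enroll_verify_split(by_user, n_enroll=5, seed=0):
    """For each user, hold out n_enroll samples for enrollment;
    the remaining samples serve as verification probes."""
    rng = random.Random(seed)
    enroll, verify = {}, {}
    for user, samples in by_user.items():
        shuffled = samples[:]
        rng.shuffle(shuffled)
        enroll[user] = shuffled[:n_enroll]
        verify[user] = shuffled[n_enroll:]
    return enroll, verify


if __name__ == "__main__":
    by_user = load_samples(DATA_ROOT)
    enroll, verify = enroll_verify_split(by_user)
    print(f"{len(by_user)} users; "
          f"{sum(map(len, enroll.values()))} enrollment / "
          f"{sum(map(len, verify.values()))} verification samples")
```

Fixing the random seed in the split is what makes results comparable across approaches, which is the stated goal of releasing the dataset as a benchmark.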