Sunday, November 13, 2011

Paper Reading #31- Identifying emotional states using keystroke dynamics

Title: Identifying emotional states using keystroke dynamics
Reference Information:
Clayton Epp, Michael Lippold, and Regan Mandryk. "Identifying emotional states using keystroke dynamics." CHI '11: Proceedings of the 2011 Annual Conference on Human Factors in Computing Systems. ACM, New York, NY, USA, 2011. ISBN: 978-1-4503-0228-9.
Author Bios:
Clayton Epp- Senior software developer in the computer software industry; conducted this research in the Department of Computer Science at the University of Saskatchewan, Canada.
Michael Lippold- Principal/systems engineer at Smartacus, Inc., Portland, Oregon.
Regan Mandryk- Assistant Professor in the Department of Computer Science at the University of Saskatchewan.
Summary:
  • Hypothesis: If the authors can collect users' keystrokes and identify patterns in them, then it is possible to determine the emotional state of the user while he or she is typing.
  • Methods: Two main areas of research are the focus of this paper: data collection, and data processing to determine emotional state. The authors used the experience-sampling methodology (ESM) for their studies. In ESM, users are asked to periodically record their experiences alongside their everyday activities, which allows data to be collected "in the moment" rather than retrospectively. Throughout their day, users would be prompted to type a fixed piece of text and record their emotional state, and the system would also capture the keystrokes they had made in the 10 minutes leading up to the prompt. Users could opt out of any given prompt if they did not want to participate at that time. Once data was collected, three kinds of features were considered: keystroke/content features, emotional state classes, and additional data points. Keystroke features included key press/release events along with their timestamps (see the keystroke-timing sketch after this list). Emotional state classes refer to labeling the users' Likert-scale responses with discrete emotional classes. Additional data points refer to extra contextual information for each user (e.g. the user taking a break from the computer, the user switching to the mouse, or the names of the processes the user is working in).
  • Results: The authors used decision trees, a form of supervised learning, to classify the user data into emotional state classes (see the classifier sketch after this list). Before classification, some data was discarded because it was skewed (responses falling predominantly at the extremes of the Likert scale). The authors showed that their system can correctly classify at least two levels of seven emotional states.
  • Content: The authors wanted to create a system that classifies users' emotional states simply by analyzing keystroke patterns and features, and they wanted that system to be inexpensive and unobtrusive. They were able to classify some emotional states to a high degree of accuracy (77-84%). However, they had to throw away a lot of data because of a lack of responses from users, they had to agglomerate data because of skew, and they had to discard some classifiers from their results because of poor accuracy.
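To make the keystroke features concrete, here is a minimal sketch (not the authors' actual code) of how key press/release events with timestamps might be reduced to simple timing features such as key duration and the latency between consecutive keys. The event structure and feature names are my own assumptions for illustration.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class KeyEvent:
    key: str
    press_time: float    # key-down timestamp, in seconds
    release_time: float  # key-up timestamp, in seconds

def extract_features(events: List[KeyEvent]) -> Dict[str, float]:
    """Summarize a window of keystrokes into basic timing features."""
    # How long each key was held down.
    durations = [e.release_time - e.press_time for e in events]
    # Digraph latency: gap between one key's release and the next key's press.
    latencies = [b.press_time - a.release_time for a, b in zip(events, events[1:])]
    mean = lambda xs: sum(xs) / len(xs) if xs else 0.0
    return {
        "mean_key_duration": mean(durations),
        "mean_digraph_latency": mean(latencies),
        "keystroke_count": float(len(events)),
    }
```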
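The classification step can be sketched in a similarly hedged way. The paper uses decision trees; the snippet below uses scikit-learn's DecisionTreeClassifier as a stand-in, with made-up feature rows and Likert responses collapsed into two classes (mirroring how skewed ratings were agglomerated). The data and parameters are illustrative only, not the study's.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Hypothetical feature matrix: one row per typing sample
# (mean key duration, mean digraph latency, keystroke count).
X = np.array([
    [0.11, 0.25, 42],
    [0.09, 0.18, 57],
    [0.14, 0.31, 33],
    [0.10, 0.22, 49],
    [0.13, 0.28, 38],
    [0.08, 0.17, 61],
])
# Likert responses for one emotional state collapsed into two classes
# (0 = low, 1 = high).
y = np.array([1, 0, 1, 0, 1, 0])

clf = DecisionTreeClassifier(max_depth=3, random_state=0)
scores = cross_val_score(clf, X, y, cv=3)
print("cross-validated accuracy:", scores.mean())
```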
Discussion:
This paper was all right. I like the idea of classifying a user's emotion from keystrokes, but I doubt the accuracy of such a system because so many factors go into that classification. For example, I might type fast because I am angry, or simply because I naturally type fast and don't have any extreme emotion at the moment. This system could definitely use some improvement, but I suppose the authors are on the right track. In my opinion, they achieved their goals only somewhat: they laid a foundation for classification, but the classification itself is nowhere near solid. They did, however, identify a lot of areas for improvement. I'll believe in this kind of technology and its success when I see it.
