Reference Info:
Jeremy Scott, David Dearman, Koji Yatani, and Khai N. Truong. "Sensing foot gestures from the pocket." In Proceedings of the 23rd Annual ACM Symposium on User Interface Software and Technology (UIST '10). ACM, New York, NY, USA, 2010. ISBN: 978-1-4503-0271-5.
Author Bios:
Jeremy Scott- At the time of this paper, Jeremy was with the Department of Computer Science at the University of Toronto.
David Dearman- David is a Ph.D. student in the Department of Computer Science at the University of Toronto.
Koji Yatani- Koji is a Ph.D. candidate working with Prof. Khai N. Truong at the Dynamic Graphics Project, University of Toronto. His research interests lie in Human-Computer Interaction (HCI) and ubiquitous computing with an emphasis on hardware and sensing technologies.
Khai Truong- Khai is an Associate Professor in the Department of Computer Science at the University of Toronto.
Summary:
- Hypothesis: If a mobile device is located in someone's pocket, then explicit foot movements can be defined as eyes-free and hands-free input gestures for interacting with the device with a high degree of accuracy.
- Methods: The authors first conducted a study testing users' ability to lift and rotate the foot to perform foot-based interactions; software modeled participants' feet and logged their movements. A second study used a phone's built-in accelerometer to recognize users' gestures with machine learning (a minimal sketch of such a pipeline follows this summary). They then ran a study asking participants to select menu items on the phone with foot-based gestures, using four different kinds of rotations/movements.
- Results: The initial study showed that users can perform gestures more accurately, and more comfortably, by lifting their toes rather than their heels. The second study's recognizer achieved roughly 86% accuracy. In the menu-selection trials, the less the foot had to move or rotate, the faster selections were made. Across the board, heel rotation was by far the most comfortable way to produce foot gestures; however, toe rotation was the most efficient way to make selections and perform gestures, with the smallest error in those trials (see the angle-to-item mapping sketch below).
- Contents: The authors wanted to create a device that can be used without demanding the majority of a user's attention, focus, and concentration while multitasking. To that end, they used a phone's built-in accelerometer to recognize foot-based gestures, letting users operate the device without even receiving visual feedback. This way, users can multitask effectively while still using features on their phone.
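The paper's actual recognizer isn't reproduced in this post, but the idea summarized above ("accelerometer plus machine learning") boils down to: window the raw x/y/z signal, extract a few statistical features per window, and classify. Here is a minimal hypothetical sketch of that pipeline; the feature set, the nearest-centroid classifier, and the synthetic data are all my own illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of an accelerometer-based gesture recognizer.
# NOT the paper's implementation; it only illustrates the general idea of
# "featurize a window of accelerometer samples, then classify".
import numpy as np

def extract_features(window):
    """window: (n_samples, 3) array of x/y/z accelerometer readings.
    Returns per-axis mean, standard deviation, and mean absolute jerk."""
    feats = []
    for axis in range(window.shape[1]):
        a = window[:, axis]
        feats += [a.mean(), a.std(), np.abs(np.diff(a)).mean()]
    return np.array(feats)

class NearestCentroidClassifier:
    """Tiny stand-in for a real ML classifier (the paper used its own)."""
    def fit(self, X, y):
        X, y = np.asarray(X), np.asarray(y)
        self.classes_ = sorted(set(y))
        self.centroids_ = {c: X[y == c].mean(axis=0) for c in self.classes_}
        return self

    def predict(self, x):
        # Pick the gesture class whose feature centroid is closest.
        return min(self.classes_,
                   key=lambda c: np.linalg.norm(x - self.centroids_[c]))

# Synthetic demo: two fake gesture classes with different motion energy.
rng = np.random.default_rng(0)
windows = [rng.normal(0, 0.1, (50, 3)) for _ in range(20)] + \
          [rng.normal(0, 1.0, (50, 3)) for _ in range(20)]
labels = ["heel_rotation"] * 20 + ["toe_rotation"] * 20

clf = NearestCentroidClassifier().fit(
    [extract_features(w) for w in windows], labels)
print(clf.predict(extract_features(rng.normal(0, 1.0, (50, 3)))))
```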
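Likewise, the menu-selection result (smaller required rotations are faster) rests on mapping a continuous foot-rotation angle onto a few discrete menu items. As a rough illustration only, with a made-up 0-45 degree range and a four-item menu rather than the paper's actual parameters, the mapping can be as simple as dividing the rotation range into equal segments:

```python
def angle_to_menu_item(angle_deg, min_deg=0.0, max_deg=45.0, num_items=4):
    """Map a foot-rotation angle to a menu index (0..num_items-1).
    The 0-45 degree range and 4-item menu are illustrative assumptions."""
    clamped = max(min_deg, min(angle_deg, max_deg))
    segment = (max_deg - min_deg) / num_items
    return min(int((clamped - min_deg) // segment), num_items - 1)

assert angle_to_menu_item(5.0) == 0    # small rotation -> first item
assert angle_to_menu_item(44.0) == 3   # near-full rotation -> last item
```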
I had a hard time reading this article, honestly. I think the idea is just too weird to ever become a mainstream feature of technology. There is no visual feedback, and users are always going to prefer touch/buttons to foot (of all things...) interfacing. It is not natural, and users will still have to devote a decent level of concentration to using it, which makes the authors' point about multitasking moot. I would never personally use this device or the technology myself. I do believe, however, that the authors achieved their goals: they were able to use a foot to draw gestures and interface with a phone with a high degree of accuracy. I'm sure they are proud of what they did, even if not too many other people care that much.