Reference Information:
Andrew D. Wilson and Hrvoje Benko. "Combining multiple depth cameras and projectors for interactions on, above and between surfaces." In UIST '10: Proceedings of the 23rd Annual ACM Symposium on User Interface Software and Technology. ACM, New York, NY, USA, 2010. ISBN: 978-1-4503-0271-5.
Author Bios:
Andrew Wilson- Andrew D. Wilson is a researcher at Microsoft Research, where he applies novel sensing technologies, including depth cameras, to human-computer interaction. He received his PhD from the MIT Media Lab and has collaborated with Hrvoje Benko on surface computing projects such as LightSpace.
Hrvoje Benko- I am a researcher in the Adaptive Systems and Interaction group at Microsoft Research. My research is on novel surface computing technologies and their impact on human-computer interaction. Prior to working at Microsoft, I received my PhD from Columbia University, working on augmented reality projects that combine immersive experiences with interactive tabletops.
Summary:
- Hypothesis: "The rich, almost analog feel of a
dense 3D mesh updated in real time invites an important
shift in thinking about computer vision: rather than struggling
to reduce the mesh to high-level abstract primitives,
many interactions can be achieved by less destructive
transformations and simulation on the mesh directly." - Methods: LightSpace allows users to interact with virtual objects within the realm of the environment. It supports through-body interactions, between-hands interactions, the picking up of objects, spatial interactions, and other features using unique depth analysis and algorithms. It is a hardware device suspended from the ceiling consisting of multiple depth cameras and projectors to track movements and interactions within the "smart room" environment. The cameras and projectors must be calibrated to share the same 3D environment.
- Results: Via a three-day user trial and feedback session, the authors found that six users is currently the maximum the "smart room" can accommodate while still maintaining accurate interaction tracking, and the system noticeably slowed once more than two users were interacting simultaneously. Occasionally, an interaction from one user was never registered because another user was blocking a camera, so LightSpace never picked it up. Once users began interacting with the device, however, they felt it was very natural, and few had problems actually performing the interactions.
- Content: The authors presented a system that allows users to inhabit a "virtual environment" and interact spatially with objects within a certain range. They explored different spatial interactions built on new depth-tracking techniques, and the approach proved fruitful.
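
To make the calibration step concrete, here is a minimal sketch (my own illustration, not the authors' code) of the two mappings a shared 3D coordinate space requires: back-projecting a depth-camera pixel into room coordinates, and projecting a room point into a projector's image so graphics can be drawn there. It assumes standard pinhole models; the intrinsics (FX, FY, CX, CY) and the matrices CAM_TO_WORLD and WORLD_TO_PROJ are placeholders that a real system would recover during calibration.

    import numpy as np

    # Hypothetical calibration data -- placeholders, not values from the paper.
    FX, FY = 570.3, 570.3      # assumed depth-camera focal lengths (pixels)
    CX, CY = 320.0, 240.0      # assumed principal point (pixels)

    # 4x4 rigid transform from this depth camera's frame into the shared room
    # frame, recovered during the one-time calibration the paper describes.
    CAM_TO_WORLD = np.eye(4)   # placeholder; a real system would load this

    # 3x4 projection matrix mapping room points into one projector's image,
    # also recovered during calibration. Placeholder values.
    WORLD_TO_PROJ = np.hstack([np.eye(3), np.zeros((3, 1))])

    def depth_pixel_to_world(u, v, depth_m):
        """Back-project depth pixel (u, v) at depth_m meters into room coordinates."""
        # Standard pinhole back-projection into the camera's own frame.
        x = (u - CX) * depth_m / FX
        y = (v - CY) * depth_m / FY
        cam_point = np.array([x, y, depth_m, 1.0])
        return (CAM_TO_WORLD @ cam_point)[:3]

    def world_to_projector_pixel(world_point):
        """Project a room-coordinate point into a projector's image, so graphics
        can be drawn on whatever surface (or body) occupies that 3D spot."""
        p = WORLD_TO_PROJ @ np.append(world_point, 1.0)
        return p[:2] / p[2]    # perspective divide

    # Example: a pixel at the image center, 1.5 m from the camera, lands at
    # (0, 0, 1.5) in camera coordinates and then in the room frame.
    print(depth_pixel_to_world(320, 240, 1.5))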
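
In the same spirit, here is a hedged sketch of the "less destructive" mesh processing the hypothesis calls for, applied to the through-body interactions mentioned above: rather than fitting a hand or skeleton model, simply test whether one connected blob of depth data physically bridges two surfaces. The top-down height map, the region bounds, and the height threshold are all assumptions for illustration, not parameters from the paper.

    import numpy as np
    from scipy import ndimage

    # Assumed regions of a top-down height map of the room, in pixels.
    # These bounds are illustrative, not taken from the paper.
    TABLE_REGION = (slice(100, 200), slice(100, 300))
    WALL_REGION = (slice(0, 20), slice(0, 400))

    def connects_table_and_wall(height_map, min_height_m=0.05):
        """Return True if a single connected blob of above-floor depth data
        (e.g., a user's body) touches both the table and the wall regions."""
        occupied = height_map > min_height_m    # anything rising above the floor
        labels, _ = ndimage.label(occupied)     # 2D connected-component labeling
        table_blobs = set(np.unique(labels[TABLE_REGION])) - {0}
        wall_blobs = set(np.unique(labels[WALL_REGION])) - {0}
        return bool(table_blobs & wall_blobs)   # shared label => physically connected

If the same labeled blob shows up in both regions, the system can treat it as a physical connection, e.g., a user touching the wall with one hand and the table with the other, and trigger a through-body transfer.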
I think this would be really cool to have and play around with. But just like recent papers we have read, I fail to see any real-world applications for this technology so far. Their depth-tracking ideas and the algorithms they devised for the different interactions were ingenious. I didn't much like how easily users could be blocked from being registered, however: all it took was another user getting in the way of a camera angle to completely negate an action. I believe the authors achieved their goals. They set out to create a "smart room", and that's what they got. If I had the resources, I would definitely buy one of these for a room in my house and just go nuts on it like I was working on my PC.