Monday, October 31, 2011

Paper Reading #26- Embodiment in brain-computer interaction

Title: Embodiment in brain-computer interaction
Reference Information:
Kenton O'Hara, Abigail Sellen, and Richard Harper, "Embodiment in brain-computer interaction". CHI '11 Proceedings of the 2011 annual conference on Human factors in computing systems. ACM New York, NY, USA. ©2011. ISBN: 978-1-4503-0228-9.
Author Bio:
Kenton O'Hara- Researcher at Microsoft Research Cambridge in the Socio-Digital Systems group.
Abigail Sellen- Abigail Sellen is a Principal Researcher at Microsoft Research Cambridge in the UK and co-manager of Socio-Digital Systems, an interdisciplinary group with a focus on the human perspective in computing.
Richard Harper- Richard Harper is Principal Researcher at Microsoft Research in Cambridge and co-manages the Socio-Digital Systems group.
Summary:
  • Hypothesis: If BCI technology can be effectively harnessed, then it can be applied beyond video gaming, possibly in combination with other technologies.
  • Methods: The authors tested the effectiveness of BCI with a video game called MindFlex, in which the player keeps a ball floating by concentrating (not necessarily on the ball; on anything, really). Higher concentration sends a higher "fan level" to the game, and the higher the fan level, the higher the ball is kept aloft (a rough sketch of this mapping appears after this list). The authors conducted a user study of participants (in groups of at least two, in a normal social setting) playing the game over a trial period to observe social relationships, interactions with the game, and coordination of gameplay.
  • Results: The video recordings and gameplay data show that participants adopted strategies to keep the ball up (they would hold certain poses or do certain things because they believed it would help). Gaze, intent, vision, imagination, proximity to the ball, and gestures all played roles in gameplay. Spectators also shaped how the players played: sometimes they joked with the player, and sometimes they helped them play.
  • Content: The authors wanted to create an environment where the limits and applications of a BCI game could be recorded. To do this, they gave the MindFlex game to participants to play for a week and later studied how it was played in terms of group size, strategy, focus, gestures, time spent playing, and so on. They found that each user approached the game uniquely and had a different style of play. Even the spectators had a role to play in shaping the player's gameplay.
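As a rough, purely illustrative sketch of the control loop described in the Methods bullet above (the 0-1 attention scale, the thresholds, and the "physics" here are my own assumptions, not details from the paper or the MindFlex hardware):

```python
def attention_to_fan_level(attention, num_levels=5):
    """Map a normalized attention reading (0.0-1.0) to a discrete fan level.

    The 0-1 'attention' scale and the number of levels are invented for this
    example; the real headset and game use their own proprietary scale.
    """
    attention = max(0.0, min(1.0, attention))
    return int(round(attention * (num_levels - 1)))

def update_ball_height(height, fan_level, gravity=0.5, lift_per_level=0.4):
    """Nudge the ball up when the fan outpaces gravity, down otherwise."""
    return max(0.0, height + fan_level * lift_per_level - gravity)

# A player concentrating harder pushes the ball upward over a few readings.
height = 0.0
for attention in [0.2, 0.5, 0.9, 0.9, 0.3]:
    height = update_ball_height(height, attention_to_fan_level(attention))
    print(f"attention={attention:.1f} -> height={height:.2f}")
```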
Discussions:
I thought this was kind of cool. I think I would have fun if I were able to play a game like this, though I wonder how difficult it would be. I noticed that some players had difficulty controlling the ball at times. I wonder what kinds of gestures, movements, or strategies I would adopt to do well in the game. I don't think spectators would have an effect on me at all. I think the authors definitely showed that the power of BCI technology is tangible and applicable to (at least) video gaming. Like the authors said, there are many other avenues of application for technologies like BCI. I think the authors achieved their goals for sure.

Paper Reading #25- Twitinfo: aggregating and visualizing microblogs for event exploration

Title: Twitinfo: aggregating and visualizing microblogs for event exploration
Reference Information:
Adam Marcus, Michael Bernstein, Osama Badar, David Karger, Samuel Madden, and Robert Miller, "Twitinfo: aggregating and visualizing microblogs for event exploration". CHI '11 Proceedings of the 2011 annual conference on Human factors in computing systems. ACM New York, NY, USA. ©2011. ISBN: 978-1-4503-0228-9.
Author Bio:
Adam Marcus- I am a graduate student at the MIT Computer Science and Artificial Intelligence Lab (CSAIL), where I am part of the Database group and the Haystack group. My advisors are Sam Madden and David Karger.
Michael Bernstein- I am a final-year graduate student focusing on human-computer interaction at MIT in the Computer Science and Artificial Intelligence Lab. I work with Professors David Karger and Rob Miller.
Osama Badar- Graduate student at MIT in the CSAIL.
David Karger- I am a member of the Computer Science and Artificial Intelligence Laboratory in the EECS department at MIT.
Samuel Madden- Sam is an Associate Professor in the EECS department at MIT. He is also a part of the CSAIL group.
Robert Miller- I'm an associate professor in the EECS department at MIT, and leader of the User Interface Design Group in the Computer Science and Artificial Intelligence Lab.
Summary:
  • Hypothesis: If the authors can organize Twitter activity into an effectively communicated, timeline-based display, then exploring events on Twitter will be easier and richer than with Twitter's existing presentation of information.
  • Methods: The authors created an algorithm that automatically labels an event (defined by a keyword) whenever it is tweeted about more than a certain number of times per time unit, so that activity on the timeline "peaks" when users are tweeting about the event (a simplified sketch of this kind of peak detection follows this list). The authors also implemented "sentiment analysis" for events (i.e., is this event "positive" or "negative", judging by users' comments and feedback about it). TwitInfo also allows the creation and browsing of subevents. TwitInfo was evaluated against three soccer matches as well as a month-long collection of raw data, and the authors then recruited 12 people to evaluate the UI.
  • Results: TwitInfo is biased by Twitter users' interests. It was effective at identifying events for both the soccer matches and the earthquakes. Occasionally, TwitInfo returned false positives, such as someone tweeting a soccer term that wasn't actually about the match being observed. In the UI study, the results indicated that TwitInfo is an effective source of news even without prior knowledge of the events: users were able to find events on the timeline and read summaries of sub-events. Users didn't always agree with the sentiment analysis, however.
  • Content: The authors wanted a better way to browse Twitter events and get caught up on the news relevant to a keyword-defined event. They created a UI built around a timeline-based display where users could find "high peaks" of interest and catch up on them. The authors also created an engine to determine whether an event was perceived as positive or negative in different parts of the world.
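To make the peak idea concrete, here is a deliberately simplified sketch of detecting bursts in per-minute tweet counts. It is my own toy version (a trailing-window ratio test); the paper's actual algorithm is more refined (an adaptively weighted moving mean and deviation), so treat the function and its parameters as assumptions for illustration.

```python
def detect_peaks(counts, window=5, ratio=2.5):
    """Flag time bins whose tweet count jumps well above the recent average.

    Compares each bin to the mean of the previous 'window' bins and flags it
    when it is 'ratio' times higher. Both parameters are invented for this
    sketch and are not TwitInfo's actual peak-detection settings.
    """
    peaks = []
    for i in range(window, len(counts)):
        baseline = sum(counts[i - window:i]) / window
        if baseline > 0 and counts[i] > ratio * baseline:
            peaks.append(i)
    return peaks

# Tweets per minute for a keyword: the burst at indices 5-6 is flagged.
print(detect_peaks([10, 12, 11, 13, 12, 80, 75, 20, 12, 11]))   # [5, 6]
```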
Discussions:
I think this is very interesting, but as an avid anti-Twitter-er, I won't be using this technology. I have nothing against Twitter, I just got a Facebook first. The way the authors were able to take all of the information from Twitter and personal tweets and organize it into meaningful, timelined events that users could browse is a very good idea, though. I believe the authors weren't completely happy with their results as far as the user evaluation and journalist evaluation of TwitInfo went, but I'd say that they achieved their goals. The technology they created is definitely better than what was already in place for Twitter, anyway.

Thursday, October 27, 2011

Paper Reading #24- Gesture avatar: a technique for operating mobile user interfaces using gestures

Title: Gesture avatar: a technique for operating mobile user interfaces using gestures
Reference Information:
Hao Lu and Yang Li, "Gesture avatar: a technique for operating mobile user interfaces using gestures". CHI '11 Proceedings of the 2011 annual conference on Human factors in computing systems. ACM New York, NY, USA. ©2011. ISBN: 978-1-4503-0228-9.
Author Bios:
Hao Lu- I am a graduate student at University of Washington Computer Science & Engineering and the DUB Group. I work on technologies that change and improve the way people interact with computers.
Yang Li- Yang is a Senior Research Scientist at Google. Before joining Google's research team, Yang was a Research Associate in Computer Science & Engineering at the University of Washington and helped found DUB (Design:Use:Build), a cross-campus HCI community. He earned a Ph.D. in Computer Science from the Chinese Academy of Sciences and then did postdoctoral research in EECS at the University of California, Berkeley.
Summary:
  • Hypothesis: If gesture-based operation of mobile interfaces can outperform direct finger-based touch input, then it will prevail in dynamic, mobile environments.
  • Methods: To correctly and accurately model a user's intent on a page, both the shape of the gesture and the distance from the gesture to candidate objects are taken into consideration (see the sketch after this list). After the user draws a gesture, an "avatar" bounding box appears translucently behind it. The user can then interact with that avatar the same way they would the desired object, except the avatar is larger than the object on the page, so it is easier to interact with. If the highlighted object is not the desired one, the user may discard it using several techniques. The authors conducted a study among 20 participants, half of whom learned Gesture Avatar first and then Shift, and vice versa. They were asked to complete tasks using each piece of software. For some tasks users were sitting down, and for others they were on a treadmill (to simulate using the device while walking).
  • Results: From the study, the authors found that Gesture Avatar was slower than Shift for 20px targets but faster for 10px targets. Users using Shift were much faster sitting down than walking on the treadmill, whereas for Gesture Avatar there was no significant difference in time. Gesture Avatar's error rates were lower than Shift's for all target sizes, and 10 out of the 12 participants preferred Gesture Avatar over Shift. The authors also found that increasing the number of unique characters that could be drawn had no effect on Gesture Avatar's performance. Gesture Avatar supports one-shot interaction and acquiring moving targets as well. There have been some issues integrating Gesture Avatar into existing systems, however.
  • Content: The authors wanted to create a way to use your mobile device accurately while in a mobile environment. Touch-activated UIs can be highly inaccurate because of the "fat finger" and occlusion problems. To correct for these, the authors created a technique that lets the user "call" a desired object by "describing" it with a gesture, so that Gesture Avatar can enlarge it and it can be more accurately interacted with (and the correct object is more likely to be the one acted on). After their studies, the authors showed that Gesture Avatar is superior to similar existing systems.
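As a rough illustration of the matching idea in the Methods bullet (combine how well the drawn gesture matches a target with how close that target is to the gesture), here is a minimal sketch. The scoring weights, the Gaussian distance falloff, and the "first letter of the label" shape match are my own stand-ins, not the paper's actual gesture recognizer.

```python
import math

def pick_target(drawn_char, gesture_center, targets, sigma=80.0):
    """Choose the on-screen target a drawn gesture most likely refers to.

    Each target is (label, x, y). The score combines a crude shape match
    ("does the label start with the drawn character?") with a Gaussian
    falloff on distance from the gesture, so nearby, well-matching targets
    win. Illustrative stand-in only, with made-up weights.
    """
    gx, gy = gesture_center
    best, best_score = None, 0.0
    for label, x, y in targets:
        shape_score = 1.0 if label.lower().startswith(drawn_char.lower()) else 0.1
        dist = math.hypot(x - gx, y - gy)
        distance_score = math.exp(-(dist ** 2) / (2 * sigma ** 2))
        score = shape_score * distance_score
        if score > best_score:
            best, best_score = label, score
    return best

links = [("News", 40, 100), ("Next", 300, 110), ("Settings", 60, 400)]
print(pick_target("N", (280, 120), links))   # picks "Next": matches and is closest
```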
Discussion:
I thought this was kind of a neat idea. At first, I thought "Oh man, Yang Li is at it with another dumb gesture recognition idea again..." but I was not entirely correct. This time, I believe the proposed system can be extremely useful, especially in a mobile environment. I also liked how the different ways of ensuring the correct object is selected are implemented. Gesture Avatar allows for discarding incorrect objects, finding the next best match, interacting more accurately with desired objects, and letting the existing features of a UI continue to be used if the user doesn't want to use Gesture Avatar techniques on a particular page. I might consider using something like this if I had a smartphone / touch-screen phone. I think Yang and Hao achieved their goals. It always feels good to know something you created out-performs something already in existence.

Monday, October 24, 2011

Paper Reading #23- User-defined motion gestures for mobile interaction

Title: User-defined motion gestures for mobile interaction
Reference Information:
Jamie Ruiz, Yang Li, and Edward Lank. "User-defined motion gestures for mobile interaction". CHI '11 Proceedings of the 2011 annual conference on Human factors in computing systems. ACM New York, NY, USA. ©2011. ISBN: 978-1-4503-0228-9.
Author Bios:
Jamie Ruiz- I'm a fifth-year doctoral student in the Human Computer Interaction Lab in the Cheriton School of Computer Science at the University of Waterloo. My advisor is Dr. Edward Lank.
Yang Li- Yang is a Senior Research Scientist at Google. Before joining Google's research team, Yang was a Research Associate in Computer Science & Engineering at the University of Washington and helped found the DUB (Design:Use:Build), a cross-campus HCI community.
Edward Lank- I am an Assistant Professor in the David R. Cheriton School of Computer Science at the University of Waterloo.
Summary:
  • Hypothesis: If smartphones contain sensors that allow for 3D tracking and sensing of the phone's motion, then there exists a set of optimal motion gestures for invoking commands that maps very naturally to what users expect.
  • Methods: The authors conducted a "guessability study" in which participants were given tasks and asked what gesture or motion would map to each most naturally. For the experiment, users were told to treat the smartphone as a "magic black brick": the authors removed all recognition technology from it so the users wouldn't be influenced by anything of that nature, and they were told to invent gestures for performing the tasks from scratch. The participants were recorded via audio and video, and data was also collected from software on the phone to capture a "what was the user trying to do?" perspective. All participants had previous, relevant smartphone experience.
  • Results: Users most commonly proposed gestures unrelated to existing smartphone conventions. For example, a very popular gesture for returning to the home screen involved shaking the phone (as with an Etch A Sketch). In general, there was broad agreement among participants on both the gestures and the reasoning behind them, and the recorded think-aloud data gave the authors insight into each user's thought process while creating a gesture. Tasks considered "opposites" of each other tended to receive similar gestures performed in opposite directions; for example, zooming in and out on a map involved moving the phone closer to you or farther away, respectively. The authors then studied in detail what kind of gesture was created for each task and how the phone was treated during the gesture. "Agreement scores" were then calculated for each task to quantify how consistent the participants' gestures were (a sketch of this calculation follows this list).
  • Content: The authors wanted to create a more natural, easy-to-use set of motion gestures for interacting with smartphones, as opposed to plain touch-gesture interactions. To create such a set, the authors studied smartphone-using volunteers who invented their own gestures for performing tasks. Each gesture was then compared against every other participant's to see how "good" it was, and the specific kind of interaction and what was manipulated were also taken into account to see what general patterns emerged.
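The "agreement scores" mentioned above come from the guessability-study methodology this paper builds on; as I understand that formula, the agreement for one task is the sum of the squared proportions of participants who proposed the same gesture. The sketch below is my own reading of it, not code from the paper.

```python
from collections import Counter

def agreement_score(gestures):
    """Agreement for one task: sum of squared proportions of identical gestures.

    'gestures' is the list of gesture labels proposed by each participant for
    a single task. A score of 1.0 means everyone proposed the same gesture;
    lower scores mean more disagreement.
    """
    n = len(gestures)
    counts = Counter(gestures)
    return sum((c / n) ** 2 for c in counts.values())

# Example: 8 of 10 participants shook the phone to go home, 2 flipped it.
print(agreement_score(["shake"] * 8 + ["flip"] * 2))   # 0.64 + 0.04 = 0.68
```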
Discussion:
I love studies like this where there is no field-specific jargon, no technical processes, no high-level fancy vernacular to learn, and no convoluted end-goal in mind; this was very straightforward. The authors wanted a simple, optimal set of motion gestures for interacting with a smartphone, and what better way to get one than to let a sample of smartphone users "create" it? I definitely believe the authors achieved their goals, and I would consider using a smartphone with motion-based capabilities if the motions were based on studies like these. I enjoyed reading this paper.

Paper Reading #22- Mid-air pan-and-zoom on wall-sized displays

Title: Mid-air pan-and-zoom on wall-sized displays
Reference Information:
Mathieu Nancel, Julie Wagner, Emmanuel Pietriga, Olivier Chapuis, and Wendy Mackay. "Mid-air pan-and-zoom on wall-sized displays". CHI '11 Proceedings of the 2011 annual conference on Human factors in computing systems. ACM New York, NY, USA. ©2011. ISBN: 978-1-4503-0228-9.
Author Bios:
Mathieu Nancel- I am a Ph.D. student in Human-Computer Interaction in the in|situ| team (INRIA/LRI) since September 2008. I work on distal interaction techniques for visualization platforms: precise pointing, navigation in very large datasets, menu techniques, etc.
My Ph.D. supervisors are Michel Beaudouin-Lafon and Emmanuel Pietriga.
Julie Wagner- She is a Ph.D. student at INRIA. Her supervisor is Wendy Mackay.
Emmanuel Pietriga- Chargé de Recherche - CR1, interim leader of INRIA team In Situ.
Full-time research scientist working for INRIA Saclay - Île-de-France.
Olivier Chapuis- Chargé de recherche (Research Scientist) CNRS at LRI (CNRS & Univ. Paris-Sud).
Member and team co-head (by interim) of the InSitu research team (LRI & INRIA Saclay Ile-de-France).
Wendy Mackay- She is a Research Director with INRIA Saclay in France, currently on sabbatical at Stanford University. She runs a research group called in|situ|, whose focus is the design of innovative interactive systems that truly meet the needs of their users.

Summary:
  • Hypothesis: If the authors study and trial different methods for navigating wall-sized displays from mid-air, then they will arrive at an optimal gesture set suitable for real, complex applications in this problem space. (The authors had 7 separate hypotheses about particular aspects of their design, but I have combined them into one general hypothesis.)
  • Methods: The authors conducted a series of pilot tests, prior research reviews, and empirical studies to narrow all possible gesture and input methods for this system down to 12 candidate techniques. They took into account performance (cost and accuracy), fatigue over periods of use, ease of use, and naturalness of mapping. The authors then ran an experiment evaluating each of the 12 techniques to see which were optimal; they had their own ideas about which would win going in, but tested them to see if that was the case.
  • Results: After the experiment, the authors found that two-handed techniques were performed faster than one-handed ones, that involving smaller muscle groups in the input improves performance (and providing higher guidance improves it further), and that linear gestures were generally performed faster than circular ones. Circular gestures were slower because participants overshot their targets more often than with linear gestures (a small sketch of a linear zoom mapping follows this list). Feedback from the participants was generally in agreement with the results from the gathered data.
  • Content: The authors wanted to study a problem that has historically received little attention: how to effectively explore and navigate a wall-sized display with good performance, ease of use, and minimal fatigue. Via studies and pilot testing, they eventually arrived at an optimal set of interactions and gestures for such an interaction space.
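To make the "linear" side of the linear-versus-circular comparison concrete, here is a small sketch of the kind of mapping a linear zoom technique implies: displacement along one axis becomes a multiplicative zoom factor. The gain, the exponential form, and the clamping limits are my own illustrative choices, not parameters from the paper.

```python
def zoom_from_displacement(displacement_cm, gain=0.15, min_zoom=0.25, max_zoom=40.0):
    """Map a linear hand/thumb displacement to a multiplicative zoom factor.

    Positive displacement zooms in, negative zooms out; the exponential
    mapping makes equal movements feel like equal zoom steps. Gain and
    limits are invented for illustration, not values from the study.
    """
    zoom = 2 ** (gain * displacement_cm)
    return max(min_zoom, min(max_zoom, zoom))

for d in [-10, -5, 0, 5, 10]:
    print(f"{d:+} cm -> x{zoom_from_displacement(d):.2f}")
```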
Discussion:
I thought this paper was pretty interesting. This field of study has a lot of real-world applications such as (as mentioned in the paper) crisis management/response, large-scale construction, and security management areas. I thought their studies were accurate, appropriate, and extensive enough to gather relevant and meaningful data for designing a suitable system for the display. They definitely achieved their goals, in my opinion. I would have loved to have been a participant in this particular experiment; I think it would have been fun. I would consider using a technology like this if I was a billionaire and wanted to play a video game on my wall where I was a controller or something.

Thursday, October 20, 2011

Paper Reading #21- Human model evaluation in interactive supervised learning

Title: Human model evaluation in interactive supervised learning
Reference Information:
Rebecca Fiebrink, Perry Cook, and Dan Trueman. "Human model evaluation in interactive supervised learning". CHI '11: Proceedings of the 2011 annual conference on Human factors in computing systems ACM. New York, NY, USA. ©2011. ISBN: 978-1-4503-0228-9.
Author Bios:
Rebecca Fiebrink- I will be joining Princeton as an assistant professor in Computer Science and affiliated faculty in Music in September, 2011. I have recently completed my PhD in Computer Science at Princeton, and I will be spending January through August 2011 as a postdoc at the University of Washington. I work at the intersection of human-computer interaction, applied machine learning, and music composition and performance.
Perry Cook- Professor Emeritus* (still researches but no longer teaches or accepts new graduate students) at Princeton University in the department of Computer Science and Dept. of Music.
Dan Trueman- Professor in the department of Music at Princeton University.
Summary:
  • Hypothesis: If users are allowed to iteratively evaluate and update the current state of a working machine learning model, then the results and actions produced by that model will improve in quality.
  • Methods: The authors conducted three studies of people applying supervised learning to their work in computer music. Study "A" was a user-centered design process, study "B" was an observational study in which students used the Wekinator in an assignment focused on supervised learning, and study "C" was a case study with a professional composer building a gesture-recognition system.
  • Results: From the studies, the authors gathered and analyzed the results. In study "A", participants iteratively re-trained their models (by editing the training dataset); in study "B", the students re-trained the algorithm an average of 4.1 times per task; and the professional in study "C" re-trained it an average of 3.7 times per task. Cross-validation wasn't used in study "A", but in studies "B" and "C" it was used an average of 1 and 1.8 times per task, respectively. Direct evaluation was also part of how the system was assessed, and it was used more frequently than cross-validation: participants in "A" relied on it exclusively, while the students and the professional in studies "B" and "C" used direct evaluation an average of 4.8 and 5.4 times per task, respectively. Through cross-validation and direct evaluation, users received feedback on how their actions affected the outcomes (a minimal sketch of this train-evaluate-retrain loop follows this list). Overall, users were able to fully understand and use the system effectively, and the Wekinator allowed them to create more expressive, intelligent, higher-quality models than with other methods and techniques.
  • Content: The authors wanted to create a method that lets users provide feedback to the system iteratively while it is being used, in the hope of creating an interactive machine learning workflow. The authors conducted user studies to test their hypothesis and collected results suggesting that the methods they provided for performing these tasks were superior to other techniques.
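This is not the Wekinator's actual code, but a minimal sketch of the loop the paper describes: train a model, check it with cross-validation, try it directly on new input, add or edit examples, and retrain. scikit-learn, the k-NN classifier, and the toy data are all stand-ins I chose for illustration.

```python
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

# Toy "gesture feature" examples: [x, y] sensor values with class labels.
X = [[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9], [0.15, 0.12], [0.85, 0.95]]
y = [0, 0, 1, 1, 0, 1]

model = KNeighborsClassifier(n_neighbors=3)
print("initial cv accuracy:", cross_val_score(model, X, y, cv=3).mean())

# The user notices a region the model gets wrong, records new examples there,
# and retrains -- the iterative re-training behavior observed in the studies.
X += [[0.5, 0.55], [0.52, 0.48]]
y += [1, 0]
print("after adding examples:", cross_val_score(model, X, y, cv=4).mean())

model.fit(X, y)                       # direct evaluation: try it live on new input
print("prediction for [0.45, 0.5]:", model.predict([[0.45, 0.5]]))
```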
Discussion:
I thought this article was sort of interesting. It is a really good idea to have a system where you can tell the machine what is "good" or "bad" before it even spits out the final result. Being able to re-train your algorithm mid-workflow is really beneficial to have. The cost-benefit seems reasonable, so I could see this idea becoming more widespread before long. The authors, in my opinion, definitely achieved their goals and supported their hypothesis. I didn't understand cross-validation or direct evaluation in terms of the actual methods too deeply, but I know those factors were taken into consideration when collecting data on "satisfaction" with the system.

Monday, October 17, 2011

Paper Reading #20- The aligned rank transform for nonparametric factorial analyses using only anova procedures

Title: The aligned rank transform for nonparametric factorial analyses using only anova procedures
Reference Information:
Jacob Wobbrock, Leah Findlater, Darren Gergle, and James Higgins. "The aligned rank transform for nonparametric factorial analyses using only anova procedures". CHI '11: Proceedings of the 2011 annual conference on Human factors in computing systems. ACM, New York, NY, USA. ©2011. ISBN: 978-1-4503-0228-9.
Author Bios:
Jacob Wobbrock- He is an Associate Professor in the Information School and an Adjunct Associate Professor in the Department of Computer Science & Engineering at the University of Washington.
Leah Findlater- Is an Undergraduate Research Advisor for the University of Washington. Has served on projects related to HCI.
Darren Gergle- He is an Associate Professor in the departments of Communication Studies and Electrical Engineering & Computer Science (by courtesy) at Northwestern University. He also directs the CollabLab: The Laboratory for Collaborative Technology.
James Higgins- A professor in the department of statistics at Kansas State University.
Summary:
  • Hypothesis: If the Aligned Rank Transform (ART) method can be implemented for statistical analysis of nonparametric data, then studies involving interactions and other complex factors can be made more quantifiable and easier to manage.
  • Methods: The ART method "aligns" the data before applying averaged ranks; after that, the common ANOVA procedures can be used. ARTool and ARTweb are tools for applying the ART method to real studies. Unlike other statistical methods, ART doesn't violate ANOVA assumptions, inflate Type I error rates, forego interaction effects, or disregard study data as "too complex". The authors also made ART relatively easy to interpret: anyone familiar with ANOVA can interpret the results, and the method can consider "N" factors in its analysis. ART has 5 steps: 1) compute residuals; 2) compute estimated effects for all main and interaction effects; 3) compute the aligned response Y'; 4) assign averaged ranks Y''; 5) perform a full factorial ANOVA on Y'' (a sketch of these steps appears after this list). Correctness is checked by ensuring that each column of aligned data Y' sums to 0 and by showing that a full-factorial ANOVA on Y' has all effects stripped out except the one the data were aligned for. ARTool and ARTweb allow for simple interactions and clear data results.
  • Results: Findlater collected satisfaction ratings on accuracy and interface from 24 participants, and the authors found that their method was satisfactory to users on both counts. The authors also identified some limitations of ART (extreme skew, tied ranks, and non-randomized designs).
  • Content: The authors wanted to create a way to handle complex experimental data using familiar tests and analyses already in place. ART is favored over traditional methods because of its simplicity and usability, and ARTweb and ARTool make it convenient to apply ART to data and models on a computer, handling the alignment and ranking steps simply.
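The five ART steps listed in the Methods bullet can be written out directly. The sketch below does so for just one effect (the A×B interaction) in a balanced two-factor design, using pandas, scipy, and statsmodels; it follows my understanding of the published procedure, is not the ARTool/ARTweb implementation, and uses made-up data.

```python
import pandas as pd
from scipy.stats import rankdata
import statsmodels.formula.api as smf
import statsmodels.api as sm

# Made-up balanced 2x2 dataset with 6 observations per cell.
df = pd.DataFrame({
    "A": ["a1", "a1", "a2", "a2"] * 6,
    "B": (["b1"] * 4 + ["b2"] * 4) * 3,
    "Y": [3, 4, 10, 12, 5, 6, 7, 8, 4, 3, 11, 13,
          6, 5, 8, 7, 3, 5, 12, 11, 5, 7, 9, 6],
})

grand = df["Y"].mean()
cell = df.groupby(["A", "B"])["Y"].transform("mean")
mA = df.groupby("A")["Y"].transform("mean")
mB = df.groupby("B")["Y"].transform("mean")

residual = df["Y"] - cell                      # step 1: residuals
effect = cell - mA - mB + grand                # step 2: estimated A x B effect
df["aligned"] = residual + effect              # step 3: aligned response Y'
df["ranked"] = rankdata(df["aligned"])         # step 4: averaged ranks Y''

# Sanity check from the paper's description: aligned responses sum to ~0.
assert abs(df["aligned"].sum()) < 1e-9

# Step 5: full-factorial ANOVA on the ranks; only the A:B row is interpreted.
model = smf.ols("ranked ~ C(A) * C(B)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```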
Discussion:
I know next to nothing about statistics. I took one course on it and forgot nearly everything relatively fast, so I got lost pretty quickly in the paper. It seems, however, that the field of HCI needed a practical way to interpret data in studies concerned with, say, measuring whether a device was "good" or not (whether users deemed it "satisfactory" based on certain metrics). This seems like a great idea, and it comes coupled with programs that make ART even easier to use, so anyone with a basic knowledge of ANOVA and statistics can understand the ART results. I definitely believe the authors achieved their goals because their method and programs are probably used commonly in these target types of experiments. It is good for authors to be able to see the limitations of their own inventions, too; this way they can add a disclaimer or release a "2.0" to correct some of these issues. If I needed to do a study concerning HCI interactions, I would definitely consider using ART.

Paper Reading #19- Reflexivity in digital anthropology

Title: Reflexivity in digital anthropology
Reference Information:
Jennifer Rode, "Reflexivity in digital anthropology". CHI '11: Proceedings of the 2011 annual conference on Human factors in computing systems ACM. New York, NY, USA. ©2011. ISBN: 978-1-4503-0228-9.
Author Bios:
Jennifer Rode- She is an Assistant Professor at Drexel's School of Information, as well as a fellow in Digital Anthropology at University College London.
Summary:
  • Hypothesis: If people can better understand how to effectively use the different types of ethnography, then the field of HCI will become clearer and more advanced, as well as better suited to its target user audiences.
  • Methods: Jennifer Rode really had no methods in her paper. She argues that there are different types of ethnography and defines each of them, and she also argues that in each scenario a certain type of ethnography is optimal. The results section outlines these in more detail.
  • Results: She defines reflexivity as having four main characteristics: intervention is seen as a data-gathering opportunity; understanding how the method of data gathering impacts the data that is gathered; finding structural patterns in what was observed; and extending theory. She claims that reflexivity is crucial in ethnography. She defines three types of ethnography: realist, confessional, and impressionistic. A realist ethnographic approach has four keys: experiential authority (in which the ethnographer, over a period of time, becomes familiar enough with the new environment to move from inference to grounded, testable claims), its typical forms, the native's point of view (which the ethnographer hopes to study closely and represent accurately), and interpretive omnipotence (the author has the final say on what is written and recorded, and thus controls how the target group is perceived and represented). A confessional ethnographic approach openly presents the ethnographer's doubts and faults concerning the study and allows them to be answered. Lastly, the impressionistic ethnographic approach recalls certain events uniquely with dramatic detail (like a well-told story). She also argues that there are several ethnographic conventions: discussing rapport, participant-observation, and the use of theory.
  • Content: Jennifer wanted to bring to the attention of anyone in the field of HCI the ethnographical approaches, conventions, and uses of each in modern-day society. She uses past examples, related work, and paradigm studies to provide examples for each.
Discussion:
This was a relatively unusual paper to read. I thought it was interesting and felt that a few of her points were definitely correct, but overall I feel like devoting an entire research paper to something along these lines is overkill. A simple "remember what you're doing out there- everything has a consequence. Everything!" will do. In my own ethnography, I make a conscious effort to keep in mind how I am perceived, how I am gathering data, how I come across, etc. I got rather bored halfway through the paper because I felt her points were redundant or common sense. They are good to keep in mind for sure, but any GOOD ethnographer already knows to act in the ways she addressed. There's no real sure-fire way to tell if she achieved her goal or not, because it is unclear who ends up using these proposed techniques or whether they are actually referenced in the field. I'm sure she believes she achieved her goals, though.

Wednesday, October 12, 2011

Paper Reading #18- Biofeedback game design: using direct and indirect physiological control to enhance game interaction

Title: Biofeedback game design: using direct and indirect physiological control to enhance game interaction
Reference Information:
Lennart Nacke, Michael Kalyn, Calvin Lough, and Regan Mandryk. "Biofeedback game design: using direct and indirect physiological control to enhance game interaction". CHI '11: Proceedings of the 2011 annual conference on Human factors in computing systems. ACM, New York, NY, USA. ©2011.
Author Bios:
Lennart Nacke- He is an Assistant Professor for HCI and Game Science at the Faculty of Business and Information Technology at University of Ontario Institute of Technology (UOIT).
Michael Kalyn- A summer student working for Dr. Mandryk. He is a graduate in Computer Engineering and in his fourth year of Computer Science. His tasks this summer will be related to interfacing sensors and affective feedback.
Calvin Lough- Graduate student at the University of Saskatchewan.
Regan Mandryk- She's an Assistant Professor in the Department of Computer Science at the University of Saskatchewan. She obtained her B.Sc. in Mathematics from the University of Winnipeg in 1997, her M.Sc. in Kinesiology from Simon Fraser University in 2000, and her Ph.D. in Computing Science from Simon Fraser University in 2005.
Summary:
  • Hypothesis: If the authors can add either direct or indirect physiological sensor control for players to use while immersed in a video game, then players will enjoy the game much more with these sensors than with a traditional controller alone.
  • Methods: The authors created a game featuring many kinds of bonuses and power-ups that could be obtained if the player used their physiological inputs correctly (e.g., "Medusa's Gaze" to freeze enemies). Volunteers played through the game with the physiological features assigned to different controls in different conditions so the authors could test how they were used.
  • Results: The results from the user studies and surveys indicated that the users' "fun" was directly associated with which conditions the game was played under, that users preferred playing the game with the physiological sensors rather than without them, and that the specific physiological conditions placed in the game had no effect on their "fun". Interestingly, however, users preferred some physiological controls over others. For example, one participant wasn't even sure what the EKG sensor controlled in the second game. Participants also didn't like that biting their lip was a control mechanism, because it began to hurt after a few uses, but players did like that flexing their leg let them move faster or jump higher.
  • Content: Basically, the authors wanted to study a) whether physiological controls and sensors made gameplay more interactive and "fun" and b) which sensors worked best. They found that the sensors did make the gameplay experience more enjoyable and that some sensors worked better than others. The sensors they found to be optimal were the ones that directly mapped a part of the body to gameplay (e.g., flexing your leg to move faster or jump higher), because that made the most conceptual sense to the player (a toy sketch of this kind of mapping follows this list). The authors' surveys and questionnaires gave them some insight into possibly designing a whole game based solely on physiological inputs.
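As a toy illustration of the "directly mapped" controls the authors found worked best, here is a minimal sketch that scales a jump height from a normalized muscle-flex reading. The sensor range, gain, and numbers are invented for the example and are not values from the paper's sensors or game.

```python
def jump_height(flex_reading, resting=0.1, max_flex=0.9, base_jump=1.0, bonus=1.5):
    """Scale a jump height from a normalized muscle-flex sensor reading.

    Readings at or below 'resting' give the base jump; readings near
    'max_flex' add up to 'bonus'. All numbers are invented for illustration.
    """
    t = (flex_reading - resting) / (max_flex - resting)
    t = max(0.0, min(1.0, t))
    return base_jump + bonus * t

for r in [0.05, 0.3, 0.6, 0.95]:
    print(f"flex={r:.2f} -> jump height {jump_height(r):.2f}")
```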
Discussion:
I would have loved to have been a participant in this study. I think that some of these physiological sensors would definitely enhance gameplay (but I do agree with some of the participants that not all of the sensors were necessary. Some would definitely detract from the experience). This kind of study will more than likely breed the next generation of video games. I don't know how I feel about that. I like my games with the traditional controller and buttons and what not. The Kinect and WiiMote don't really interest me that much personally, but it is nonetheless an advancement in technology. I think that certain kinds of games could definitely have these kinds of sensors integrated phenomenally. In my opinion, the authors definitely achieved their goals because not only did they figure out if physiological sensors were "worth it", but they were accurately able to determine which kinds of sensors worked best for certain situations / gameplay environments.

Wednesday, October 5, 2011

Paper Reading #17- Privacy risks emerging from the adoption of innocuous wearable sensors in the mobile environment

Title: Privacy risks emerging from the adoption of innocuous wearable sensors in the mobile environment
Reference Information:
Andrew Raji, Animikh Ghosh, Santosh Kumar, and Mani Srivastava. "Privacy risks emerging from the adoption of innocuous wearable sensors in the mobile environment". CHI '11 Proceedings of the 2011 annual conference on Human factors in computing systems ACM New York, NY, USA ©2011. ISBN: 978-1-4503-0228-9.
Author Bios:
Andrew Raji- Works at the University of South Florida as a tenure-track Assistant Professor. He worked as a postdoctoral fellow from 2009-2010 and coordinated both the AutoSense and FieldStream projects.
Animikh Ghosh-  Joined SETLabs, Infosys after completing his M.S. in Comp Sc. from University of Memphis, Tennessee, USA. Now a research assistant under Dr. Santosh Kumar.
Santosh Kumar- I lead the Wireless Sensors and Mobile Ad Hoc Networks (WiSe MANet) Lab at the University of Memphis.
Mani Srivastava- Is a professor at UCLA in the engineering department. He received both the M.S. and Ph.D. degrees from the University of California, Berkeley, in 1987 and 1992, respectively.
Summary:
  • Hypothesis: If commercially available mobile and wearable sensing devices are going to continue to be worn, used, and distributed, then the associated privacy concerns should be addressed immediately in order to prevent unintended leaks of information about users through inferences drawn from the data these technologies collect.
  • Methods: The authors studied groups of people to get feedback about privacy concerns. These groups used a technology called AutoSense. One group had no personal ties to the data presented later in the study, while the other group was tested twice: once after 3 days of wearing the device, and again after all of the collected data, along with inferences made about each participant from it, was presented back to them, to see whether their level of concern changed. The authors organized their conclusions around 6 main concepts: measurements, behaviors, contexts, restrictions, abstractions, and privacy. Measurements are the raw data captured by a sensor. Behaviors are the actions of the data producer inferred from those measurements. Context is any information that can be used to infer the situation surrounding a behavior. Privacy threats are those that arise when the data producer's identity is tied to the information. Restrictions remove data from a dataset before it is shared in order to reduce privacy threats. Abstractions provide a middle ground between complete restriction and complete sharing, keeping the device useful (a small sketch of this idea follows this list).
  • Results: The study found that users are most concerned with the release of information about conversation and commuting patterns. Unsurprisingly, the group with no personal stake in the data had little concern about the second group's AutoSense data, since it wasn't their own personal information. During the data presentation sessions, the authors also found (not surprisingly) that participants' concern levels rose as more information was released together. For example, releasing just timestamps or just events is far less revealing than releasing a timestamp together with its associated event: with the pieces separated, you can't piece together information as easily as when the event and time sit right next to each other. The authors found that abstractions play a large role in determining the level of privacy threat.
  • Content: The authors conducted these studies to raise general awareness about the privacy issues associated with the general- and specific-purpose mobile devices most people carry around today. They note that it is one thing to share personal information with someone you are talking to directly, but another thing entirely to share it across a wide network ("the web") where the general public can access it.
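The restriction/abstraction distinction above lends itself to a small example. The sketch below shows one way a sensed record could be coarsened before sharing: drop or blur the timestamp, and keep only a place category rather than a location. The field names and granularity choices are mine, not the paper's.

```python
from datetime import datetime

def abstract_record(record, keep_time="hour", keep_place=False):
    """Coarsen a sensed record before sharing it, trading detail for privacy.

    Full restriction would drop a field entirely; abstraction keeps a blurred
    version (e.g., the hour instead of the exact timestamp). Field names and
    granularities here are illustrative only.
    """
    shared = {"behavior": record["behavior"]}
    ts = record["timestamp"]
    if keep_time == "exact":
        shared["time"] = ts.isoformat()
    elif keep_time == "hour":
        shared["time"] = ts.strftime("%Y-%m-%d %H:00")    # blur to the hour
    # keep_time == "none": omit time entirely (restriction)
    if keep_place:
        shared["place"] = record["place_category"]         # e.g. "work", not an address
    return shared

record = {"behavior": "conversation",
          "timestamp": datetime(2011, 3, 4, 14, 37, 12),
          "place_category": "work"}
print(abstract_record(record))                      # blurred time, no place
print(abstract_record(record, keep_time="none"))    # behavior only
```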
Discussion:
This was an interesting read. It made me think about what information I have released to the general public in the past, and I will definitely be a little more cautious when making decisions in the future. The authors had access to a LOT of information about the participants just because they wore a sensor around for 3 days, and as I learned, the inferences that can be made from those measurements are very strong. These issues should be addressed more widely when companies create such technologies, in my opinion. Based upon the results of their study, I believe the authors definitely achieved what they set out to do: they are trying to raise awareness about the things we "leak" unintentionally to the "public", much like when you visit sites online and "save your password" for a site.

Paper Reading #16- Classroom-based assistive technology: collective use of interactive visual schedules by students with autism

Title: Classroom-based assistive technology: collective use of interactive visual schedules by students with autism
Reference Information:
Meg Cramer, Sen Hirano, Monica Tentori, Michael Yeganyan, and Gillian Hayes. "Classroom-based assistive technology: collective use of interactive visual schedules by students with autism". CHI '11 Proceedings of the 2011 annual conference on Human factors in computing systems ACM New York, NY, USA ©2011. ISBN: 978-1-4503-0228-9.
Author Bios:
Meg Cramer- A graduate student at UC Irvine in the School of Information and Computer Sciences. Her advisor is Gillian Hayes.
Sen Hirano- He is a first-year PhD student in the department of Informatics at the School of Information and Computer Science at the University of California, Irvine. His advisor is Dr. Gillian Hayes.
Monica Tentori- I am an assistant professor in computer science at UABC, Ensenada, México and a post-doctoral scholar at University of California, Irvine (UCI).
Michael Yeganyan-UC Irvine researcher for the Informatics STAR Research Group. Currently developing vSked, an assistive technology for children with autism, to be used in classrooms for child development.
Gillian Hayes- I am an Assistant Professor in Informatics in the School of Information and Computer Sciences and in the Department of Education at UC Irvine. My research interests are in human-computer interaction, ubiquitous computing,assistive and educational technologies and medical informatics.
Summary:
  • Hypothesis: If vSked is given a chance and becomes widely implemented, then it can promote student independence, reduce the quantity of educator-initiated prompts, encourage consistency and predictability, reduce the time required to transition from one activity to another, and reduce the coordination required in the classroom.
  • Methods: The authors created a visual schedule on vSked for students, using symbols or pictures to represent activities in the order in which they are scheduled for the day. For students with speech difficulties, communication boards let them interact and participate in activities via voice or via the functionality on the vSked device. A token-based reward system is also built into vSked to encourage students to stay on task, keep participating, and stay involved: each positive behavior is recorded by vSked and tokens are added to the student's record, and these tokens add up to actual rewards that can be earned. The vSked system consists of a large touch-screen monitor at the front of the class paired with miniature touch-screen PCs for each of the students, with all of the information in the system connected. To evaluate the effectiveness of vSked in the initial implementation stages, the authors conducted experiments with sample sets of students.
  • Results: The studies, conducted on both the students and the teachers using the system, covered a 2-week period before deployment and a 1-week period during deployment, across the spring and summer semesters. The results showed that students were more motivated to respond to questions, to progress to see what happens after each question is answered, and to pay attention to each task, focusing on answering each specific question. The students also preferred using vSked because it immediately rewards correct answers with firework graphics; there is no such capability in a simple pen-and-paper system. The authors also found that vSked allowed the students to basically "run their own day": the token system, each student's schedule, and task transitions were now in the hands of vSked and of each individual student, so teachers didn't have to spend as much time on bookkeeping and could focus on other things. When rewards were earned, fireworks would display on the large screen at the front of the class for everyone to see, which sometimes resulted in cheering from all the students (social supportiveness that is apparently rare among students with autism). Students were also now aware of how the other students in the class were doing, and they were often found sharing tablets or looking at each other's tablets to see how a task was going.
  • Content: The authors of vSked wanted to create a better, more engaging learning environment for students in special education, so they envisioned a technology that integrates the many individual supports of a special education classroom into one system. Their implementation, a large screen centrally located in the classroom connected to an individual PC for each student, allows not only individuality and independence for each student to stay on top of things, but also lets each student see where they are relative to the whole class and share experiences and skills with one another in ways not seen prior to the vSked system.
Discussion:
I really liked this idea and how successful vSked was. Even though there were so many recorded ways in which vSked was successful, I believe there are still many unexplored avenues in which vSked affects the students in social and psychological ways that were perhaps unintended (in positive ways) when vSked was created. I believe the authors definitely met their goals and possibly even surpassed them when they conducted their studies and analyzed the feedback from the trial classrooms. I believe the authors were able to connect with the students more effectively with their technology than with traditional pen-and-paper systems; it seemed to intrigue the students more than usual for most activities. I also like how vSked lets the students develop skills such as social interaction while maintaining their intelligence and independence.

Monday, October 3, 2011

Paper Reading #15-Madgets: actuating widgets on interactive tabletops

Title: Madgets: actuating widgets on interactive tabletops
Reference Information:
Malte Weiss, Florian Schwarz, Simon Jakubowski, and Jan Borchers. "Madgets: actuating widgets on interactive tabletops". UIST '10 Proceedings of the 23rd annual ACM symposium on User interface software and technology. ACM, New York, NY, USA. ©2010. ISBN: 978-1-4503-0271-5.
Author Bios:
Malte Weiss- He is a PhD student at the Media Computing Group of RWTH Aachen University.
Florian Schwarz- Student assistant in the Media Computing Group of RWTH Aachen University.
Simon Jakubowski- Student assistant in the Media Computing Group of RWTH Aachen University.
Jan Borchers- Jan Borchers is full professor of computer science and head of the Media Computing Group at RWTH Aachen University. With his research group, he explores the field of human-computer interaction, with a particular interest in new post-desktop user interfaces for smart environments, ubiquitous computing, interactive exhibits, and time-based media such as audio and video.
Summary:
  • Hypothesis: If the authors' algorithm for recognition and manipulation can decompose widgets into rigid bodies that can be actuated independently, then this actuation concept prepares the ground for a new class of actuated tabletop tangibles: magnetic widgets, or Madgets.
  • Methods: The authors' actuation algorithm drives electromagnets built into the tabletop to manipulate permanent magnets attached to the widgets. The actuation directions the algorithm handles are tangential and normal (x-y and z, respectively). The movement algorithm takes into account the "physical properties" of each interactive widget on the surface, and since it operates in real time, constant update functions need to be called (refreshing and updating the positions of widgets being interacted with, for instance). The authors also noticed that after about a minute an individual electromagnet gets hot, so if the temperature of a single magnet ever exceeds a certain threshold, it gets to "take a break" while another one is substituted in to do its work (a small sketch of this idea follows this list).
  • Results: While there were no studies conducted, trial runs, participant surveys, or anything of the sort, the authors pioneered ways to incorporate feedback mechanisms to ensure the proper use of the Madgets technology. Whenever a user drags an object, the Madgets board provides vibration feedback. Whenever a user turns a knob, switch, or adjusts a level for any widget, the Madget board provides resistance feedback (say if you tried to turn the volume past 100%). Whenever a user pushes a button down (seemingly), the area will provide vibration feedback.
  • Content: The authors set out to create a new technology by building on an existing one. They saw that interactive surfaces could be improved upon and so they took advantage of it. By providing details about the proper construction and implementation of their Madgets board, the authors showed what is possible for "upgraded widget technology".
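The overheating workaround described in the Methods bullet (let a hot electromagnet rest and hand its job to a neighbor) is easy to sketch. The temperature threshold and the "pick the coolest substitute" rule below are my own invented stand-ins for whatever scheduling the authors actually use.

```python
def choose_magnet(temperatures, currently_active, max_temp=60.0):
    """Pick which electromagnet should actuate next, resting any that run hot.

    'temperatures' maps magnet ids to their current temperature in Celsius.
    If the active magnet exceeds 'max_temp', hand the work to the coolest
    alternative. Threshold and values are illustrative only.
    """
    if temperatures[currently_active] <= max_temp:
        return currently_active
    # Active magnet needs a break: substitute the coolest alternative.
    return min((m for m in temperatures if m != currently_active),
               key=lambda m: temperatures[m])

temps = {"m1": 72.0, "m2": 35.0, "m3": 48.0}
print(choose_magnet(temps, "m1"))   # m1 is too hot, so m2 takes over
```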
Discussion:
I think this is kind of neat, but as with past papers dealing with similar systems that need a specialized board, it could be cumbersome to construct and to have around taking up space. I did like the unique interactions that this board lets you make with general-purpose widgets, however, and the thought that went into the underlying algorithms to make the board work smoothly was impressive. I just would have never thought that magnets would overheat or markers would go out of bounds, etc., so I give these guys some respect for that. Again, I don't see myself using this kind of technology, but there could be some fields where it could be used and expanded upon. The authors, in my opinion, definitely achieved their goals by creating this board and showcasing it to the public via their publication. I believe if they had tested it outside of their working group, they would have a more solid foundation for judging whether their product is satisfactory on a user-satisfaction scale.

Paper Reading #14- TeslaTouch: electrovibration for touch surfaces

Title: TeslaTouch: electrovibration for touch surfaces
Reference Information:
Olivier Bau, Ivan Poupyrev, Ali Israr, and Chris Harrison. "TeslaTouch: electrovibration for touch surfaces". UIST '10 Proceedings of the 23rd annual ACM symposium on User interface software and technology. ACM, New York, NY, USA. ©2010. ISBN: 978-1-4503-0271-5.
Author Bios:
Olivier Bau- He is currently a PostDoctoral Research Scientist at Disney Research in Pittsburgh in the Interaction Design group with Ivan Poupyrev. He received his PhD in Computer Science (HCI) at INRIA Saclay, working within the In|Situ| team with Wendy Mackay.
Ivan Poupyrev- He is currently a Senior Research Scientist at Disney Research Pittsburgh, a unit of Walt Disney Imagineering. There he directs an interaction technology group which focuses on inventing and exploring new interactive technologies for future entertainment and digital lifestyles
Ali Israr- He is currently working with the Interaction Design @ Disney group in Disney Research, The Walt Disney Company, USA.
Chris Harrison- He's a fifth-year Ph.D. student in the Human-Computer Interaction Institute at Carnegie Mellon University, advised by Scott Hudson. He's also a Microsoft Research Ph.D. Fellow and editor-in-chief of XRDS, ACM's flagship magazine for students.
Summary:
  • Hypothesis: If the authors can integrate electrovibration into an interactive surface, then many features of tactile feedback will be unlocked for the user to experience with TeslaTouch.
  • Methods: To make the transparent electrode work, a basic driving circuit induces the periodic electric signal used for the interactions. No significant current passes through the user (who is assumed to be grounded); only a minimal amount does, at a level considered safe for humans. TeslaTouch is essentially an "add-on" to existing touchscreen and interactive-surface devices. The authors conducted experiments with trial users to obtain initial feedback on their first implementation of TeslaTouch.
  • Results: The studies conducted by the authors indicated that low-frequency interactions were perceived as "rougher" relative to higher-frequency ones. The effect of amplitude depended on frequency: high frequency combined with high amplitude felt "smoother", while high amplitude at low frequency was reported as "stickier" (a small sketch of these two parameters follows this list). The authors analyzed the results to determine the JNDs (just noticeable differences) between interactions and to find the borderline between pleasurable and discomforting levels of stimulation.
  • Content: The authors set out to create a new way of interacting with surfaces that would reap more benefits than traditional mechanical vibrotactile systems. They created "electrovibration" feedback that differs based on the interaction type (dragging a file, erasing part of a picture, etc.). After testing, the authors worked out what levels of frequency and amplitude were appropriate for each interaction, and at what levels the sensation remained pleasant to the touch.
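To make the frequency/amplitude discussion concrete, here is a small sketch that generates the kind of periodic drive signal electrovibration relies on, with frequency and amplitude as the two knobs the study varied. The sine shape, duration, and numbers are placeholders for illustration, not the authors' actual drive parameters.

```python
import math

def drive_signal(frequency_hz, amplitude_v, duration_s=0.01, sample_rate=44100):
    """Generate a sinusoidal drive signal for an electrovibration surface.

    Lower frequencies were reported as feeling 'rougher', higher ones
    'smoother'; amplitude scales the strength of the sensation. The sine
    shape, duration, and sample rate here are placeholders for illustration.
    """
    n = int(duration_s * sample_rate)
    return [amplitude_v * math.sin(2 * math.pi * frequency_hz * t / sample_rate)
            for t in range(n)]

rough = drive_signal(80, 100.0)      # low frequency: perceived as rougher
smooth = drive_signal(400, 100.0)    # high frequency: perceived as smoother
print(len(rough), max(rough))
```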
Discussion:
I like the idea of this technology, but I kind of got a little bored with the paper because this feels like the 5th paper in a row we had to report on multitouch surface devices. It is a cool idea, but I consider this technology an "add-on" to existing technologies if you want more options or different feedback mechanisms. What I got out of this paper was that TeslaTouch on an interactive surface is similar to playing Nintendo64 with a Rumble Pack. I believe the authors achieved their goals, but still (admittedly) had room to improve and could apply this technology to new applications for future projects. This is expandable for sure, and can definitely be used primarily instead of secondarily (from what I understood) on interactive surface devices in the future.