Sunday, November 13, 2011

Paper Reading #32 - Taking advice from intelligent systems: the double-edged sword of explanations

Title:  Taking advice from intelligent systems: the double-edged sword of explanations.
Reference Information:
Kate Ehrlich, Susanna Kirk, John Patterson, Jamie Rasmussen, Steven Ross, and Daniel Gruen. "Taking advice from intelligent systems: the double-edged sword of explanations." In IUI '11: Proceedings of the 16th International Conference on Intelligent User Interfaces. ACM, New York, NY, USA, 2011. ISBN 978-1-4503-0419-1.
Author Bios:
Kate Ehrlich- Kate is a Senior Technical Staff Member in the Collaborative User Experience group at IBM Research, where she uses Social Network Analysis as a research and consulting tool to gain insights into patterns of collaboration in distributed teams.
Susanna Kirk- M.S. in Human Factors in Information Design, with a graduate certificate in Business Analytics. Coursework in user-centered design, prototyping, user research, advanced statistics, data mining, data management, and data visualization.
John Patterson- John Patterson is a Distinguished Engineer (DE) in the Collaborative User Experience Research Group.
Jamie Rasmussen- Jamie Rasmussen joined the Collaborative User Experience group in March 2007. He is working with John Patterson as part of a team that is exploring the notion of Collaborative Reasoning, the work a group of people does to collect, organize, and reason about information.
Steven Ross- Steve presently works in the area of Collaborative Reasoning, using semantic technology to help individuals within an organization think together more effectively, discover and benefit from existing organizational knowledge, avoid duplication of effort, and make better decisions in a more timely fashion.
Daniel Gruen- Dan is currently working on the Unified Activity Management project, which is exploring new ways to combine informal work with business processes, recasting collaboration technologies in terms of the meaningful business activities in which people are engaged.
Summary:
  • Hypothesis: Explanations (justifications) attached to an intelligent system's recommendations are a double-edged sword: they should improve users' performance when the recommendation is correct, but they risk leading users astray when the recommendation is incorrect.
  • Methods: The authors conducted a study of how users respond to recommendations from an intelligent system, varying both whether a recommendation was correct and whether it was accompanied by a justification. The participants were analysts engaged in network monitoring, and the authors used a software tool called NIMBLE to present the tasks and collect data for the study.
  • Results: Users performed slightly better with a correct recommendation than without one, and justifications benefited users when a correct recommendation was available. When no correct recommendation was available, neither suggestions nor justifications made a difference in performance; most of the analysts seemed to discard the recommendations and rely on their own inclinations. In the separate analysis of users' reactions, the authors found that users typically followed the recommendations given and that a recommendation strongly influenced the user's action.
  • Content: The authors designed the study to test the accuracy of recommendations and the relationship between that accuracy and the influence of those recommendations on users' actions. NIMBLE let them do that, and they were able to accurately capture users' interactions with the system. By continually comparing study results against baseline values, the authors could weigh the benefit of providing correct recommendations against the risk of harmful actions caused by incorrect ones (a rough sketch of that kind of comparison appears after this list).
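
To make that baseline comparison concrete, here is a minimal Python sketch of tallying user accuracy per experimental condition and comparing it against a no-recommendation baseline. This is not the authors' code; the paper does not describe NIMBLE's internals, and the condition names, trial records, and baseline value below are all invented for illustration.

    # Hypothetical sketch (not the authors' code): tally user accuracy by
    # experimental condition, in the spirit of the paper's baseline comparisons.
    # All trial records and the baseline value here are made up.
    from collections import defaultdict

    # Each trial: (recommendation_condition, explanation_shown, user_was_correct)
    trials = [
        ("correct", True, True),
        ("correct", False, True),
        ("incorrect", True, False),
        ("none", False, True),
        # ... real entries would come from the study logs
    ]

    totals = defaultdict(lambda: [0, 0])  # condition -> [correct_count, n]
    for condition, explained, user_correct in trials:
        key = (condition, "with explanation" if explained else "no explanation")
        totals[key][0] += int(user_correct)
        totals[key][1] += 1

    baseline = 0.5  # hypothetical accuracy with no recommendation at all
    for key, (hits, n) in sorted(totals.items()):
        accuracy = hits / n
        print(f"{key}: accuracy={accuracy:.2f} "
              f"(delta vs. baseline {accuracy - baseline:+.2f})")

Comparing each condition's delta against the baseline is what lets you see the double-edged sword in the data: correct recommendations with explanations should show a positive delta, while incorrect ones with explanations may show a negative one.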
Discussion:
I am a little skeptical about this study. It definitely has good intentions and a vision for future work, but there are too many uncontrolled variables in studies like these: the analysts could have relied on their own judgment the entire time and discarded every recommendation, a user could have been biased by an option on the screen, and so on. I also didn't understand some of the ways the authors interpreted their results. For example, they "divided" people into three groups: low, medium, and high? I wasn't sure what that jargon meant or what criteria were used to divide the users up. This kind of technology seems a bit limited for general-purpose use or release. The authors seemed happy about the findings and results of their studies, so I suppose they achieved their goals. I'm not too sure, honestly.
