Reference Information:
Greg Little, Lydia Chilton, Max Goldman, and Robert Miller. "TurKit: Human Computation Algorithms on Mechanical Turk." In UIST '10: Proceedings of the 23rd Annual ACM Symposium on User Interface Software and Technology. ACM, New York, NY, USA, 2010. ISBN 978-1-4503-0271-5.
Author Bios:
Greg Little- a graduate student in MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL), working in the User Interface Design Group and the lead author of the TurKit paper.
Lydia Chilton- a CS graduate student at the University of Washington. For the 2010 academic year, she was an intern at Microsoft Research Asia in Beijing. From 2002 to 2009 she was at MIT, where she earned degrees in Economics ('06) and EECS ('07) and an EECS M.Eng. ('09) advised by Rob Miller.
Max Goldman- affiliated with the Massachusetts Institute of Technology (MIT) in the Department of Electrical Engineering and Computer Science (EECS), working in the Computer Science and Artificial Intelligence Laboratory (CSAIL) as part of the User Interface Design (UID) Group.
Robert Miller- Rob is an associate professor in the EECS department at MIT, and leader of the User Interface Design Group in the Computer Science and Artificial Intelligence Lab.
Summary:
- Hypothesis: "As we explore the growing potential of human computation, new algorithms and workflows will need to be created and tested. TurKit makes it easy to prototype new human computation algorithms."
- Methods: The authors considered a set of 20 TurKit experiments run over the past year, including iterative writing, blurry text recognition, website clustering, brainstorming, and photo sorting. For iterative writing, one turker writes a paragraph toward a specific goal; the process then shows that paragraph to another worker and asks them to improve it, and it can also have workers vote between iterations on whether the "improved" paragraph should be kept. For blurry text recognition, the process deciphers text that is nearly unreadable: one turker makes an initial guess at the words, and successive turkers refine that guess based on context, words they can recognize, and the previous guesses. For the other three applications (brainstorming, website clustering, and photo sorting), voting tasks ask workers which options are best. (A rough sketch of the improve-and-vote loop appears after this summary.)
- Results: The authors actually found TurKit to be more successful than they expected. By this, I mean they found several unintended side benefits, including between-crash modification of scripts, an implementation-friendly programming model, and retroactive print-line debugging. Issues that the authors and some sample users ran into include knowing when to wrap a function in a "once" call, knowing which parts of a script can safely be modified between runs, and knowing how to use TurKit's parallel features properly. The authors found that the crash-and-rerun implementation trades efficiency for a gain in programming usability, and that long-running scripts become increasingly expensive to rerun because every step is re-executed each time the program is "crashed" (a sketch of this idea appears after the commentary below). Overall, they found the first launch of TurKit to be successful.
- Contents: In this paper, the authors sought to present their system, which they named TurKit, to enable new human computation possibilities at reasonable cost and efficiency. They tested it in practical situations that might arise in common use of the toolkit, such as blurry text recognition, voting, polling, and sorting. After each experiment was conducted, results were gathered. The overall goal of the work was to allow greater flexibility in human computation methods.
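To make the iterative-writing pattern concrete, here is a minimal sketch in Python of the improve-then-vote loop described in the methods summary above. This is not the authors' actual TurKit code (TurKit scripts are written in JavaScript); post_improvement_hit and post_vote_hit are hypothetical stand-ins for calls that would post tasks to Mechanical Turk and block until workers respond, stubbed out here so the example runs on its own.

import random

def post_improvement_hit(goal, current_text):
    # Stand-in for posting a HIT that asks one worker to write or improve
    # a paragraph toward the goal, then waiting for the worker's answer.
    return (current_text + " ...more detail about " + goal).strip()

def post_vote_hit(question, option_a, option_b):
    # Stand-in for posting a HIT that asks several workers to vote and
    # returns the majority choice; here the vote is simulated randomly.
    return random.choice([option_a, option_b])

def iterative_writing(goal, iterations=6):
    # One worker writes the initial paragraph about the goal.
    text = post_improvement_hit(goal, "")
    for _ in range(iterations - 1):
        # A new worker tries to improve the current draft.
        revised = post_improvement_hit(goal, text)
        # Voters decide whether the revision replaces the current draft.
        text = post_vote_hit("Which paragraph is better?", text, revised)
    return text

if __name__ == "__main__":
    print(iterative_writing("a tourist's first impression of Boston"))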
I thought this paper was really interesting. What interested me the most was the ability to poll humans and sort information based on their feedback while essentially disregarding how long it takes to receive that feedback, thanks to the crash-and-rerun style of implementation. I believe the authors achieved the goals they initially set forth. The program seems to work well in the areas that were tested (namely blurry text recognition, voting, polling, and iterative writing).

This kind of software allows easy collaboration on files, writing, code, and so on among members of a team who might not all be in the same place at once. For example, if you wanted to submit a product idea to a website, you could send it through TurKit, have other team members vote on it, and post it once everyone agrees on the idea and wording. It also lends itself to "general public" use, such as putting a poll on a site asking "which of these is your favorite?" There is definitely room to improve on this software, as the authors have noted, but the springboard possibilities for this kind of algorithm look really promising.
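The crash-and-rerun idea mentioned in the results and in my comments above is easy to sketch. The following Python snippet is my own rough approximation, not TurKit's implementation (TurKit is JavaScript and records its state in a database): the whole script is re-executed from the top on every run, and once() replays any result already recorded to disk, so completed human steps are never repeated. The JSON file and the call-counter scheme are illustrative assumptions.

import json, os

DB_PATH = "script_state.json"  # hypothetical persistent record of completed steps
_db = json.load(open(DB_PATH)) if os.path.exists(DB_PATH) else []
_step = 0  # position in the deterministic sequence of once() calls

def once(action):
    # Run action() at most one time across all reruns of the script;
    # on later runs, replay the result recorded for this call position.
    # In real crash-and-rerun, action() would instead raise an exception
    # (a deliberate "crash") if the human result is not ready yet, and the
    # script would simply be rerun later.
    global _step
    if _step < len(_db):
        result = _db[_step]              # already done on a previous run
    else:
        result = action()                # first time: do the slow/human work
        _db.append(result)
        with open(DB_PATH, "w") as f:
            json.dump(_db, f)            # persist so a rerun can skip this step
    _step += 1
    return result

if __name__ == "__main__":
    # On the first run both prompts appear; if the script is rerun, the
    # recorded answers are replayed instead of asking again.
    first = once(lambda: input("Pretend you are worker 1, type an answer: "))
    second = once(lambda: input("Pretend you are worker 2, type an answer: "))
    print("Recorded answers:", first, second)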