Tuesday, August 30, 2011

On Computers

Title: "Minds, Brains, and Programs"
Reference Information: Searle, John R. (1980). "Minds, brains, and programs." Behavioral and Brain Sciences 3 (3): 417-457.
Author Bio: Professor Searle is currently a professor at UC Berkeley, teaching the Philosophy of Language and the Philosophy of Science for the fall semester of 2011. He has taught there for over 50 years and has published many influential papers during his tenure, including the publication currently being discussed. He has traveled all over the world giving invited lectures. He attended the University of Wisconsin and then Oxford University, where he received his Bachelor's, Master's, and Doctorate degrees in Philosophy.
Summary:

  • Hypothesis: A computer program does not contain the necessary properties for "understanding" (namely, intentionality).
  • Methods: In his thought experiment, Dr. Searle simulates a computer program by placing himself inside a room where he receives and replies to inputs written in Chinese symbols. Since he knows absolutely no Chinese himself, Dr. Searle accurately plays the role of a CPU inside a computer. He is given a set of rules for manipulating the Chinese symbols (the "computer program"), and he is also provided with filing cabinets to serve as "memory".
  • Results: Dr. Searle is able to construct an appropriate response to any input he receives.
  • Contents: Dr. Searle sets up a room to emulate a computer system. Chinese symbols act as the inputs and outputs to the system, passed in and out through a slot in the door. To sum up the experiment, Dr. Searle showed that proper outputs can be produced without any actual understanding of the data.
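The setup above maps naturally onto code: the rule book is just a lookup table, the operator mechanically matches inputs to outputs, and the filing cabinets are a log. Here is a minimal sketch of that idea (the specific symbols and rules are made-up stand-ins, not examples from the paper), showing how the replies can be correct while the "operator" understands nothing:

```python
# The "rule book": maps each incoming string of Chinese symbols to the
# prescribed outgoing string. The operator never interprets either side.
# (These particular phrases are hypothetical, not from Searle's paper.)
RULE_BOOK = {
    "你好吗": "我很好",          # "How are you?" -> "I am fine"
    "你叫什么名字": "我叫约翰",   # "What is your name?" -> "My name is John"
}

# The filing cabinets: a record of past exchanges, i.e. "memory".
FILING_CABINET = []


def operator(symbols: str) -> str:
    """Look the input up in the rule book and return the prescribed
    output, logging the exchange. No understanding is involved."""
    reply = RULE_BOOK.get(symbols, "请再说一遍")  # default: "please repeat"
    FILING_CABINET.append((symbols, reply))
    return reply


print(operator("你好吗"))  # a correct-looking reply, produced by rule-matching alone
```

From the outside, the room's replies look fluent; internally, everything reduces to pattern matching against the rule book, which is exactly the point of the thought experiment.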
Discussion: I believe this was a decent read, and I totally agree with Searle on this issue. No matter how "smart" a computer program might appear to be, or how much a user might think a program "understands" them or their input, a computer will ALWAYS be programmed by a human being. By this I mean that another human being will always encode a set of instructions telling the machine how to act once it receives certain input. The computer can receive all of the input in the world, but it will NEVER truly "understand" that input; it will just react to it and jump to the appropriate action. Through this methodology, I believe Searle definitely achieved what he set out to prove from the beginning: that programs cannot achieve "Strong AI", because they do not possess consciousness or intentionality. I really liked how he broke down how each part of a computer system was emulated by his experiment. For example, the article casts Searle himself as the CPU, because neither a CPU nor Dr. Searle has any idea what their inputs are (or what they mean), yet both know how to act accordingly once they receive them.
