LSA Visits the Chinese Room

A guided tour by Gerry Stahl

Presented to Tom Landauer's seminar on LSA, Spring 1997

One way of responding to the question of whether LSA does the same thing that people do is to adapt the answer that the prominent American philosopher John Searle gave to the question of the relation between minds, brains, and programs. Searle was responding to the claim of "strong AI", as articulated by Newell and Simon (1963), that minds are programs executing on brain hardware. Searle's controversial response appears in Searle (1980), "Minds, Brains and Programs", Behavioral and Brain Sciences, 3:417-424; it is reprinted in Readings in Cognitive Science and elsewhere.

Searle's argument centers on the difficult concept of "intentionality". For Searle, "intentionality is by definition that feature of certain mental states by which they are directed at or about objects and states of affairs in the world. Thus, beliefs, desires, and intentions are intentional states; undirected forms of anxiety and depression are not." Using this concept, we might postulate that when a person expresses a belief, they have an intentional content in mind that is nowhere present or even represented in LSA. This intentional content is the additional ingredient that we intuitively sense is missing from a definition of meaning restricted to the interconnections of linguistic tokens as captured by LSA. Searle tries to make this intuition vivid with his Chinese room scenario. I will try to adapt it to the LSA question. I think that even without fully understanding intentionality we will see that LSA does not understand in the sense that people do.

Let us distinguish "strong LSA" from "weak LSA." According to weak LSA, the principal value of the computer in the study of the mind is that it gives us a powerful tool: for example, it lets us formulate and test hypotheses in a precise fashion. But according to the strong interpretation of LSA as a cognitive theory, the computer is not merely a tool in the study of the mind; rather, the appropriately programmed computer really is a mind, in the sense that computers given the right LSA programs can literally be said to understand and to have other cognitive states. In strong LSA, because the programmed computer has cognitive states, the programs are not mere tools that enable us to test psychological explanations; rather, the programs are themselves the explanations.

The aim of an LSA program is to simulate the human ability to understand texts. It is characteristic of human beings' text-understanding capacity that they can answer questions about a text even though the information they give was never explicitly stated in it. When the computer is asked questions, it will print out answers of the sort that we would expect human beings to give. Partisans of strong LSA claim that in doing this the computer is not only simulating a human ability but also:

  1. That the computer can literally be said to understand the text and provide the answers, and
  2. That what the computer and its program do explains the human ability to understand the text and answer questions about it.

One way to test any theory of the mind is to ask oneself what it would be like if my mind actually worked on the principles that the theory says all minds work on. Let us apply this test to LSA with the following Gedankenexperiment. Suppose that I am locked in a Chinese room with large matrices of numbers and instructions for following the LSA algorithm. Occasionally, I receive a string of numbers. Following the LSA instructions, I count how many instances there are of each number in the input string. I use each distinct number as an index into a matrix to retrieve a vector of 300 decimal numbers, multiply each of the 300 decimals by certain other numbers I look up, and then add all the resultant vectors together. I then use the resultant vector to perform a calculation with each vector in a second matrix, choosing the index of the vector that yields the highest result. This index is used to select the string of numbers to send back out of my room.
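For readers who want the room's bookkeeping spelled out, here is a minimal sketch in Python of the procedure just described. It is an illustration only, assuming made-up stand-ins for the matrices and weights (the names term_vectors, term_weights, and response_vectors are hypothetical), not the actual LSA software discussed in the seminar:

```python
# A minimal sketch (not the seminar's actual LSA code) of the rule-following in
# the room: count the numbers, weight and sum their vectors, compare the result
# against a second matrix, and return the index of the best match.
import numpy as np
from collections import Counter

RNG = np.random.default_rng(0)
VOCAB_SIZE, DIM, NUM_RESPONSES = 1000, 300, 50

# Stand-ins for the "large matrices of numbers" in the room.
term_vectors = RNG.normal(size=(VOCAB_SIZE, DIM))         # one 300-dim vector per term index
term_weights = RNG.uniform(0.5, 2.0, size=VOCAB_SIZE)     # "certain other numbers I look up"
response_vectors = RNG.normal(size=(NUM_RESPONSES, DIM))  # the second matrix of candidate replies

def choose_response(input_indices):
    """Follow the room's instructions: count, weight, sum, compare, pick the best index."""
    counts = Counter(input_indices)                        # how many instances of each number
    query = np.zeros(DIM)
    for idx, count in counts.items():
        query += count * term_weights[idx] * term_vectors[idx]   # weighted vector sum
    # Compare against every vector in the second matrix (cosine similarity here,
    # an assumption about which "calculation" the instructions specify).
    sims = response_vectors @ query / (
        np.linalg.norm(response_vectors, axis=1) * np.linalg.norm(query) + 1e-12)
    return int(np.argmax(sims))                            # index of the reply string to hand out

# Example: an incoming "string of numbers" (encoded words), decoded only outside the room.
print(choose_response([17, 42, 42, 256, 3]))
```

Nothing in this procedure requires knowing what any of the numbers stand for, which is exactly the point of the thought experiment.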

Unknown to me, researchers outside my room have taken normal English sentences expressing questions and encoded them in a string of numbers. When I return a new string of numbers, these people decode it into an English sentence. When the researchers outside my room compare the English of my responses to the English responses of a control group of people who simply respond naturally to the sentences, they find that mine show just as high a level of understanding as the others, within the limits of experimental error and inter-rater reliability. They conclude that I have understood the text in the same way as other people and that my processing (which can be observed as the manipulation of symbols) must explain how other people (whose neural processing cannot be observed) understand the same texts.

But in fact, I have not understood a word of either the input or the output sentences. If the sentences were about the heart and the blood, I had no idea of that; I merely manipulated formal symbols. My inputs and outputs may be indistinguishable from those of a person responding to the English sentences, but I understood nothing. The LSA program cannot explain human understanding, since when I am running the program I understand nothing.

Well, then, what is it that people have when they answer English sentences that I did not have when I processed the LSA rules? The obvious answer is that the people know what the sentences mean while I haven’t the faintest idea what the numbers I am manipulating mean. So LSA does not contribute to a theory of meaning (semantics).

Now, you may argue that English words are arbitrary symbols, just like the numbers that encode them, and that the computer understands these numbers because it has been trained on a large corpus of text encoded in these numbers, just as people have learned English words by being trained on a large corpus of words. However, note that I was able to manipulate the LSA symbols without any understanding based on training: I simply looked up indices and carried out computations on numbers that had nothing to do with any content, however expressed. As in all AI programs, the attribution of meaning to the manipulated symbols is projected by programmers and other people interpreting the meaningless shifting of arbitrary symbols; the same goes for attributions of training, learning, and understanding.

So is Searle a dualist? Au contraire! He believes that only a brain (or some other physical object with similar abilities to cause intentionality) can have a mind. It is the people who think that mind is a program that can be dissociated from the physical computer on which it runs who are the dualists. For instance, someone who argued that understanding could be derived purely from an analysis of corpora of text and computational algorithms (all non-material entities) would be in danger of hypothesizing a mental realm of mind that is independent of (rather than emergent from) the physical world of brains and bodies and interactions with the physical world.
