The Chinese Room argument is a thought experiment designed by John Searle (1980 [1]) as a counterargument to claims made by supporters of strong artificial intelligence (see also functionalism).
Searle laid out the Chinese Room argument in his paper "Minds, brains and programs," published in 1980. Ever since, it has been a mainstay of the debate over the possibility of what Searle called strong artificial intelligence. Supporters of strong artificial intelligence believe that an appropriately programmed computer isn't simply a simulation or model of a mind; it actually counts as a mind. That is, it understands, has cognitive states, and can think. Searle's argument against (or more precisely, his thought experiment intended to undermine) this position, the Chinese Room argument, goes as follows:
Suppose that, many years from now, we have constructed a computer that behaves as if it understands Chinese. In other words, the computer takes Chinese characters as input and, following a set of rules (as all computers can be described as doing), correlates them with other Chinese characters, which it presents as output. Suppose that this computer performs this task so convincingly that it easily passes the Turing test. In other words, it convinces a human Chinese speaker that the program is itself a human Chinese speaker. All the questions the human asks are responded to appropriately, such that the Chinese speaker is convinced that he or she is talking to another Chinese-speaking human. The conclusion proponents of strong AI would like to draw is that the computer understands Chinese, just as the person does.
Now, Searle asks us to suppose that he is sitting inside the computer. In other words, he is in a small room in which he receives Chinese characters, consults a rule book, and returns the Chinese characters that the rules dictate. Searle notes that he doesn't, of course, understand a word of Chinese. Furthermore, he argues that his lack of understanding goes to show that computers don't understand Chinese either, because they are in the same situation as he is. They are mindless manipulators of symbols, just as he is — and they don't understand what they're 'saying', just as he doesn't.
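To make the purely syntactic character of the room concrete, here is a minimal sketch in Python. It assumes, purely for illustration, that the room's rule book can be reduced to a small lookup table of canned question-and-answer pairs; Searle's scenario imagines a vastly richer set of rules, but the point is the same: the procedure maps input symbols to output symbols without any grasp of what they mean.

```python
# Hypothetical illustration: the "room" as a rule book that pairs input
# strings of Chinese characters with output strings. The entries below
# are invented examples, not part of Searle's paper.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",      # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗？": "当然会。",      # "Do you speak Chinese?" -> "Of course."
}

def room(symbols: str) -> str:
    """Return whatever characters the rules dictate for the given input."""
    # A default reply keeps the exchange going when no rule matches.
    return RULE_BOOK.get(symbols, "请再说一遍。")  # "Please say that again."

if __name__ == "__main__":
    for question in ("你好吗？", "你会说中文吗？"):
        print(question, "->", room(question))
```

Whether this lookup is carried out by a program or by a person in a room consulting a printed rule book, the steps are identical, which is exactly the parallel Searle exploits: producing the right output symbols is no evidence that anything in the system understands them.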