  Thursday 17 May 2007 @ 00:49:00 #51
10526 broer
Useless one of the night.
pi_49467573
wyccie +1

Points in part 765
Erkel 1
ScottTracy 1
slaveloos 1
broer 1
wyccie 1
pi_49467761
quote:
The -test is a test devised by Alan in his 1950 article Computing Machinery and Intelligence to establish experimentally whether a machine exhibits human intelligence.

The test comes down to this: if a computer can fool a person into believing that it is a human (that is, it exhibits the same behaviour as a human), the computer must be intelligent. For such a test the conditions have to be arranged so that the test subject cannot see whom he is talking to, e.g. by conversing via a keyboard with someone in another room (the word 'chatting' did not yet exist in this sense in 's time).

What is
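The setup the quote describes is a protocol: the interrogator only ever sees typed replies and must judge from those alone. A minimal sketch of that blind-conversation condition; both reply functions are invented placeholders, not a real chatbot:

```python
import random

# Toy sketch of the blind conversation described above: the interrogator
# sees only the typed replies, never which party produced them.
def human_reply(prompt: str) -> str:
    return f"You asked: {prompt}"

def machine_reply(prompt: str) -> str:
    return f"You asked: {prompt}"  # imitates the human's style exactly

def blind_transcript(prompt: str) -> list:
    """Return the two replies in random order, hiding who wrote which."""
    replies = [human_reply(prompt), machine_reply(prompt)]
    random.shuffle(replies)
    return replies

# When the transcripts are indistinguishable, the interrogator can do no
# better than guessing -- which is the condition the test is built around.
print(blind_transcript("Are you a machine?"))
```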
  Thursday 17 May 2007 @ 00:58:10 #53
142210 slaveloos
blind drunk all the way to the doormat
pi_49467843
alan turing, the turing test
Morality is doing right no matter what you are told, religion is doing what you are told no matter what is right
all I wanted was a pepsi, just one pepsi and shit was given to me
pi_49467876
quote:
On Thursday 17 May 2007 00:58, slaveloos wrote the following:
alan turing, the turing test
Slaveloos +1


Points in part 765
Erkel 1
ScottTracy 1
slaveloos 2
broer 1
wyccie 1
  Thursday 17 May 2007 @ 01:11:25 #55
152047 Yurithemaster
Winter is coming
  Thursday 17 May 2007 @ 01:12:56 #56
142210 slaveloos
blind drunk all the way to the doormat
pi_49468315
quote:
The Chinese Room argument is a thought experiment designed by John The Chinese Room argument is a thought experiment designed by John (1980 [1]) as a counterargument to claims made by supporters of strong artificial intelligence (see also functionalism).

laid out the Chinese Room argument in his paper "Minds, brains and programs," published in 1980. Ever since, it has been a mainstay of the debate over the possibility of what called strong artificial intelligence. Supporters of strong artificial intelligence believe that an appropriately programmed computer isn't simply a simulation or model of a mind; it actually counts as a mind. That is, it understands, has cognitive states, and can think. argument against (or more precisely, his thought experiment intended to undermine) this position, the Chinese Room argument, goes as follows:

Suppose that, many years from now, we have constructed a computer that behaves as if it understands Chinese. In other words, the computer takes Chinese characters as input and, following a set of rules (as all computers can be described as doing), correlates them with other Chinese characters, which it presents as output. Suppose that this computer performs this task so convincingly that it easily passes the Turing test. In other words, it convinces a human Chinese speaker that the program is itself a human Chinese speaker. All the questions the human asks are responded to appropriately, such that the Chinese speaker is convinced that he or she is talking to another Chinese-speaking human. The conclusion proponents of strong AI would like to draw is that the computer understands Chinese, just as the person does.

Now, asks us to suppose that he is sitting inside the computer. In other words, he is in a small room in which he receives Chinese characters, consults a rule book, and returns the Chinese characters that the rules dictate. Searle notes that he doesn't, of course, understand a word of Chinese. Furthermore, he argues that his lack of goes to show that computers don't understand Chinese either, because they are in the same situation as he is. They are mindless manipulators of symbols, just as he is — and they don't understand what they're 'saying', just as he doesn't.
The Chinese Room argument is a thought experiment designed by John (1980 [1]) as a counterargument to claims made by supporters of strong artificial intelligence (see also functionalism).

laid out the Chinese Room argument in his paper "Minds, brains and programs," published in 1980. Ever since, it has been a mainstay of the debate over the possibility of what Searle called strong artificial intelligence. Supporters of strong artificial intelligence believe that an appropriately programmed computer isn't simply a simulation or model of a mind; it actually counts as a mind. That is, it understands, has cognitive states, and can think. Searle's argument against (or more precisely, his thought experiment intended to undermine) this position, the Chinese Room argument, goes as follows:

Suppose that, many years from now, we have constructed a computer that behaves as if it understands Chinese. In other words, the computer takes Chinese characters as input and, following a set of rules (as all computers can be described as doing), correlates them with other Chinese characters, which it presents as output. Suppose that this computer performs this task so convincingly that it easily passes the Turing test. In other words, it convinces a human Chinese speaker that the program is itself a human Chinese speaker. All the questions the human asks are responded to appropriately, such that the Chinese speaker is convinced that he or she is talking to another Chinese-speaking human. The conclusion proponents of strong AI would like to draw is that the computer understands Chinese, just as the person does.

Now, asks us to suppose that he is sitting inside the computer. In other words, he is in a small room in which he receives Chinese characters, consults a rule book, and returns the Chinese characters that the rules dictate. Searle notes that he doesn't, of course, understand a word of Chinese. Furthermore, he argues that his lack of understanding goes to show that computers don't understand Chinese either, because they are in the same situation as he is. They are mindless manipulators of symbols, just as he is — and they don't understand what they're 'saying', just as he doesn't.
(1980 [1]) as a counterargument to claims made by supporters of strong artificial intelligence (see also functionalism).

laid out the Chinese Room argument in his paper "Minds, brains and programs," published in 1980. Ever since, it has been a mainstay of the debate over the possibility of what Searle called strong artificial intelligence. Supporters of strong artificial intelligence believe that an appropriately programmed computer isn't simply a simulation or model of a mind; it actually counts as a mind. That is, it understands, has cognitive states, and can think. argument against (or more precisely, his thought experiment intended to undermine) this position, the Chinese Room argument, goes as follows:

Suppose that, many years from now, we have constructed a computer that behaves as if it understands Chinese. In other words, the computer takes Chinese characters as input and, following a set of rules (as all computers can be described as doing), correlates them with other Chinese characters, which it presents as output. Suppose that this computer performs this task so convincingly that it easily passes the Turing test. In other words, it convinces a human Chinese speaker that the program is itself a human Chinese speaker. All the questions the human asks are responded to appropriately, such that the Chinese speaker is convinced that he or she is talking to another Chinese-speaking human. The conclusion proponents of strong AI would like to draw is that the computer understands Chinese, just as the person does.

Now, asks us to suppose that he is sitting inside the computer. In other words, he is in a small room in which he receives Chinese characters, consults a rule book, and returns the Chinese characters that the rules dictate. notes that he doesn't, of course, understand a word of Chinese. Furthermore, he argues that his lack of understanding goes to show that computers don't understand Chinese either, because they are in the same situation as he is. They are mindless manipulators of symbols, just as he is — and they don't understand what they're 'saying', just as he doesn't.
who is
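Read as an algorithm, the room in the quoted passage is just a lookup: symbols in, rule book consulted, symbols out, with no understanding anywhere in the loop. A minimal sketch, with a made-up two-entry rule book standing in for the real thing:

```python
# The operator in the room blindly maps input symbols to output symbols
# by consulting a rule book. The entries below are invented examples.
RULE_BOOK = {
    "你好": "你好！",        # "Hello" -> "Hello!"
    "你好吗？": "我很好。",  # "How are you?" -> "I am fine."
}

def operator(symbols: str) -> str:
    """Return whatever the rule book dictates, understanding nothing."""
    return RULE_BOOK.get(symbols, "请再说一遍。")  # fallback: "please say that again"

# The room answers correctly; the operator has no idea what was said.
print(operator("你好"))
```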
  Thursday 17 May 2007 @ 01:13:58 #57
152047 Yurithemaster
Winter is coming
  Thursday 17 May 2007 @ 01:14:11 #58
75556 erkel
back from Dagestan
pi_49468353
Damn, don't feel like reading all of that. I'm going to sleep
You know
  Thursday 17 May 2007 @ 01:16:48 #59
142210 slaveloos
blind drunk all the way to the doormat
pi_49468442
shorter: who came up with the Chinese Room argument against strong AI?
  Thursday 17 May 2007 @ 01:18:58 #60
152047 Yurithemaster
Winter is coming
pi_49468548
It's just in there three times.
  Thursday 17 May 2007 @ 01:21:05 #62
152047 Yurithemaster
Winter is coming
  Thursday 17 May 2007 @ 01:22:59 #63
142210 slaveloos
blind drunk all the way to the doormat
pi_49468648
quote:
On Thursday 17 May 2007 01:20, wyccie wrote the following:
It's just in there three times.
sorry, find and replace apparently doesn't work so well
  Thursday 17 May 2007 @ 01:23:24 #64
142210 slaveloos
blind drunk all the way to the doormat
pi_49468660
never mind, then....
pi_49468706
quote:
On Thursday 17 May 2007 01:21, Yurithemaster wrote the following:

I just saw that too
Oh, so the answer too. I just meant the text.
  Thursday 17 May 2007 @ 01:29:22 #66
152047 Yurithemaster
Winter is coming
pi_49468860
quote:
On Thursday 17 May 2007 01:24, wyccie wrote the following:

[..]

Oh, so the answer too. I just meant the text.
The answer is in there too. Maybe it's better if we take a new question, since none of those present is going to guess it anyway
pi_49468935
But apart from that, it's not at all obvious that you're drunk, slaveloos.
Don't cry because it's over. Smile because it happened.
Love is a bitch plus a force of habit
pi_49468994
quote:
On Thursday 17 May 2007 01:29, Yurithemaster wrote the following:

The answer is in there too. Maybe it's better if we take a new question, since none of those present is going to guess it anyway
I think Yuri has had something to drink too.
pi_49469105
I'm not very awake anymore either, good night
  Thursday 17 May 2007 @ 01:55:26 #70
152047 Yurithemaster
Winter is coming
pi_49469474
quote:
On Thursday 17 May 2007 01:33, wyccie wrote the following:

[..]

I think Yuri has had something to drink too.
jeez, that's harsh

And I haven't even had anything to drink
  Thursday 17 May 2007 @ 01:58:49 #71
10526 broer
Useless one of the night.
pi_49469525
You should have, Yuri. It's Friday. Sort of.
  Thursday 17 May 2007 @ 01:59:57 #72
152047 Yurithemaster
Winter is coming
pi_49469547
quote:
On Thursday 17 May 2007 01:58, broer wrote the following:
You should have, Yuri. It's Friday. Sort of.
I could still go and see whether there's anything to drink to be had
  Thursday 17 May 2007 @ 02:06:45 #73
10526 broer
Useless one of the night.
pi_49469653
A new question isn't going to happen anymore, I take it?

Maybe that's just as well. Then I can mentally prepare myself for sleep.
pi_49471115
I'm not putting anything down here!
pi_49471177
Now , asks us to suppose that he is sitting inside the computer. In other words, he is in a small room in which he receives Chinese characters, consults a rule book, and returns the Chinese characters that the rules dictate. Searle notes that he doesn't, of course, understand a word of


Answer: J. Searle....