Talk:Chinese room/Archive 2


Teacher, look! Johnny is cheating on the (Turing) test!

I always thought that Turing anticipated many arguments against AI, including Searle-like arguments, when he created the test. The questioner may ask anything to determine whether the box is thinking, understanding, has Buddha nature, or whatever else they feel separates human thought from a machine's. One rule: no peeking.

There's the rub. Searle looks inside and says, "Hey, wait a minute! Clearly nothing in here understands Chinese, because there are only syntactic rules etc. So there is no understanding, and therefore there can't be strong AI."

But he misses the point of the Turing Test. He doesn't get to look inside to determine if there is "understanding"; he must determine it from the outside.

Can Searle determine, from outside the box, that the machine is not really "understanding" Chinese? By the description of the test, he cannot. So the Turing Test has been passed and the machine is "thinking". The difference between syntax and semantics, the necessity of some component within the box that "understands", and the distinction between strong and weak AI are either not well defined or red herrings.

Does this make sense or am I missing something?

Gwilson 15:53, 21 October 2007 (UTC)

You're right, Turing did anticipate Searle's argument. He called it "the argument from consciousness". He didn't answer it, he just dismissed it as being tangential to his main question "can machines think?" He wrote: "I do not wish to give the impression that I think there is no mystery about consciousness ... [b]ut I do not think these mysteries necessarily need to be solved before we can answer the question [of whether machines can think]." (see philosophy of artificial intelligence). When it comes to consciousness, Turing recommended we follow a "polite convention" that if it acts like it thinks, we'll just go ahead and say "it thinks". But he's aware this is only a convention and that he hasn't solved the hard problem of consciousness.
Turing never intended for his test to be able to determine if a machine has consciousness or mental states like "understanding". Part of Searle's point here is to show that it fails to do so. Turing was too smart to fall into this trap. Searle's real targets aren't careful thinkers like Turing, but others who are given to loose talk like "machines with minds" (John Haugeland) or who think the computational theory of mind solves the hard problem of consciousness, like Jerry Fodor, Steven Pinker or Daniel Dennett. ---- CharlesGillingham 17:03, 22 October 2007 (UTC)
It's a shame if Charles's statement above can't be sourced anywhere, because it is eminently interesting and really well suited to a spot on the article (if it can be sourced somehow). If not, good job Charles on this interesting bit of thought.
Et Amiti Gel (talk) 07:39, 14 January 2008 (UTC)
Turing's answer to the "argument from consciousness" is in his famous 1950 paper Computing Machinery and Intelligence. In the last chapter of Norvig & Russell's standard AI textbook, they equate Searle's argument with the one that Turing answers. Turing's reply is a version of the "Other Minds Reply", which is mentioned in this article. ---- CharlesGillingham (talk) 18:52, 15 January 2008 (UTC)

Rewrite of replies

I have rewritten the "replies" section. I wanted to include a number of very strong and interesting proposals (such as Churchland's luminous room and Dennett's argument from natural selection) that weren't in the previous version. I also wanted to organize the replies (as Cole 2004 does) by what they do and don't prove.

I am sorry to have deleted the previous versions of the System and Robot replies. These were quite well written and very clear. I tried to preserve as much of the text as I could. All of the points that were made are included in the new version. (Daniel Dennett's perspectives are amply represented, for example). However, because there were so many points to be made, some of this text had to be lost. My apologies to the authors of those sections. ---- CharlesGillingham 14:52, 9 November 2007 (UTC)

Searle falls into a trap

He set up a trap for himself and fell into it. If the computer can pass the Turing test, then it is irrelevant whether or not it "understands" Chinese. In order for it to be able to respond in a human manner, it would have to be able to simulate conversation. The answers have to come from somewhere, regardless of the language, if they are to seem natural. The thing is, Searle doesn't seem to realize that his argument is essentially equivalent to the normal definition of a Turing test. The human in his experiment is a manual Turing machine simulator. He basically tries to deny that a Turing machine can do something, but posits it as a premise in his argument. He presupposes his conclusion that a computer has no mind, and then uses an argument that has nothing to do with this conclusion at all. To sum up his argument: A computer can be built that easily passes a Turing test. A human can compute this program by hand. Therefore computers are stupid and smell bad. The only thing that the argument proves is that the human brain is at least Turing complete; I think everyone already knew that, Mr. Searle.--66.153.117.118 (talk) 20:44, 25 November 2007 (UTC)

This is encyclopedically irrelevant, and misses Searle's point that a TT passer can lack genuine semantics. 1Z (talk) 10:35, 26 November 2007 (UTC)
I'm glad that you seem to have gotten the point that the Chinese room is a universal Turing machine, and so anything a computer can do, the Chinese room can do. If a "mind" can "emerge" from any digital machine, of any architecture, it can "emerge" from the Chinese room. That's not Searle's main point (as 1Z points out), but it's essential to the argument. Searle's main point takes the form of an intuition: he can not imagine that a mind (with "genuine semantics") could emerge from such a simple set up. Of course, a lot of the rest of us can. The power of the argument is the stark contrast between Searle's understanding and the room's understanding, and the way it forces AI believers to "put up or shut up". Searle is saying "there's another mind in the Chinese room? I don't think so. Why don't you prove it!" And of course, at the end of the day, we really can't. We can only make it seem more plausible. But we thought it was plausible to begin with, and nothing will convince Searle. ---- CharlesGillingham (talk) 21:01, 26 November 2007 (UTC)
The irony is that the Turing test is also a "put up or shut up" test. I imagine Turing would have said to Searle "if you think there is some difference in causality or understanding (or whatever ill-defined concept you posit is important) between the artificial and the human "mind", prove it. Show that you can determine, using the Test, which is which". Since the Test is passed in the Chinese Room Argument, we should conclude that "causality", "understanding" or "mind" are really just philosophical mumbo-jumbo and have nothing to do with the issue. I think Searle's "success" is that he sucked everyone into trying to "find the mind" in the CR (a task equally impossible as "finding the mind" in a living, breathing human). The response should have been "Show me yours and then I'll show you mine". Gwilson 14:51, 30 November 2007 (UTC)
I think I am with Gwilson here. If the CR really does communicate in Chinese, and we accept that there is no "mind" or "understanding" in there, then it follows that a person who communicates in Chinese does not really require a mind or understanding either - whatever they mean.Myrvin (talk) 20:34, 6 April 2009 (UTC)
I wonder if the CR is really a Universal Turing Machine. I've always wondered what happens if the CR is asked "What is the time?". It would seem that the man in the room would have to understand the question in order to look at his watch. The CR could give an avoidance reply (I don't have a watch) but, if this happens often enough, I would become very suspicious. Similar difficult questions could include "How many question marks are there in this question???"; and "What is the third Chinese character in this question?" I can't help feeling that there are whole classes of questions that the CR could not answer, but a person or even a computer could. Myrvin (talk) 10:38, 30 March 2009 (UTC)
"What time is it?" is a tricky example, because Searle is billions of times slower than (we hope) a computer would be. First he would match up the new characters one at a time to the books and charts in the room, and this would give him the number of a file cabinet (one of the millions of file cabinets in the warehouse space of his room). He would open the drawer and pull out his next instruction, which would tell him to copy something he can't read and ask him to go to another file cabinet. He would putter around like this for awhile, and eventually he would find and instruction that said, "If the hour on the clock is ten then write a big X on the 46th piece of paper in the stack on your cart and goto file cabinet 34,599, drawer 2, file 168." Eventually all this puttering would lead to him pushing several dozen chinese characters under the door that would translate to, "I think it was around ten when you asked me. I don't have a watch obviously, because I'm actually just a disembodied mind without a body. You didn't know that?" Searle might have been puttering around for hours, or even years, before he put these characters under the door.
Searle, sitting in the room, has all the essential features of a UTM. He has paper, pencils and a program. This is all you need. Alan Turing showed that anything which uses just paper and pencils can simulate any conceivable process of computation (see Church-Turing thesis). Therefore, if any computer can do it, Searle can do it (given enough time and paper).
If there is some intelligent behavior that Searle can't do, then there is a serious problem, not with Searle's argument, but with "strong AI". Strong AI claims that a machine can have a mind, which implies that a machine can simulate any intelligent behaviour. (This weaker claim is Searle's "hypothetical premise", which he grants at the top of the argument.) Church-Turing implies that Searle can simulate anything the machine can do. If it turns out that there is some intelligent behaviour that Searle can't simulate, then, in a simple proof by contradiction, a machine can't simulate all intelligent behaviors, and strong AI is mistaken (about the weaker claim, not to mention the stronger claim). Do you see how this works? "A machine can simulate intelligence" implies that "the Chinese room can simulate intelligence". If the room can't simulate intelligence, then no other machine can either. ---- CharlesGillingham (talk) 05:37, 6 April 2009 (UTC)
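To make the Church-Turing point concrete, here is a minimal sketch (everything invented for illustration) of the fetch-execute loop the man in the room performs. The transition table is not Searle's rule book or any real AI program; it is the tiny 3-state "busy beaver" machine, chosen only to show that "read a symbol, look up an instruction, write, move on" is the whole mechanism a universal machine needs:

```python
from collections import defaultdict

# Transition table: (state, symbol read) -> (symbol to write, head move, next state).
# A toy stand-in for the room's vastly larger book of instructions.
RULES = {
    ('A', 0): (1, +1, 'B'), ('A', 1): (1, -1, 'C'),
    ('B', 0): (1, -1, 'A'), ('B', 1): (1, +1, 'B'),
    ('C', 0): (1, -1, 'B'), ('C', 1): (1, +1, 'HALT'),
}

def run(rules, state='A', max_steps=1000):
    tape = defaultdict(int)  # unbounded "paper", blank (0) everywhere
    head = 0
    for _ in range(max_steps):
        if state == 'HALT':
            break
        # Purely mechanical: look up, scribble, shuffle to the next cabinet.
        write, move, state = rules[(state, tape[head])]
        tape[head] = write
        head += move
    return sum(tape.values())

print(run(RULES))  # prints 6: six marks written in 13 blind steps
```

Swap in a large enough table and, per Church-Turing, the same loop runs the whole Chinese-speaking program; at no point does the operator need to know what any entry means.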
I think some of S's mind snuck into that explanation, because there is a place where S looks at the clock. The instructions might as well say "When you see these characters, look at the clock and write down the time. Then look up your time characters in some filing cabinet, write down what they match with there, and pass that out."
I think that the filing cabinets must be updated continuously for the room to work well. Otherwise any questions about events that happened after the CR was built would be difficult to answer. If this were so, then part of that updating could be a change to the matching characters for the "What is the time?" question. In effect, the filing cabinets would have a clock, unbeknown to the operator. I don't think this works for "How many question marks are there in this question???" Myrvin (talk) 20:34, 6 April 2009 (UTC)
Searle himself is updating the file cabinets, with precisely the same unreadable symbols that the computer program used to update its memory. Remember that Searle is simulating a computer program that, we assume, was capable of fully intelligent behavior. If he can't, or if the program can't exist, then the thought experiment is sort of pointless.
The original computer program could only answer the question about time if it had access to some kind of clock. To tell you the truth, I don't know how operating systems actually access their clocks. I assume it's some kind of special assembler statement. For the thought experiment to make sense, we have to assume that Searle has access to the same resources as the original program. In my example, I substituted "look at the clock" for whatever special hardware call a program needs to get at the host computer's clock. ---- CharlesGillingham (talk) 21:10, 6 April 2009 (UTC)
As I remember it, most computers keep their own clock by updating a field every millisecond or less. A Super Searle might do this by being instructed to change an entry very frequently, again using symbols he doesn't understand. However, the new information for updating the filing cabinets must come from the outside. S could receive cryptic symbols and follow instructions as to what to do with them. I am happy now with the "What is the time?" question, but am still iffy about the "How many question marks???" one. Myrvin (talk) 19:43, 7 April 2009 (UTC)
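To see the mechanism being described here, a toy sketch (names and numbers invented, nothing from Searle's paper): the rule book simply tells the operator to add a tally mark on every instruction cycle, so "what time is it?" becomes one more lookup of symbols he never understands:

```python
class SuperSearleRoom:
    """Toy clock: the operator blindly ticks a counter as he works."""

    def __init__(self, seconds_per_instruction=1.0):
        self.tick_marks = 0  # the "clock field", kept in some drawer
        self.seconds_per_instruction = seconds_per_instruction

    def execute_one_instruction(self):
        # Every instruction card also says "add one mark to the tally" --
        # symbols the operator copies without understanding them.
        self.tick_marks += 1

    def elapsed_seconds(self):
        # What the room effectively "knows", though the operator does not.
        return self.tick_marks * self.seconds_per_instruction
```

As noted above, this only keeps relative time; pinning it to the actual date still needs symbols fed in from outside.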

The Refactoring

The "blockhead" map which turns a simulation into a lookup table ( or "refactors" or whatever) requires bounded size input--- if the input can be arbitrarily long, you cannot refactor it as written. However, it is easy to get around a size limitation by doing this

"I was wondering, Mr. Putative Program, if you could comment on Shakespeare's monologue in Hamlet (to be continued)"

"Go on"

"Where hamlet says ..."

But then there's a "goto X" at each reply step, which effectively stores the information received in each chunk of data in the quantity X. If the chunks are of size N characters, the refactored program has to be immensely long, so that the jumps can go to 256^N different states at each reply, and that length must be multiplied by the number of mental states, which is enormous. So again, the argument is intentionally, perversely misleading. The length of the program is so enormous that the mental state is entirely encoded in the "instruction pointer" of the computer, which tells you what line of code the program is executing. There is so much code that this pointer is of size equal to the number of bits in a human mind. Likebox (talk) 19:47, 5 February 2008 (UTC)
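For concreteness, here is a toy sketch of the refactored program being described (the entries are invented, and only two branches are shown where a real blockhead would need on the order of 256^N per reply step):

```python
# Toy "blockhead": every reply is a pure table lookup, and the key is the
# entire conversation so far. The current key plays the role of the
# "instruction pointer" -- it is the program's only state.

OPENING = ("I was wondering, Mr. Putative Program, if you could comment "
           "on Shakespeare's monologue in Hamlet (to be continued)")

TABLE = {
    (OPENING,): "Go on",
    (OPENING, "Where Hamlet says ..."): "Ah, the soliloquy. Which line?",
    # ...a real table continues for every possible chunk sequence: with
    # N-character chunks, each reply step multiplies the key count by 256**N.
}

def blockhead(chunks):
    """chunks: the list of input chunks received so far."""
    return TABLE.get(tuple(chunks), "Go on")  # no parsing, no computation

print(blockhead([OPENING]))                           # -> "Go on"
print(blockhead([OPENING, "Where Hamlet says ..."]))  # -> "Ah, the soliloquy. ..."
```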

Your analysis of the blockhead argument is absolutely correct. Computationalism and strong AI assume that "mental states" can be represented as symbols, which in turn can be coded as extremely large numbers (represented as X in this example). "Thinking" or "conscious awareness" can be represented as a dynamic process of applying a function recursively to a very large number. Refactoring this function into a goto-table is of course possible, and requires the exponential expansion of memory that you calculated.
However, since this is only a thought experiment, the fact that no such table could ever be constructed is irrelevant. The blockhead example just drives the point home that we are talking about "lifeless" numbers here. The details of the size of the program are not really the issue—the issue is whether the mind, the self, consciousness can be encoded as numbers at all. Our intuitions about "mind" and "self" tend to slip away when faced with the utter coldness of numbers. The failure of our intuitions has to do with our inability to see that extremely large numbers are as complex and interesting as the human spirit itself. ---- CharlesGillingham (talk) 19:23, 7 February 2008 (UTC)
I'm not sure that this fact (the table could not be constructed) is irrelevant. If the table cannot be constructed, then it cannot be used as an argument in support of Searle's intuition. I might suggest that a Turing machine could not encode such a table even with an infinite tape, because the number of entries in the table might be uncountably infinite (i.e., an infinite number of entries in an infinite number of combinations).
I wanted to bring forth another point, and this seems as good a place as any. What provision does Searle or the refactoring algorithm make for words/characters which aren't in the lexicon, but which still make sense to those who "understand" the language? For one example, we've probably all seen the puzzles where one has to come up with the common phrase from entries like: |r|e|a|d|i|n|g| or cor|stuck|ner (reading between the lines and stuck in a corner), and we can decipher smilies like ;-> and :-O. To refactor, one must anticipate an infinite number of seemingly garbage/nonsense entries as well as those which are "in the dictionary". How would Searle process such a string of characters, or even a string of Chinese characters one of which was deliberately listed upside down or on its side? Gwilson (talk) 19:48, 21 February 2008 (UTC)
Well, such a table could be constructed in theory (by an alien race living in a much larger universe with god-like powers). I meant only that building such a table is impractical for human beings living on earth. (My rough upper bound on the table length is 2^(10^15) -- one entry for each possible configuration of the memory of a computer with human level intelligence.)
The size of the program and the complexity of its code are not the issue. The issue is whether a program, of any size or complexity, can actually have a mind. A convincing refutation of Searle should apply to either program—the ludicrously simple but astronomically large "blockhead" version or the ludicrously complex but reasonably sized neuron-by-neuron brain simulation. You haven't refuted Searle until you prove that both cases could, in theory, have a mind.
On the issue of infinities: it doesn't really affect the argument significantly to assume that the machine's memory has some upper bound, or that the input comes in packets (as Likebox proposes above). In the real world, all computers have limits to the amount of input they can accept or memory they can hold, so we can safely assume that our "Chinese speaking program" operates within some limits when it's running on its regular hardware. This implies that, for example, a Turing Machine implementation would only require a finite (but possibly very large) amount of tape and would have a finite number of states. Searle's argument (that he "still understands nothing") applies in this case just as easily as to a case with no limit on the memory, so the issue of infinities really does nothing to knock down Searle.
The answer to your second question is that, if the program can successfully pass the Turing test, then it should react to all those weird inputs exactly like a Chinese speaker would. Searle (in the room) is simply following the program, and the program should tell him what to do in these cases. Note that Searle's argument still works if he is only looking at a digitized representation of his input, i.e. he is only seeing cards that say "1" or "0". Searle "still understands nothing" which is all he thinks he needs to prove his point.
(And here is my usual disclaimer that just because I am defending Searle, it doesn't mean that I agree with Searle.) ---- CharlesGillingham (talk) 01:06, 22 February 2008 (UTC)
What I was hoping to show was that if you could drive the table size to infinity, then the algorithm could not be guaranteed to terminate and hence would not be guaranteed to pass the TT. The Blockhead argument only works if the program can pass the TT, since everyone agrees that a program that fails the TT does not have mind. I realize that this is pointless, because Blockhead is really just copying its answers from the mind of a living human. Given two human interlocutors (A and B), one could easily program a pseudo-Blockhead program which will pass the TT. The pseudo-Blockhead takes A's input and presents it to B. It copies B's response and presents it back to A, and so on. Provided A and B are unaware of each other, they will consider pseudo-Blockhead to have passed the TT. The only difference between Blockhead and pseudo-Blockhead is that Blockhead asks B beforehand what his answer will be for every possible conversation with A. At the end of the day, though, Blockhead is using the mind of B to answer A, the same as pseudo-Blockhead.
So, if Searle asks us to "Show me the mind" in Blockhead or pseudo-Blockhead, it's easy. It is B's mind which came up with the answers. I'm hoping this means that both Blockhead and pseudo-Blockhead do nothing to support Searle since they are in fact merely displaying the product of a mind.
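A sketch of the pseudo-Blockhead relay just described, with all names invented for illustration; the only difference between it and Blockhead proper is whether B is consulted live or exhaustively in advance:

```python
def make_pseudo_blockhead(ask_b):
    """ask_b: a stand-in for a live channel to human B (an assumption here).
    The returned chatbot relays A's message to B verbatim and returns B's
    reply verbatim -- it contributes no processing of its own."""
    def reply(message_from_a):
        return ask_b(message_from_a)
    return reply

def make_blockhead(ask_b, every_possible_conversation):
    """Blockhead proper: ask B about every conversation ahead of time, then
    answer by lookup. Either way, the answers are the product of B's mind."""
    cache = {conversation: ask_b(conversation)
             for conversation in every_possible_conversation}
    return lambda conversation: cache[conversation]
```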
Getting back to the smilies and such: I recall that back around the time Searle was writing his paper, one of the popular uses of computers was to produce "ASCII art": pictures, some basic stick figures and others huge and detailed, printed on line printers or terminals using ASCII keyboard characters. These are instantly recognizable to anyone with "mind"; however, they do not follow any rules of syntax. In essence, they are all semantics and no syntax. Can Searle's argument, that the program is merely manipulating "symbols" according to syntactical rules without "understanding", apply when the input has no symbols and no syntax? Having those things in the input is, I think, somewhat crucial to Searle's argument. However, I find that Searle's argument is slippery; when cornered on one front, his argument seems to change. Does the CR not have mind because it processes only syntax without semantics, or because computers don't have causality? Are the symbols the 1's and 0's of the computer or the symbols of the language? I don't know. Gwilson (talk) 21:24, 23 February 2008 (UTC)
Well, no, the ASCII pictures are still made of symbols, and he's still manipulating them according to the syntactic rules in his program. So he's still just following syntactic rules on symbols. The semantics is the picture (as I recall, back when I was a kid, it was usually a Playboy centerfold). It's important for his argument that he can't figure out what the symbols mean, so it's important that he's never able to actually see the picture --- like if he gets the characters one at a time. He only manipulates them syntactically (i.e. meaninglessly, e.g. sorting them into piles, comparing them to tables, putting some of them into the filing cabinets, taking others out, comparing #20 to #401 to see if they're the same, counting the ones that match, writing that down and putting it in drawer #3421, going to filing cabinet #44539 and getting his next instruction, etc.), never noticing that all of these characters would make a picture if laid out on the floor in the right order. Eventually he gets to an instruction that says "Grab big squiggly boxy character #73 and roundish dotted character #23 (etc.) and put them through the slot." And the guy outside the room reads: "Wow. Reminds me of my ex-wife. You got any more?" The virtual Chinese Mind saw the picture, but Searle didn't.
The point is, his program never lets him know what the input or output means, and that's the sense in which his actions are syntactic. It's syntax because the symbols don't mean anything to him. He doesn't know what the Chinese Mind is hearing (or seeing) and he doesn't know what the Chinese Mind is saying.
In answer to your question, the argument is supposed to go something like this. The only step in the argument that is controversial is marked with a "*".
  1. CR has syntax. CR doesn't have semantics.* Therefore syntax is insufficient for semantics.
  2. Brains cause minds, i.e. brains must have something that causes a mind to exist. We don't know what it is, but it's something; let's call it "causal powers". Brains use causal powers to make a mind.
  3. Every mind has semantics. CR doesn't have semantics. Therefore CR doesn't have a mind. Therefore CR doesn't have causal powers.
  4. Computers only have syntax. Syntax is insufficient for semantics. Every mind has semantics. Therefore computers can't have a mind. Therefore computers don't have causal powers.
Again, the only real issue is "CR doesn't have semantics". Everything else should be pretty obvious.
"Has syntax" means "uses symbols as if they didn't stand for anything, as if they were just objects."
"Has semantics" means "uses symbols that mean something", or in the case of brains or minds, "has thoughts that mean something."
Does that help? ---- CharlesGillingham (talk) 09:06, 24 February 2008 (UTC)
Yes, thanks, it helps me understand Searle's argument better. Part of the reason Searle's description of the CR is so engaging (to me) is that he makes it easy to see where the "illusion of understanding" comes from. The syntax of the input language (Chinese in this case) allows the program to decode inputs like <man symbol> <bites symbol> <dog symbol> and produce output <dog symbol> <hurt symbol> <? symbol> without an "understanding" of what dogs are or biting is. When the input has no rules of syntax for the program to exploit, I can't imagine how the program parses it and produces the "illusion of understanding". Of course, this is unimportant to Searle's argument. The CR can only process using its syntax rules; it has no "understanding". It doesn't matter to Searle where the "illusion" comes from, it only matters that there is no "real" understanding, since the CR uses only syntax to arrive at a response.
I want to ponder this thought: if the input contains no rules of syntax which encode/hide/embed a semantic, then any apparent semantic produced must be "real semantic" and not "illusion of semantic". Once again, that will depend on what is meant by "semantic" and "understanding". It is like a magician pulling a quarter out of your ear when beforehand we checked every possible place in the room, on the magician, and on you (except your ear) and found no hidden quarters. If he can pull a quarter out of your ear, it's either really magic or the quarter was in your ear. Gwilson (talk) 15:33, 25 February 2008 (UTC)

(deindent) There is a presumption in the discussion here, and with Searle's argument in general, that it is relatively easy to imagine a machine which does not have "real semantics" and yet behaves as if it does, producing symbols like "dog hurt" from "man bites dog" in a sensible way without including data structures which correspond to any deep understanding of what the symbols mean.

This intuition is entirely false, and I don't think many people who have done serious programming believe it. If you actually sit down to try to write a computer program that tries to extract syntactical structure from text for the purpose of manipulating it into a reasonable answer, you will very quickly come to the conclusion that the depth of data structures required to make sense out of the written text is equal to the depth of the data structures in your own mind as you are making sense out of the text. If the sentence is about "dogs", the program must have an internal representation of a dog capable of producing facts about dogs, like the fact that they have legs, and bark, and that they live with people and are related to wolves. The "dog description" module must be so sophisticated that it must be able to answer any conceivable intuitive question about dogs that people are capable of producing without thinking. In fact, the amount of data is so large and so intricately structured that it is inconceivable that the answer could predictably come out with the right meaning without the program having enough data stored that the manipulations of the data include a complete understanding. Since the data structures in current computers are so limited and remote from the data structures of our minds, there is not a single program that comes close to being able to read and understand anything, not even "One fish two fish red fish blue fish".

This is known to all artificial intelligence people, and is the reason that they have not succeeded very well in doing intuitive human things like picture recognition. Searle rewords the central difficulty into a principle: "It is impossible to produce a computational description of meaning!" But if you are going to argue that the Turing test program is trivial, you should at least first show how to construct a reasonable example of a program that passes the Turing test, where by "reasonable" I only mean requiring resources that can fit in the observable universe. Likebox (talk) 17:40, 25 February 2008 (UTC)

You've just given the "contextualist" or "commonsense knowledge" reply, served, as is customary, with a liberal sprinkling of the "complexity" reply. (Which I find very convincing, by the way. And so do Daniel Dennett and Marvin Minsky. Your reply is very similar to Dennett's discussion in Consciousness Explained.) You're right that "the depth of data structures that is required ... is equal to the depth of the data structures in your own mind", as it must be. Unfortunately for defeating Searle, he doesn't agree there are 'data structures' in your mind at all. He argues that, whatever's in your head, it's not "data structures"; it's not symbolic at all. It's something else. He argues that, whatever it is, it is far more complicated than any program that you can imagine, in fact, far more complicated than any possible program.
Note that, as the article discusses (based on Cole, Harnad and their primary sources), an argument that starts out "here's what an AI program would really be like" can, at best, only make it seem more plausible that there is a mind in the Chinese Room. At best, they can only function as "appeals to intuition". My intuition is satisfied. Searle's isn't. What else can be said? ---- CharlesGillingham (talk) 18:34, 25 February 2008 (UTC)

Forest & Trees

I've made a number of changes designed to address the concerns of an anonymous editor who felt the article contained too much "gossip". I assume the editor was talking about the material in the introduction that gave the Chinese Room's historical context and philosophical context. I agree that this material is less on-point than the thought experiment itself, so I moved the experiment up to the top and tucked this material away into sections lower in the article where hopefully it is set up properly. I put the context in context, so to speak. If these sections are inaccurate in any way (i.e., if there are reliable sources that have a different perspective) please feel free to improve them. ---- CharlesGillingham (talk) 19:23, 7 February 2008 (UTC)

Definition of Mind, Understanding etc

I think we've touched on this before in the talk section, but one of the things Searle doesn't do is define what he means by "has mind" or "understands". He says in his paper, "There are clear cases in which "understanding" literally applies and clear cases in which it does not apply; and these two sorts of cases are all I need for this argument." He makes this claim because we can all agree that whatever "understanding" is, we know that Searle doesn't have it in regards to Chinese.

However, at the end of the day he has left a vital part of his argument undefined, and in doing so prevented people from discovering potential flaws in his argument. While Searle's argument has a certain mathematical "proofiness" about it, because he doesn't define key terms like "understanding" or "has mind", it isn't a real proof, only an interesting philosophical point of view.

What I'm wondering is, can we somehow get the fact that Searle doesn't define understanding into the first few paragraphs? Something like: "Searle does not attempt to define what is meant by understanding. He notes that "There are clear...." ".

The Turing test deliberately avoids defining terms like mind and understanding as well. So I think we could follow those words with something like CharlesGillingham's earlier words here: "When it comes to consciousness, Turing recommended we follow a "polite convention" that if it acts like it thinks, we'll just go ahead and say "it thinks"."

Does anyone feel that would improve the article? —Preceding unsigned comment added by Gwilson (talkcontribs) 14:49, 28 February 2008 (UTC)

This could fit in nicely right after (or mixed in with) the paragraph where David Chalmers argues that Searle is talking about consciousness. I like the Searle quote. The truth is, defining "understanding" (or what philosophers call intentionality) is a major philosophical problem in its own right. Searle comes from the ordinary language philosophy tradition of Ludwig Wittgenstein, J. L. Austin, Gilbert Ryle and W. V. O. Quine. These philosophers insisted that we always use words in their normal, ordinary sense. They argue that "understanding" is defined as 'what you're doing when you would ordinarily say "Yes, I understand."' If you try to create some abstract definition based on first principles, you're going to leave something out, you're going to fool yourself, you're going to twist the meaning to suit your argument. That's what has usually happened in philosophy, and is the main reason that it consistently fails to get anywhere. You have to rely on people's common sense. Use words in their ordinary context -- don't push them beyond their normal limits. That's what Searle is doing here.
Turing could fit in two places. (1) In the paragraph that argues that the Chinese Room doesn't create any problems for AI research, because they only care about behavior, and Searle's argument explicitly doesn't care how the machine behaves. (2) In the "other minds" reply, which argues that behavior is what we use to judge the understanding of people. It's a little awkward because Turing is writing 30 years before Searle, and so isn't directly replying to the Chinese Room, and is actually talking about intelligence vs. consciousness, rather than acting intelligent vs. understanding. But, as I said above, I think that Turing's reply applies to Searle, and so do Norvig & Russell. ---- CharlesGillingham (talk) 10:00, 4 March 2008 (UTC)
Turing is in there now, under "Other minds". ---- CharlesGillingham (talk) 17:41, 2 April 2009 (UTC)

Footnote format

This is just an aesthetic choice, but, as for me, I don't care for long strings of footnotes like this.[1][2][3][4][5] It looks ugly and breaks up the text. (I use Apple's Safari web browser. Perhaps footnotes are less obtrusive in other browsers.) So I usually consolidate the references for a single logical point into a single footnote that lists all the sources.[6] This may mean that there is some overlap between footnotes and there are occasionally several footnotes that refer to the same page of the same source. I don't see this as a problem, since, even with the redundancy, it still performs the essential functions of citations, i.e. it verifies the text and provides access to further reading. Anyone else have an opinion? (I'm also posting this at Wikipedia talk:Footnotes, to see what they think.) ---- CharlesGillingham (talk) 17:30, 24 March 2008 (UTC)

I have recombined the footnotes by undoing the edit that split them up. I admire the effort of the anonymous editor who undertook this difficult and time-consuming task, but unfortunately I found mistakes. For example, a reference to Hearn, p. 44 was accidentally combined with a reference to Hearn, p. 47. Also, as I said above, I don't think the effort really improved the article for the reader. Sorry to undo so much work. ---- CharlesGillingham (talk) 06:17, 27 March 2008 (UTC)
This link no longer works (aol killed the site): Searle, John (1980), "Minds, Brains and Programs", Behavioral and Brain Sciences 3 (3): 417–457, http://members.aol.com/NeoNoetics/MindsBrainsPrograms.html, retrieved on October 8 2008 . Myrvin (talk) 09:23, 9 April 2009 (UTC)
Repaired, link now points to most recent archive.org version. Paradoctor (talk) 14:00, 13 May 2009 (UTC)

Empirical Chinese Rooms

I'm posting here in the discussion section because I have a conflict of interest, but I think this is the best place to bring my question....

Recently I self-published an article on Philica called The Real Chinese Room, in which I replicated Searle's experiment using a version of the ELIZA code. As I discovered later, Harre and Wang conducted a similar experiment in 1999 and published it in the Journal of Experimental & Theoretical Artificial Intelligence (11 #2, April). I haven't been able to find a copy, but from their very terse abstract it would appear that their experiment confirmed Searle's assumption. Mine did not.

It seems like at least Harre and Wang's work would be a useful contribution to the article if anyone could locate it. My own article has not been peer-reviewed to date, so it is not a reliable source (yet). But I would hope that the article could be expanded to look at the empirical work done on this problem. Ethan Mitchell (talk) 21:31, 8 May 2008 (UTC)

I don't understand how someone could "replicate Searle's experiment using a version of the ELIZA code." ELIZA is light-years away from being able to "perform its task so convincingly that it comfortably passes the Turing test." Dlabtot (talk)

Removed text (could be re-added)

I removed this new paragraph because it was unencyclopedic and it had no sources.

But if the man has memorized all the rules, allowing him to produce Chinese using only his mind, has he then not learned Chinese? Is that not how learning a language goes, by memorizing larger and larger portions of grammar and thereby producing valid sentences?

However, this is a legitimate argument that has been made by some scholars. It could be used if it were written like this:

Gustav Boullet has argued that, to the contrary, the man who has memorized the rules does understand Chinese, although in an unusual way.[7]

That is, with an attribution to a scholar and footnote containing a reference to that scholar's work. ---- CharlesGillingham (talk) 15:50, 15 October 2008 (UTC)

Chinese Box is not another name for Chinese Room argument

This argument is not, to my knowledge, ever known as the "Chinese Box" argument, so I have removed this from the intro.

A "Chinese Box" is a box that contains other boxes. It's a metaphor for a complicated problem. Larry Hauser wrote a paper called "Searle's Chinese Box: Debunking the Chinese Room Argument," which is a kind of play on words, connected the "chinese box" metaphor with the the "chinese room" argument.

If you Google '"chinese box" Searle' I don't believe you will get anything substantial except Hauser's paper, citations of Hauser's paper, etc. Correct me if I'm wrong. ---- CharlesGillingham (talk) 05:15, 10 December 2008 (UTC)

Original research

Dears, I'd really like to delete all paragraphs titled "What they do and don't prove." Such personal opinions on the topics are not acceptable in an encyclopaedia, as they represent original research, don't they? This discussion page is the right place for such discussion paragraphs. Or am I allowed to write down my personal opinion on the strength of some argument anywhere in any main article?

91.6.122.241 (talk) 10:01, 6 January 2009 (UTC) Marco P

This article presents a series of arguments and counter arguments and counter-counter arguments. Each of the arguments presented is based on reliable sources. (including those in the "what they do and don't prove" sections). None of them are based on personal speculation or original research. The ample footnotes should indicate exactly where this material is coming from. ---- CharlesGillingham (talk) 12:05, 9 January 2009 (UTC)
No, this article is ALL bias and attempts to show that Searle is correct by assuming he is correct. Merely citing opinions doesn't make this a legitimate article - especially when one side of the argument is being entirely favored over the other.
Oh, by the way, I dislike the title "What they do and don't prove", because it's too chatty for an encyclopedia. I just couldn't come up with a better title for these parts. (The article needs these parts because Searle makes similar counter-counter arguments to all the replies in each section. "What they do and don't prove" is a way to put these all in one place. Unless we want to repeat ourselves relentlessly in a twenty-page article, we need to place Searle's reply-to-the-replies in one place.) ---- CharlesGillingham (talk) 17:52, 2 April 2009 (UTC)

Comment about a ref

I removed from the main article the following comment by an unregistered user "Sorry, I don't know where else to put this edit: note #58 references a Dennett work from 1997, but no such work appears in the References section. Maybe it is supposed to be his 1991 work." Kaarel (talk) 20:04, 21 January 2009 (UTC)

This is fixed. The anon is right: the quote comes from Dennett's Consciousness Explained, (1991) ---- CharlesGillingham (talk) 09:38, 22 January 2009 (UTC)

Why is this even here?

This argument is silly. If the machine were instead a human Chinese speaker, you could make the same claim. That is, the ignorant man could simply hand the symbols to the Chinese speaker, and hand back the results. He could also memorize the symbols in this manner.

The attempted formal argument just begs the question. It assumes Strong AI is false by assuming that computers are only symbol-manipulating syntactic machines. In real life, computers can take images, sounds, etc. as input just as humans can.

His abstract claims regarding the computer not being a mind have to do with a bunch of metaphor-based reasoning involving things like the fact that the machine doesn't look like a human... —Preceding unsigned comment added by 96.32.188.25 (talk) 11:32, 16 April 2009 (UTC)

This whole article is just one big fallacy. —Preceding unsigned comment added by 96.32.188.25 (talk) 11:29, 16 April 2009 (UTC)

Most scholars agree that the argument is "obviously wrong", but in order to show this, you need to be able to tell Searle exactly what a "mind" is and how we would know that a machine has one. This is harder than it looks at first glance, and that is why the thought experiment is so notable.
By the way, the images and sounds that a computer uses are, in fact, composed of symbols: the 0s and 1s of computer memory. Searle would argue that the computer only sees these bits, and can never see the picture. ---- CharlesGillingham (talk) 19:02, 18 April 2009 (UTC)
No, the images and sounds actually can be composed of analog input. I think you would make many Electrical Engineers mad with such a foolish statement. —Preceding unsigned comment added by 134.84.0.116 (talk) 22:13, 16 November 2009 (UTC)
Yet he doesn't have any quarrel with the fact that humans only see it as 'photons'; somehow neural chemistry performed on photons is more magical than electromagnetic logic performed on electrons. Although I understand this article needs to exist for historical reasons, I still don't see why it's being portrayed as a valid conjecture, and not what it really is: utterly subjective poppycock. 70.27.108.81 (talk) 09:53, 16 November 2009 (UTC)
I agree that Searle's arguments are pretty much without any solid foundation: ridiculous and impossible premises and flawed definitions leading to false conclusions. But the article exists because the subject is notable, and the existence of the article is completely unrelated to the validity or lack of validity of Searle's ideas. Dlabtot (talk) 17:46, 16 November 2009 (UTC)
Except for the fact that the article takes the position that Searle's argument is sound - which isn't the common consensus amongst scientists (neural, psychological, computer). It needs a section that says that his position is ludicrous and that he assumes what he is trying to prove. —Preceding unsigned comment added by 134.84.0.116 (talk) 22:16, 16 November 2009 (UTC)
I don't agree with your assertion that "the article takes the position that Searle's argument is sound". However, you are welcome and encouraged to attempt to improve the article. Dlabtot (talk) 22:27, 16 November 2009 (UTC)

Disclosure

I just did a little work on the Further Reading section, which links to the arXiv preprint "Demolishing Searle's Chinese Room", which was written by me. I'd like to point out that I did not place the link, and was in fact a bit surprised to discover it here. —Preceding unsigned comment added by Paradoctor (talkcontribs) 13:04, 28 April 2009 (UTC)

This argument disproves the wrong point

As I read it, the human in this argument acts as the CPU and manually executes the AI software. Since when has anyone ever argued that the CPU was what was artificially intelligent (or had any understanding) instead of the software? I would say this entire argument is a quintessential example of a strawman -- Skintigh (talk) 21:44, 5 May 2009 (UTC)

Except for the strawman part, this is an inferior version of systems reply. If you want to discuss the Chinese room argument, please go to comp.ai.philosophy, or some other forum where this topic is debated. The talk page is only for discussion on how to improve the article. Please take a few minutes to read Wikipedia:Talk#How_to_use_article_talk_pages. Have a good one, Paradoctor (talk) 07:16, 6 May 2009 (UTC)
Talk pages are designed for discussion of anything that can make the article better. This means that the subject of the article needs to be discussed to some extent, to determine how credible it is, so that the appropriate tone and style can be decided. That's how you decide the tone for "intelligent design" or for "flat earth".
Every technically trained person immediately sees the fallacy in Searle's argument, so what Skintigh is saying is that this argument should not be made to look so respectable. It should be given the same treatment as other obvious fallacies. But in this case you run into C.P. Snow's "two cultures" phenomenon. Only one of the cultures thinks that the argument is totally bogus.
There's also a certain question of priority, again within two cultures. A version of this argument is made by Turing in the Turing test paper, as the "argument from internal experience". So maybe this article should start "The Chinese room is Searle's earnest elaboration of Turing's 'argument from internal experience' into a philosophical position rejecting artificial intelligence". But Turing didn't believe this argument, so he doesn't get priority. Is this fair? Likebox (talk) 14:24, 26 May 2009 (UTC)
"to determine how credible it is": I'm afraid you're mistaken here. May I refer you to WP:TRUTH? What has to be discussed is which reliable sources hold which view on the "credibility" of the subject. This is done by providing quotations and/or referenced summaries of such views.
"technically trained person immediately sees the fallacy": That is your opinion, not what the literature indicates. Someone, I believe Searle himself, noted that while his critics all agree he is wrong, everyone has a different argument. (Quite like the second Dingle controversy, BTW.)
"A version of this argument is made by Turing in the Turing test paper, as the 'argument from internal experience'": Turing's paper does not contain an "argument from internal experience". Regards, Paradoctor (talk) 17:44, 26 May 2009 (UTC)
It's called "The Argument From Consciousness" in Turing's paper, sorry I misremembered. It's section 4 of the responses part.
By "credibility" I do mean the opinion on credibility of the subject by reputable sources. Every single technical person, all papers by technical authors that I have read, dismiss Searle's argument as infantile. This is balanced by the "other culture" sources, where people talk about this argument as if it had weight.
So what do you do? It's analogous to creationism. In evangelical Christian circles, creationism is valid. In scientific circles, it's obvious bunk. This is a secular version of the same thing. In humanistic philosophical circles, Searle's argument is considered valid. In technical circles, it's bunk. Not to be bigoted, but I tend to give more weight to the technical people on matters involving AI. Likebox (talk) 19:07, 26 May 2009 (UTC)
The argument from consciousness is not Turing's, but Jefferson's, so he should get credit, if anyone. Now, if you can cite a reliable source stating that Jefferson has priority, then that belongs into the article. Otherwise, it doesn't, I'm afraid.
"people talk about this argument as if it had weight": These "people" are scientists publishing peer-reviewed research. The fact that other scientists disagree does not give us licence to pick the opinion we like best. In cases where no scientific consensus exists, we have to report on the various points of view, duly weighted. And the weights are not established by us, but by reliable secondary and tertiary sources. Of the cited, reliable kind.
"evangelical christian" ... "secular version" ... "humanistic philosophical": Owch. Generally speaking, I'm not a fan of philosophy, but ... sheesh, man! ^_^ Paradoctor (talk) 10:57, 27 May 2009 (UTC)
Well, you need to attack garbage, otherwise there will be no progress in this philosophy business. They're considering important questions, they shouldn't be stuck at first base.
The people who support this argument are not scientists, they're philosophers. Scientists (if they have a position, which isn't always the case) often accept the identification of mind and software, and strong AI. Likebox (talk) 14:02, 27 May 2009 (UTC)
"need to attack garbage": WP:GREATWRONGS In here, we do not attack garbage, we report on it, but only if it stinks enough. ;)
"not scientists, they're philosophers": That is your opinion, and to a degree, I share it. But that doesn't mean Wikipedia ignores them. Their opinions and ideas are notable, and thus they are reported on in here. Paradoctor (talk) 17:51, 28 May 2009 (UTC)
It is simply false that "Every single technical person, all papers by technical authors that I have read, dismiss Searle's argument as infantile." Jeff Hawkins, inventor of the palm pilot, founder of the promising Numenta AI research program, argues that Searle is right in his book On Intelligence. The leading AI textbook (Russell & Norvig) says that most AI researchers consider the argument "irrelevant", which is not the same thing as "infantile". (This article has a section (Chinese Room#Strong AI v. AI research) which tries to explain that, even if Searle's Strong AI turns out to be false, there is still no limit to what AI can accomplish.) The Chinese Room, as the article explains, is really not about AI at all. It is an argument about the ontological status of the human mind. This is not a trivial topic that can be easily dismissed as having an obvious answer. ---- CharlesGillingham (talk) 02:01, 29 May 2009 (UTC)

Chinese Room in popular culture

I'm new at all this, so pardon any mistakes. I'd added a mention of a film called The Chinese Room, and I see that this info was removed on 2 May as non notable trivia. I'd like to discuss this. I'll disclose that I do know something about the production of this film, but I have a degree in philosophy and my interest is in enriching this article.

My original reasoning was that a cultural work that engages with a topic can be important, as the artwork allows the idea into wider culture. In this case, a film is named after the CR argument (thus it shares a name with this article) and has an explicit dramatization of the argument. I'm wondering - for an artwork like this film, does notability come from the degree to which it engages with the topic, or does it come from the fame of the artwork? For instance, an imperfect comparison might be the book and related film(s) In Cold Blood which are mentioned in the article on the murderer Richard Hickock. In Cold Blood is very famous and its writing impacted the events it describes, so a better example might be the mention of the film Finding Neverland in the article on J.M.Barrie, the film's (far more famous) subject. As to norms of notable trivia, I see many pages in which a half-dozen episodes of The Simpsons (or even less important TV shows) are logged for referencing a topic. Even if that level of pop-culture referencing is undesirable, a film like this seems to me more of a "good-faith" cultural reference than does a catalog of lines from TV shows. Maybe in this case, Pop-Culture References is not the right section header to use.

I do see the need to clean up loose ends in articles, and I'm aware that a pop-culture reference isn't vital to the rest of the article. However, I looked at discussions in the Village Pump section and there doesn't seem to be a consensus that references like this should be removed. Thanks everyone, Reading glasses (talk) 05:16, 12 May 2009 (UTC)

Welcome to Wikipedia. Your talk page has a welcome message with a few useful links for you. You've raised a good point. Before we dive into discussion, I'd like to know from Dlabtot what criteria (policies/guidelines/...) he applied in labeling the reference as non-notable. Paradoctor (talk) 18:41, 12 May 2009 (UTC)
As far as I can tell, the film is not in any way notable. As in, not worth noting. It didn't even receive any mainstream reviews that I could find. Comparisons to The Simpsons, one of the most popular TV shows of all time and a pop-culture phenomenon nonpareil, or In Cold Blood, a best-selling novel, are patently ridiculous. Dlabtot (talk) 23:12, 16 May 2009 (UTC)
And, with all due respect, you have things backwards. If you think it is notable, you need to show the sources that demonstrate this. Dlabtot (talk) 23:18, 16 May 2009 (UTC)
Notability "refers to whether or not a topic merits its own article", and is not at issue here. The correct yardstick is relevance to the subject. Offhand, I'm not aware of any other philosophical thought experiment that has inspired a feature film. This is a unique and distinguishing feature and clearly belongs into an encyclopedic article. Paradoctor (talk) 00:04, 17 May 2009 (UTC)
Encyclopedic content must be verifiable to reliable sources. Why don't you provide some reliable sources that talk about this film, and then we can have some discussion about whether it is relevant to this article. But we need to have some sources first, not just a listing on a website that lists all films. Dlabtot (talk) 00:27, 17 May 2009 (UTC)
I'm waiting to hear from the director. Meanwhile, do you have any other arguments against inclusion? Regards, Paradoctor (talk) 15:03, 17 May 2009 (UTC)
This isn't a debate; if you come up with any sources, we can discuss whether those sources justify inclusion of the material. Dlabtot (talk) 15:31, 17 May 2009 (UTC)
As long as you keep participating, it is. ;) Anyway, if you don't want to continue the debate right now, no problem. Paradoctor (talk) 17:48, 17 May 2009 (UTC)
Well, no. This is an encyclopedia, not a debate. Whether material that is not cited to reliable sources should be included in Wikipedia is not up for debate. Dlabtot (talk) 18:32, 17 May 2009 (UTC)
Then again, I didn't say we should, did I? According to the director, the film should be commercially available "within perhaps two months". I'll see if I can get more out of him while we wait. Paradoctor (talk) 20:07, 17 May 2009 (UTC)
You might want to read WP:UNDUE while you're at it. Dlabtot (talk) 00:28, 17 May 2009 (UTC)
Sure. Why? Paradoctor (talk) 15:03, 17 May 2009 (UTC)
Because it is an important and highly relevant portion of our policies. Dlabtot (talk) 15:31, 17 May 2009 (UTC)
While I appreciate books levitating my way, it doesn't answer the question. Let me lay it out for you: WP:UNDUE is about the relative attention given to a set of conflicting views on a given topic. This has nothing to do with the cultural impact of the article's subject. Paradoctor (talk) 17:48, 17 May 2009 (UTC)
Putting obscure material or viewpoints into articles that has not received any discussion in reliable sources certainly would give that material or viewpoint undue weight. Again, if you feel so strongly that this should be included, you should just find some sources in support. Arguing about it won't achieve anything. Dlabtot (talk) 18:30, 17 May 2009 (UTC)
At the very least it might serve to educate me. ;) Anyway, your last reply was sufficiently enlightening for the moment. I look forward to resuming our little chat when citable material has arrived. Regards, Paradoctor (talk) 20:07, 17 May 2009 (UTC)
Thanks for the discussion. Dlabtot, your point about reliable sources is well taken. As to comparisons to something like The Simpsons, it might be ridiculous to argue that the film is equally important culturally, but my analogies were about the relationship of the cultural work to the topic - I still assert that a film about a topic is more relevant to that topic than a single reference in an unrelated TV series.
Wouldn't the ultimate question here be something like, Would a person interested in this topic want to know that this film exists? Why not err on the side of inclusion? Reading glasses (talk) 17:08, 17 May 2009 (UTC)
Unless someone has some sources to bring forward, there is really nothing to talk about. Our policies are clear. As I said above, this is not a debate. If sources exist that warrant inclusion of the material, it should be included. Dlabtot (talk) 17:15, 17 May 2009 (UTC)


Official site Bare Bones page, click on "pictures", one of the images should be recognizable to anyone in here. I can provide a screenshot if the page doesn't work in your browser. Paradoctor (talk) 15:19, 19 May 2009 (UTC)

I acknowledge the director's desire to use Wikipedia to promote his film, which is understandable since it seems to have been ignored by every reliable source on the planet earth. Dlabtot (talk) 15:44, 19 May 2009 (UTC)
Are you saying he is misrepresenting the relationship between his film and the thought experiment? Paradoctor (talk) 16:03, 19 May 2009 (UTC)
No, I'm saying: "I acknowledge the director's desire to use Wikipedia to promote his film, which is understandable since it seems to have been ignored by every reliable source on the planet earth." Dlabtot (talk) 17:36, 19 May 2009 (UTC)
Let me rephrase that: Is Rulf misrepresenting the relationship between his film and the thought experiment? Paradoctor (talk) 18:26, 19 May 2009 (UTC)
Who cares? I don't have the slightest interest in this unknown person's comments about his un-notable film. This talk page is intended for discussion of improvement to the Chinese room article. Encyclopedic content must be verifiable. If you find any reliable sources that discuss this film, don't hesitate to post them here. Dlabtot (talk) 18:38, 19 May 2009 (UTC)
That's what we're talking about, improving the article. We have a difference of opinion whether mentioning the film improves the article or not. Are you going to participate in that discussion, or do you prefer to participate in dispute resolution? Paradoctor (talk) 19:20, 19 May 2009 (UTC)
Well, I have been talking about trying to improve the article. That's why I've repeatedly encouraged you to find reliable sources that verify any material you wish to put into the article. By all means, if you wish to pursue some sort of dispute resolution, do so. Dlabtot (talk) 20:04, 19 May 2009 (UTC)
Just let me get this straight: You do not consider an explicit statement from the director/writer of the film a reliable, verifiable source on what the film is about? Paradoctor (talk) 20:55, 19 May 2009 (UTC)
Please post your question on the reliable sources noticeboard. I also strongly encourage you to pursue whatever form of dispute resolution you believe is appropriate. Dlabtot (talk) 21:03, 19 May 2009 (UTC)
PS, I should mention that as a regular contributor at RSN, I customarily recuse myself from discussions there when I have prior involvement. And, BTW, I find http://www.barebonesfilmfestivals.org/ more promising as a possible RS, and it also seems likely that there may have been coverage in local media in Muskogee, OK, which would probably be RS... Dlabtot (talk) 01:50, 20 May 2009 (UTC)
I'm not sure about your rationale for recusing yourself, as nothing on the noticeboard states that the replies are authoritative. Then again, that's your choice. The festival page may be RS or not, but it doesn't have enough information. WRT the local newspapers, I've had that idea, too. Only the Muskogee Phoenix, and they just list festival winners. As requested, I was at the RSN, and you no doubt have read the discussion. Presuming the statement appears in citable form, would you still oppose mentioning the movie? As for the format, I was thinking about uploading the mail to Wikisource as the most convenient approach. Paradoctor (talk) 14:37, 23 May 2009 (UTC)
A mention that a movie won an award would be appropriate for an article about that movie. I don't see what it would add to someone's understanding of Searle's thought experiment. Dlabtot (talk) 17:16, 23 May 2009 (UTC)
My words: "mentioning the movie". Your words: "mention that a movie won an award". Can you see how someone would get confused? To be totally clear: I only gave a summary of the Muskogee Phoenix article in order to show that it is of no use to this article, because it contains only facts already available on the festival site. What I do want to be mentioned in the article is that the movie exists, and how and why it relates to the thought experiment. Basically, a sourced and updated version of what we already had.
As far as "add to someone's understanding of Searle's thought experiment" is concerned, your missing the point of a cultural references section. Check this out, and tell me how the works mentioned help understanding the event that inspired them. The point of cultural references is to report on the influence the topic has outside of the specialist context, not to provide further exposition. Paradoctor (talk) 18:39, 23 May 2009 (UTC
No, I'm not missing the point. The operative word is 'pop', as in popular. Unknown, undistributed, unreviewed movies are not part of pop culture. Nor really a part of our shared culture. I really have no idea why this argument is continuing, frankly. Unless there are sources that discuss the movie, there is nothing to talk about. Dlabtot (talk) 18:45, 23 May 2009 (UTC)
PS, I recuse myself because the noticeboard is for getting fresh eyes and opinions about sourcing issues. Often, parties to a dispute come to the noticeboard and dominate the discussion by repeating the same arguments that they were not able to resolve on their own. It's not helpful. Dlabtot (talk) 17:57, 23 May 2009 (UTC)
I understand, so you're not trusting yourself? ;) Paradoctor (talk) 18:39, 23 May 2009 (UTC)
I have no idea what your comment is supposed to mean, although it sounds vaguely insulting. If you wish to make comments about me, it would be more appropriate to make them on my talk page, rather than here. Dlabtot (talk) 18:43, 23 May 2009 (UTC)
Trust me, if I had intended to insult you, there would have been nothing vague about it. I tried to lighten the mood a little, including smiley. For the record: I did not intend to slight you in any way. Paradoctor (talk) 21:15, 23 May 2009 (UTC)
I really don't care whether you insult me or not, that's not the point. The point is that this page is not intended for commentary about me by you, whether positive or negative. So please stop. Dlabtot (talk) 23:59, 23 May 2009 (UTC)

New image

File:ChineseRoom2009 CRset.jpg

We now have a legal screenshot. I'd like to use this as the new illustration. Comments? Paradoctor (talk) 21:40, 20 May 2009 (UTC)

Dlabtot, could you please state your opinion? I want to avoid needless edit/revert cycles. Paradoctor (talk) 14:39, 23 May 2009 (UTC)

Perhaps you could state your opinion first. Why do you want to use this illustration? In what way does it improve the article? Does it better illustrate the thought experiment that is the subject of the article than the current illustration, and if so, why? Dlabtot (talk) 17:13, 23 May 2009 (UTC)
Fair enough. The current illustration does not illustrate the thought experiment. It shows someone sitting on a chair in front of a door, with a slip of paper on the floor between them. Even someone who knows the thought experiment would not recognize it without being primed. The new image features input/output slots, Chinese characters, a set of books representing the program, and an operator actually working. The new image is a clear improvement over the old one. Paradoctor (talk) 18:58, 23 May 2009 (UTC)
I'd say it has a false air of verisimilitude. It's clear from the current illustration that this is a thought experiment - and it allows the thinker free rein to do the thinking. From what I know of computing and the kind of processing power that would be required to pass a Turing test (not yet achieved... will it ever?), I certainly don't picture a little room with a few books in it, and I think such an illustration would hurt a less knowledgeable person's understanding of what is involved. Dlabtot (talk) 19:15, 23 May 2009 (UTC)
  • "I'd say it has a false air of verisimilitude.": What is that supposed to mean?
  • "It's clear from the current illustration that this is a thought experiment - and it allows the thinker free reign to do the thinking.": The article is about the Chinese room, not about thought experiments in general.
  • "I think such an illustration would hurt a less knowledgeable person's understanding of what is involved.": How? In what way? And why do you think the old image is superior in that respect?
  • Paradoctor (talk) 21:11, 23 May 2009 (UTC)
False means not true. An air of verisimilitude means the appearance of truth. It would hurt someone's understanding in many ways. One, it lends credence to the idea that a Chinese room could exist in reality and not merely as a thought experiment. But even assuming that a Chinese room could exist in reality, what is described in this article as "a book with an English version of the aforementioned computer program, along with sufficient paper, pencils, erasers and filing cabinets" would constitute a vast array of material. Such a room would look nothing at all like this picture. It would look more like the Vehicle Assembly Building filled to the roof with filing cabinets, and the human operator would not be calmly examining a piece of paper and a book but would be furiously running around accepting input and then looking up and carrying out tens of thousands of millions of instructions per second on that input. I did look at other thought experiment articles, but in my quick skim I didn't find many with illustrations. But those I did find, like Brownian ratchet, Schrödinger's cat, and Maxwell's demon, were pretty true to the actual thought experiment. This doesn't meet that test. It does look like it would be a good promo shot for Chinese Room (film), however.
On the other hand, I'm not a big fan of the current illustration either. If we assume the Chinese room is on the other side of the door, it makes sense, but that is not at all clear (until one thinks about it - of course, we are observing the Chinese room from the outside, not the inside). I wouldn't object to its removal. Dlabtot (talk) 22:39, 23 May 2009 (UTC)
To summarize your statement, and make sure I understood it:
  • The new image might lead readers to believe the Chinese room could be implemented in real life.
  • It does not illustrate all aspects of Searle's original description.
  • Even Searle's description is not adequate, because a "real" Chinese room would be vastly greater than implied in usual descriptions/visualizations.
  • The new image is not as good at its job as the illustrations in a few other articles.
  • The old image is not as good at its job as the illustrations in a few other articles.
  • You would not object to removing the old image.
Is that about correct? Please correct if not.
  • Do you intend to delete the old image?
Paradoctor (talk) 23:58, 23 May 2009 (UTC)
Ummm, no, I don't accept what you said I said as being what I meant. I actually meant what I did say. To take one example, I didn't say or mean to say that Searle's description is inadequate. I think it is perfectly adequate. Anyone with even a passing familiarity with AI and the difficulty computers have with the Turing test would realize the vast scale involved. And actually, for someone like that, this illustration does not pose a problem - they would recognize it as an abstraction. However, for the less sophisticated general reader who I think should be the target audience of Wikipedia articles, the implications inherent in this picture (but not present in Searle's description) would be misleading. The current image does not really imply anything (added:) about the nature of the workings of the Chinese room. It just shows a figure sitting in front of a door with a piece of paper on the floor. And since you are forcing me to repeat myself, I don't care if it stays or goes. If I had any intention to delete it, I would have done so. I don't really think it adds anything to the reader's understanding, but I'm balancing that against the fact that I think it makes a more visually interesting page - I like it from a layout perspective, as well as an aesthetic perspective. But it doesn't contain the misleading implications that the movie promo shot does - unless one makes the mistaken assumption that the viewer is inside the room. It is this last point that fuels my ambivalence about the current illustration. Dlabtot (talk) 00:18, 24 May 2009 (UTC)
  • Why do you keep asserting that the image is a promo shot? And even if it was, why would that be relevant?
  • So you wouldn't delete the old image because it is decorative, even though it "does not really imply anything", but you would delete the new image because it could lead readers into believing the Chinese room could be real? Paradoctor (talk) 01:03, 24 May 2009 (UTC)
Why do you keep asserting that the image is a promo shot? now I'm really confused. Is the image we are discussing not a promo shot from the movie?
If it is not, why does the file description state "from the 2009 fiction narrative feature The Chinese Room" and who is User:TheChineseRoom, and does he or she really have authority to release this image under the Creative Commons Attribution ShareAlike 3.0 License? Dlabtot (talk) 01:17, 24 May 2009 (UTC)
  • I see now the source of your confusion. A promo shot is an image specifically produced for an advertising campaign. Promo shots are by definition not part of the product. The image we're talking about is simply a frame from the film.
  • The file was uploaded by the director upon my request. I had thought that would be obvious, as I had stated more than a week ago that I had contacted him. Or did you really think the image popping up was coincidence?
  • Please reply to the second point in my previous reply. Paradoctor (talk) 01:54, 24 May 2009 (UTC)
As to the question of whether Wikipedia should act as a platform for the promotion of artistic works that have not previously been able to garner coverage in independent, published reliable sources, I would direct you to the essay, What Wikipedia is not. Dlabtot (talk) 01:26, 24 May 2009 (UTC)
  • Since nobody is promoting anything ... Paradoctor (talk) 01:54, 24 May 2009 (UTC)
I'm pretty confident that my comments expressed what I have to say pretty well. If I feel the urge to say anything else, I won't hesitate to do so. Dlabtot (talk) 01:59, 24 May 2009 (UTC)
  • Would you please state your intentions wrt the new image? Delete or ignore? Paradoctor (talk) 02:02, 24 May 2009 (UTC)
Strictly as an aside, why do you keep bulleting all of your comments? It's pretty annoying and it would be really helpful if you would just follow the same talk page conventions as everyone else. I don't have any 'intentions' with respect to anything. I don't know why you think I should be expected to pre-declare what I'm going to do based on future events. I can't evaluate my reaction to things that have not happened. On the other hand, I have stated quite clearly that I think the movie image misrepresents the Chinese room, so I'm having difficulty understanding why there is any confusion about what I think. Also, we have an established paradigm for this sort of thing. I'm not going into this with any kind of agenda, other than writing an encyclopedia. The only reason I'm watching this page, is because I happened upon it, and I saw that it was promoting this unknown, undistributed, and unreviewed movie. That was clearly in contradiction of one of our core principles. Perhaps someday this movie will achieve some notoriety and significance to this topic and at that point it would warrant inclusion in this article. Dlabtot (talk) 02:21, 24 May 2009 (UTC)
"bulleting": I bullet to separate issues in my replies, trying to avoid a wall of text. The last two replies were an overshoot. Since you find it confusing, I'll leave them out.
"I don't have any 'intentions' with respect to anything." ... "I can't evaluate my reaction to things that have not happened." ... "I'm not going into this with any kind of agenda, other than writing an encyclopedia." ... "The only reason I'm watching this page, is because I happened upon it": In your own words: "this page is not intended for commentary about me".
"On the other hand, I have stated quite clearly that I think the movie image misrepresents the Chinese room, so I'm having difficulty understanding why there is any confusion about what I think.": There is confusion because you evade a simple, direct question: Are you going to delete the new image, should it turn up on the page, or not?
  • Yes: dispute resolution, because I find your behavior in this discussion unacceptable
  • No: no problem
"Also, we have an established paradigm for this sort of thing.": As you may remember, I used that exact method to start the discussion, after you failed to reply to a request for discussion. Since we're talking now, it serves no further purpose.
"I saw that it was promoting this unknown, undistributed, and unreviewed movie." ... "Perhaps someday this movie will achieve some notoriety and significance to this topic and at that point it would warrant inclusion in this article.": This discussion is about not about the movie, it is about the image. The fact that the image is from the movie is incidental, and has nothing to with the question of whether replacing the old image with the new one improves the article. Paradoctor (talk) 17:57, 25 May 2009 (UTC)

< My two cents. I like the current image. I like the way it captures our confusion about the Chinese Room: what exactly is a "mind" and how do we know if there is one behind that door? I also think its schematic quality is appropriate for a thought-experiment. And, finally, it is well drawn, aesthetic and looks good on the page. ---- CharlesGillingham (talk) 02:31, 24 May 2009 (UTC)

I also like the way the current image tends to engage the reader in the thought experiment, I just wish it were more clear that this is a perspective from outside the room. When I first visited this page, I found the image confusing. Dlabtot (talk) 03:27, 24 May 2009 (UTC)
"When I first visited this page, I found the image confusing.": Noted. Paradoctor (talk) 22:30, 25 May 2009 (UTC)
It is ambiguous. A caption might help.
Inside also works, at least for me. He could be looking at the meaningless symbol before he picks it up and walks into his (warehouse sized) work space to produce the room's response. We could suppose he is wondering for a moment exactly what the simulated mind is talking about — he has no idea what it is thinking or saying. He can see all the details of how the program works, but he doesn't see the mind anywhere.
Anyway, I like the fact that the image is evocative. ---- CharlesGillingham (talk) 13:19, 24 May 2009 (UTC)
Sorry for taking my time replying, Charles.
"our confusion": With all due respect, this "we" does not include me. ;) Seriously though, shouldn't an illustration help lessening confusion, rather than voicing it?
"schematic quality": I agree, a schematic visualization is preferable. On the other hand, it should contain at least some identifying elements of the thought experiment, of which I see none in the old image.
"well drawn, aesthetic and looks good": Yes, aesthetics should be considered. But is that really enough in this case?
"It is ambiguous. A caption might help." ... "could be looking at" ... "We could suppose": I think if you have to explain what the image is meant to illustrate, then you don't really need it.
For comparison, I've compiled a little list of visualizations: [1] [2] [3] [4] [5] [6] [7] [8] [9]. On some pages, you have to scroll down, or look for a link to illustration/animation.
Regards, Paradoctor (talk) 17:57, 25 May 2009 (UTC)
I love an editor who does his research! These are some good sources here.
On the new image: why don't we place it in one of the other sections?
The first section could use an illustration that shows the whole system, including the books, file cabinets, eraser, Chinese speaker, etc. It should be schematic, something like the illustration for Turing test. ---- CharlesGillingham (talk) 11:07, 27 May 2009 (UTC)
"does his research": I aims to please. ;) See below for more "research".
"one of the other sections": The idea to use both hadn't crossed my mind, but it sounds good to me. Section "Chinese room thought experiment" would be appropriate.
"first section could use an illustration that shows the whole system" ... "should be schematic": If we had something like that, we wouldn't have to decide between two non-optimal choices. I compiled the non-free illustrations to provide a view of what is generally considered an appropriate depiction. By that standard, the new image is preferable.
I conducted an informal survey, with the result that all four respondents prefer the new image over the old one. Except for the preference, the comments pretty much mirror what has been said: None is optimal, abstract is preferable, the new image has more elements. Regards, Paradoctor (talk) 12:40, 27 May 2009 (UTC)
P.S.: J. A. Legris suggested using the old image for the system reply, "where the guy has internalized everything except paper and pencil". Paradoctor (talk) 13:01, 27 May 2009 (UTC)

deleted section

comment deleted per Keep on topic Paradoctor (talk) 19:18, 28 May 2009 (UTC) Likebox (talk) 16:45, 28 May 2009 (UTC)

Dlabtot, the summary of your edit undoing my removal of Likebox's comment gave as reason "do not remove the talk page comments of other editors. see WP:TALK". First, did you note that my edit summary linked to the very same policy page? Did you even bother to read it? It is true that in general, other editors' comments should be left alone. However, there are a number of exceptions to the rule. One of these exceptions is in section How to use article talk pages: "Irrelevant discussions are subject to removal.". This is restated in WP:TALKO as an explicit example of "appropriately editing others' comments": "Deleting material not relevant to improving the article (per the above subsection #How to use article talk pages)." (third item in the list). In case you need more convincing, here are more quotes from WP:TALK relevant to deleting comments:
  • "Article talk pages should not be used by editors as platforms for their personal views."
  • "it is usually a misuse of a talk page to continue to argue any point that has not met policy requirements"
  • "Editors should remove any negative material about living persons that is either unsourced, relies upon sources that do not meet standards specified in Wikipedia:Reliable sources or is a conjectural interpretation of a source."
  • "Talk pages are for discussing the article, not for general conversation about the article's subject (much less other subjects). Keep discussions on the topic of how to improve the associated article."
  • "Article talk pages should be used to discuss ways to improve an article; not to criticize, pick apart, or vent about the current status of an article or its subject."
  • "Talk pages are not a forum for editors to argue their own different points of view about controversial issues."
If you don't like the policy, change it. Regards, Paradoctor (talk) 19:18, 28 May 2009 (UTC)
I'm sorry, but it is clearly against our policy for you to arbitrarily and unilaterally decide which comments belong on talk pages, and no amount of wikilawyering or creative reading of the policy will change that. Please refrain from doing so in the future. Dlabtot (talk)

Computer Haters

The insipidness of this argument suggests that it should be treated like "flat earth". It is a transparent appeal to the limited imagination of the computer illiterate, who imagine that an unintelligent program of relatively small size, with no real mental states, could pass a Turing test. That's not going to work, because the number of people who are that illiterate about computers keeps shrinking.

The sad thing is that this article presents Searle's argument as correct! What? But, yes, time will make this type of nonsense look as foolish to everyone as it does to those who _actually_ study brains and computers. — [Unsigned comment added by 134.84.0.116 (talkcontribs).]

The annoying thing is that this argument was so influential in philosophy, and probably still is. Like flat earth, when something is this transparently wrong, nobody bothers to refute it. Likebox (talk) 16:45, 28 May 2009 (UTC)

Although I agree with your analysis of Searle's argument, because of its notability and currency, it deserves the substantial coverage it has here, notwithstanding the ignorance demonstrated by its premises. Dlabtot (talk) 18:17, 28 May 2009 (UTC)
Who did it influence? The DOD still spends millions on AI research. The NIH spends even more money on brain research. None of these people cared about Searle's argument; only philosophers cared. — [Unsigned comment added by 134.84.0.116 (talkcontribs).]
Do you have a point? Dlabtot (talk) 22:51, 16 November 2009 (UTC)
I think the article makes it clear that the "overwhelming majority" of scholars think that Searle is "dead wrong" (see Chinese room#History). ---- CharlesGillingham (talk) 15:49, 29 May 2009 (UTC)
No it doesn't; it makes it look like his position is entirely relevant and correct. When 95% of the material is justifying his position and when there isn't a section titled "what most scientists think" or "why most people believe it's false" then it reeks of bias and seems like it's being presented as fact. — [Unsigned comment added by 134.84.0.116 (talkcontribs).]
Look, these types of comments by me are for the talk page, so that people will know where Searle's argument stands, really. The source I found for this type of obvious statement is pretty lousy, and given the "flat earth" problem, there aren't many better ones. Likebox (talk) 16:52, 29 May 2009 (UTC)
I want to say that I am well aware that I am "POV pushing" in this regard, but the nature of Wikipedia requires a certain amount of oppositional debate, so that the consensus can be fair to the entire world of views out there. Likebox (talk) 17:05, 29 May 2009 (UTC)
If the sources are lousy, then the statement doesn't belong here, and there is no need to discuss it. Sources is what WP:V is all about, see also the first pillar.
I am "POV pushing": I deleted the comment because the statements in it are not derived from the reliable literature on the Chinese room. "POV pushing", as you call it, is ok as long as you provide citations and discuss it in a civil and responsive manner. Everybody has their own opinion on the article's subject, but the talk page discusses opinions on the subject only wrt the questions: "Is it in the literature?" and "How prominent is it in the literature?". Regards, Paradoctor (talk) 09:50, 30 May 2009 (UTC)
Bad sources do not mean it doesn't belong. Bad sources means "look for more sources". Many views which are commonly or nearly universally held, like "the Earth is not flat", have bad sourcing, because the view is too obvious to state directly.
Obvious falsehoods, like Searle's argument, need to be called out, so that people will understand why there are so few rebuttals. It's too easy to refute, so you're mostly on your own. You shouldn't treat this type of argument on the same level as something which actually withstands close scrutiny, like, say, the Turing test. Likebox (talk) 13:04, 30 May 2009 (UTC)
Bad sources means "look for more sources".: Sure, when there is hope that those can be found. But you do not have bad sources for your statement, you have none. I already pointed out to you that Thornley talks about a statement different from yours.
"the view is too obvious to state directly": Spherical Earth
"there are so few rebuttals": Are you yanking my chain? The majority of the literature is critical of Searle's argument. I quoted the article's History section to you in "Searle's assumption", here it is again, for your convenience: "Most of the discussion consists of attempts to refute it.". It has been called out, again and again, and this is reflected in the article. There is nothing unclear about it.
"shouldn't treat this type of argument on the same level": What are you talking about? Deleting the article? Reducing it to a stub? What is your proposal? So far, you have only an unsubstantiated, untenable claim about Searle's assumptions, and the patently wrong idea that there is not enough literature criticizing Searle's argument. Regards, Paradoctor (talk) 13:42, 30 May 2009 (UTC)
What I wanted was only to make sure that the article doesn't give Searle the last word in the rebuttals section. It's not such a big deal. I also think that there might be a way to find sources for a wider context to Searle's argument, as an '80s reaction to '60s positivism. Likebox (talk) 14:20, 30 May 2009 (UTC)
The section's name is Replies, not rebuttals.
"make sure that the article doesn't give Searle the last word": Find sources containing appropriate rejoinders.
"there might be a way to find sources": Yes, it's called literature search. Paradoctor (talk) 14:36, 30 May 2009 (UTC)

Positivism, Skinnerism, Chomsky's response, Searle's Argument

There's a thread of intellectual history which frames Searle's argument well, and which I think explains why it is so prominent. Searle's argument is an attack on a positivist position: Turing's idea that the existence of a mind can be determined entirely by asking questions and getting answers on a remote typing machine. This idea is now completely obvious, because we often interact with other minds this way, and anyone can spot a bot a mile away. But originally, this was a positivist position: an abstract thing, like a mind, is given meaning by an observational test.

Skinner took this type of positivism one level further and identified input/output behavior with the content of the mind. In his view you don't need to postulate internal states, transitions, or any internal structure; you can just talk about the output as a direct function of the input. This ultra-radical positivist position has a physics analog too: it's like S-matrix theory. These ultra-positivist positions are at the limits of reason.

The ultra-positivist Skinner position was attacked successfully, and the most prominent attacker, for me, is Chomsky. Chomsky says that you can conclude that there are certain computational structures inside minds, like a last-in, first-out stack, just from looking at the structure of natural language. That's true, and I only know it's true for the same reason Chomsky originally gave: because it's true of computer programs. Whenever you see a computer program that processes something that looks like natural language, it has to have a stack inside, to keep track of the verb/subject/object type of each clause as they go flying by.
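To make the stack point concrete, here is a minimal Python sketch. It's a toy of my own, not from Chomsky or the AI literature: parentheses stand in for clause openings and closings, so a center-embedded sentence like "The man (whom the dog (that barked) bit) ran" has the nesting pattern "(())". Recognizing such nesting at arbitrary depth needs exactly last-in, first-out memory:

    # Toy example: checking nested clause boundaries requires a stack.
    def well_nested(tokens):
        stack = []
        for tok in tokens:
            if tok == "(":        # a clause opens: remember the pending context
                stack.append(tok)
            elif tok == ")":      # a clause closes: it must match the most recent opener
                if not stack:
                    return False  # a clause closed that never opened
                stack.pop()
        return not stack          # well-formed iff every opened clause was closed

    print(well_nested(list("(())")))  # True: properly nested
    print(well_nested(list("(()")))   # False: one clause never closes

No fixed table of input/output pairs can do this at unbounded depth; the stack is precisely the kind of internal structure the pure input/output picture denies.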

Skinnerism would say "you can't say there's a stack. That's speculation", just as an S-matrix theorist would say "Quarks are just a figment of the theorists' imagination". These people would deny any internal states, even when those states help explain input/output relationships and so would be acceptable under ordinary positivism. This position is discredited today.

So into this debate steps Searle. He is writing at the nadir of positivism, in 1980. His argument? Every computer program is just a Skinner machine! All computer programs are input-output objects, and none of them have stuff inside their heads. We have stuff inside our heads, so we can't be programs. Notice that this is strange, because it was the internal structures of computer programs that Chomsky used to discredit Skinner. His article is written to make a general computer program into a rulebook for input/output behavior, which you reproduce by following the rules and jotting down a few notes.

This argument is a perfect window into the '70s and '80s. It was all just one big knee-jerk reaction against positivism. The behaviorists were thrown out of the psychology departments, and the S-matrix theorists were thrown out of physics. So I think this explains why Searle's argument was so popular. But the arguments against positivism in general are not as strong as the arguments against Skinnerism. Likebox (talk) 14:20, 30 May 2009 (UTC)

Sources. Where are the sources? This comment contains nothing we can use without sources. Provide sources! Paradoctor (talk) 14:31, 30 May 2009 (UTC)
This analysis is basically correct. In the 60s, behaviorism was replaced by functionalism and computationalism, and these are the positions Searle is attacking. These positions allow for some internal structure. See also the Harnad reference for this article, which emphasizes the role of "internal" structure. ---- CharlesGillingham (talk) 17:50, 30 May 2009 (UTC)
"This analysis is basically correct.": WP:V means that "readers are able to check that material added to Wikipedia has already been published by a reliable source" (my emphasis). Is that the case here?
"the Harnad reference": How does that help any reader? Regards, Paradoctor (talk) 19:50, 30 May 2009 (UTC)
If I knew what the sources were, I'd provide them. I'm basically fishing for sources. Thanks to CharlesGillingham for pointers. Likebox (talk) 03:19, 31 May 2009 (UTC)
I'm not sure what your point is, Paradoctor. This is a talk page. Likebox's analysis isn't in the article. WP:V applies to article text, not talk page text. As you can plainly see, this article is very carefully referenced and makes extensive use of page-numbered citations. This article has obviously been written with a deep respect for WP:V.
When I said "this is basically correct", I should have said "What you say makes sense to me, although I would take this metahistory one step further to functionalism." I don't think Likebox's essay belongs in the article, and he's not saying he thinks it belongs in the article. I personally think Likebox is trying to understand why an experiment he despises has become so important. I think he's considering an historical explanation, and it seems to me this is a good place to start. I'm suggesting a few more places for him to look -- functionalism, computationalism and Stevan Harnad. We're both trying to deepen our understanding of this article's topic.
With a philosophy article, it's helpful if all the editors of an article actually understand the topic of the article. Sometimes that requires us to discuss the article's topic a bit here, before any of us make an edit. I don't object to Likebox mulling over this topic before contributing. I think it's helpful. Everything else is Wikilawyering. ---- CharlesGillingham (talk) 17:07, 31 May 2009 (UTC)
I give up. It's not worth it. Do whatever you think is right. Paradoctor (talk) 17:41, 31 May 2009 (UTC)
  1. ^ source 1
  2. ^ source 2
  3. ^ source 3
  4. ^ source 4
  5. ^ source 5
  6. ^ source 1, source 2, source 3, source 4
  7. ^ Boullet 1984