[If people think this should be moving to the philosophy-fun mailing list, let me know, and I'll stop posting. But this post at least relates the discussion to philosophy of mathematics.] On Wed, Mar 25, 2009 at 7:00 PM, Dan Asimov <dasimov@earthlink.net> wrote:
> Ever since I learned from Dennett's "Understanding Consciousness" that he does not believe in the existence of qualia (aka conscious experience*), I have seriously wondered whether it's worth the trouble to read anything else he wrote.
My view is that philosophers often worry too much about ontology (which things "exist" and which don't), when it really isn't a very interesting or important question; it's just the nature of our language that misleads us into thinking it is.

Consider the issue, once a controversial one among mathematicians, of whether complex numbers "really exist" or are just a "useful fiction". With a little work, I can probably manage to come up with two different axiom systems (call them R and C), with the following properties:

1. The values of variables in R are real numbers; that is, the real numbers are a model for R.

2. The values of variables in C are complex numbers; that is, the complex numbers are a model for C.

3. The language of R is sufficiently rich to talk not only about numbers, but about such things as ordered pairs of numbers, and the language of C is rich enough to say "The imaginary part of this number is 0".

4. There is a simple mechanical way to transform any statement in R into an "equivalent" statement in C, and vice versa (using the ordered pairs for the C->R translation, and the "is real" predicate for the R->C translation), and it can be demonstrated that a statement in one language is provable just if the "equivalent" statement in the other language is provable.

It's clear that in axiom system C, complex numbers *do* exist, and in axiom system R, complex numbers *don't* exist. But the question of which axiom system is the "right" one, when they are equally powerful, is a meaningless one; exactly the same things can be said and proved in the world where complex numbers "do exist" and in the world where they "don't exist", so it's not a meaningful or interesting distinction.
It's my belief that the same is true in the ontology of the real world; the question of which things "really exist", and which things "don't really exist" (so that meaningful statements that seem to be about them are actually about other things that do exist), isn't an interesting or useful one.

Here's a thought experiment that makes me sympathetic to Dennett's views on qualia. Consider the classic "inverted spectrum" example that everyone reinvents on their own at some point. Could it be that the color I see as red, you see as green, and vice versa? If you believe in qualia, this is a straightforward and meaningful question: does "my qualia of red" more closely resemble "your qualia of red" or "your qualia of green"? If this question is actually meaningless, then it casts some doubt in my mind on the usefulness of the concept of "my qualia of green", or of qualia more generally (which those who care more about ontology than I do would phrase as "whether qualia 'really exist'").

Suppose I ask the question, not about you and me, but about two AIs, R and G, who, like everyone else, have the inverted spectrum idea occur to them at some point. When R asks me "So do I have the same qualia as G, or are they inverted?", it occurs to me that I can just read the code and find out. Sure enough, the visual input system of R represents reddish colors as small numbers and greenish colors as large numbers, while the visual input system of G represents greenish colors as small numbers and reddish colors as large ones. So I am about to tell R that they do in fact have inverted spectra.

But I've gotten interested in how the visual systems of these programs work, so I examine them in more detail. The first thing that happens to these input signals is that they go through a transform that reduces noise by an averaging mechanism. But R has an inverter as part of this process, so the output of this stage, for both R and G, is one where large numbers mean "red".
So now I'm about to tell R that their qualia spectra aren't inverted after all. But then I discover that the output of this noise-reduction system is the input to a system that adjusts the colors to remove the effects of the color of ambient light. This system has an inverter in G and not in R, so its output is one where small numbers mean green to R and red to G. So now I'm about to tell R that their spectra *are* inverted after all. But then I look at the system that processes the output of *this* stage, and there's another inverter in G but not in R, and so forth.

How is the inverted spectrum question meaningful? Which of these levels is the one that represents the qualia of the programs? I think the answer is "the one that the little homunculus of consciousness is looking at", which isn't meaningful, because consciousness isn't a little guy hiding inside your (or the AI's) brain, watching the world unfold on a screen.
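(To make the regress concrete, here's a toy Python sketch of the processing chains described above; the stage names and numbers are all hypothetical. Color intensity is a number in [0, 255], an "inverter" maps x to 255 - x, and the answer to "are the spectra inverted?" flips at each stage.)

```python
def invert(x):
    """A stage-level inverter: flips the intensity scale."""
    return 255 - x

# Stage 0, raw sensor input for a pure-red stimulus:
# R's sensors use small numbers for red; G's use large numbers.
r_signal, g_signal = 10, 245          # disagree -- "inverted"

# Stage 1, noise reduction: R's version includes an inverter, G's doesn't.
r_signal = invert(r_signal)           # R: 10 -> 245
                                      # G: stays 245; agree -- "not inverted"

# Stage 2, ambient-light correction: inverter in G, not in R.
g_signal = invert(g_signal)           # G: 245 -> 10
                                      # disagree again -- "inverted"

print(r_signal, g_signal)             # 245 10
```

No stage in the chain is privileged, so there is no fact of the matter about which stage's agreement or disagreement counts as "the" comparison of qualia.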
(He later summarized his argument in "Quining Qualia", online at < http://ase.tufts.edu/cogstud/papers/quinqual.htm >.)
When I was at Harvard, Dennett gave a talk on this subject, which I believe he gave at Quincy House just so he could give the talk the title "Quining Qualia at Quincy" -- Andy.Latto@pobox.com