Re: [math-fun] quantum theory foundational issues, my theory of how they should be resolved
It is unlike anything whatsoever that is covered by physics. As far as physics is concerned (and I am not blaming physics for this), the world could just be a totally insensate machine that follows physical laws but feels nothing.
Could it? Is philosopher's zombie possible? It seems to me unlikely that one could construct, grow, or otherwise have something that looks and acts and is physically like a human being but has no subjective experience.
Making something look like a human is not difficult. Nor is making something `physically like' a human being. The only one of those things that is actually hard to replicate is human behaviour.

We can simulate neural networks on computers, and they're becoming gradually more intelligent as time progresses. For instance, I think they've been able to design electronic circuits and produce art, amongst other things. Together with natural language processing and database accessing (such as Wolfram Alpha), and knowledge acquisition (such as IBM Watson), it wouldn't surprise me if a machine passes the Turing test within the next decade or so.

Also, computers are provably not conscious, since they can be emulated by Turing machines, which are obviously not conscious*.

* If they were conscious, then _everything_ of sufficient complexity would have the capability of consciousness. And as Geoff Xia showed us, the n-body problem enables particles to be projected to infinity in finite time, one corollary of which is Turing-completeness, so `sufficient complexity' means `a few elementary particles'.

Sincerely,

Adam P. Goucher
http://cp4space.wordpress.com
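Adam's premise that computers "can be emulated by Turing machines" is standard computability theory. For concreteness, here is a minimal Turing-machine simulator in Python; the simulator is generic, while the rule table (a binary incrementer) is an invented example, not anything from the thread.

```python
# Minimal Turing machine simulator: a finite rule table plus an unbounded
# tape. This is the sense in which any conventional computer "can be
# emulated by a Turing machine" -- the rule table below is just a toy.

def run_tm(rules, tape, state="start", head=0, max_steps=10_000):
    """rules maps (state, symbol) -> (new_state, new_symbol, move)."""
    cells = dict(enumerate(tape))            # sparse tape; "_" is blank
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, "_")
        state, cells[head], move = rules[(state, symbol)]
        head += move
    return "".join(cells.get(i, "_") for i in range(min(cells), max(cells) + 1))

# Invented example machine: increment a binary number written on the tape
# (least-significant bit at the right end).
rules = {
    ("start", "0"): ("start", "0", +1),  # scan right to the end
    ("start", "1"): ("start", "1", +1),
    ("start", "_"): ("carry", "_", -1),  # past the end; back up with a carry
    ("carry", "1"): ("carry", "0", -1),  # 1 + carry = 0, propagate the carry
    ("carry", "0"): ("halt",  "1", -1),  # 0 + carry = 1, done
    ("carry", "_"): ("halt",  "1", -1),  # carry past the top digit
}

print(run_tm(rules, "1011"))  # -> 1100_ (binary 1011 + 1 = 1100)
```

Whether such a machine could ever be conscious is, of course, exactly the point disputed below.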
Adam, what is the connection between emulating human behavior (or the vaguely defined Turing test) on the one hand, and Turing completeness on the other? --Dan On 2013-08-04, at 6:11 AM, Adam P. Goucher wrote:
It is unlike anything whatsoever that is covered by physics. As far as physics is concerned (and I am not blaming physics for this), the world could just be a totally insensate machine that follows physical laws but feels nothing.
Could it? Is philosopher's zombie possible? It seems to me unlikely that one could construct, grow, or otherwise have something that looks and acts and is physically like a human being but has no subjective experience.
Making something look like a human is not difficult. Nor is making something `physically like' a human being. The only one of those things that is actually hard to replicate is human behaviour.
We can simulate neural networks on computers, and they're becoming gradually more intelligent as time progresses. For instance, I think they've been able to design electronic circuits and produce art, amongst other things. Together with natural language processing and database accessing (such as Wolfram Alpha), and knowledge acquisition (such as IBM Watson), it wouldn't surprise me if a machine passes the Turing test within the next decade or so.
Also, computers are provably not conscious, since they can be emulated by Turing machines, which are obviously not conscious*.
* If they were conscious, then _everything_ of sufficient complexity would have the capability of consciousness. And as Geoff Xia showed us, the n-body problem enables particles to be projected to infinity in finite time, one corollary of which is Turing-completeness, so `sufficient complexity' means `a few elementary particles'.
On Aug 4, 2013 8:08 AM, "Dan Asimov" <dasimov@earthlink.net> wrote:
Adam, what is the connection between emulating human behavior (or the
vaguely defined Turing test) on the one hand, and Turing completeness on the other? And why should we care about the Turing test? Humans are notorious for anthropomorphizing everything; it's as though we're programmed to be deluded about that.
--Dan
On 2013-08-04, at 6:11 AM, Adam P. Goucher wrote:
It is unlike anything whatsoever that is covered by physics. As far
as physics is concerned (and I am not blaming physics for this), the world could just be a totally insensate machine that follows physical laws but feels nothing.
Could it? Is philosopher's zombie possible? It seems to me unlikely that one could construct, grow, or otherwise have something that looks and acts and is physically like a human being but has no subjective experience.
Making something look like a human is not difficult. Nor is making something `physically like' a human being. The only one of those things that is actually hard to replicate is human behaviour.
We can simulate neural networks on computers, and they're becoming gradually more intelligent as time progresses. For instance, I think they've been able to design electronic circuits and produce art, amongst other things. Together with natural language processing and database accessing (such as Wolfram Alpha), and knowledge acquisition (such as IBM Watson), it wouldn't surprise me if a machine passes the Turing test within the next decade or so.
Also, computers are provably not conscious, since they can be emulated by Turing machines, which are obviously not conscious*.
* If they were conscious, then _everything_ of sufficient complexity would have the capability of consciousness. And as Geoff Xia showed us, the n-body problem enables particles to be projected to infinity in finite time, one corollary of which is Turing-completeness, so `sufficient complexity' means `a few elementary particles'.
On Aug 4, 2013 7:13 AM, "Adam P. Goucher" <apgoucher@gmx.com> wrote:
It is unlike anything whatsoever that is covered by physics. As far
as physics is concerned (and I am not blaming physics for this), the world could just be a totally insensate machine that follows physical laws but feels nothing.
Could it? Is philosopher's zombie possible? It seems to me unlikely
that one could
construct, grow, or otherwise have something that looks and acts and is physically like a human being but has no subjective experience.
Making something look like a human is not difficult. Nor is making something `physically like' a human being. The only one of those things that is actually hard to replicate is human behaviour.
We can simulate neural networks on computers, and they're becoming gradually more intelligent as time progresses. For instance, I think they've been able to design electronic circuits and produce art, amongst other things. Together with natural language processing and database accessing (such as Wolfram Alpha), and knowledge acquisition (such as IBM Watson), it wouldn't surprise me if a machine passes the Turing test within the next decade or so.
Also, computers are provably not conscious, since they can be emulated by Turing machines, which are obviously not conscious*.
It's not obvious to me.
* If they were conscious, then _everything_ of sufficient complexity would have the capability of consciousness.
It's called the "dancing pixies problem" in the literature. I frankly don't see why it's a problem: consciousness isn't worth much without memory and the rest of the functions of the brain. We don't think there's much to the experience of being an ant, much less an amoeba or a virus; why then worry about the experience inherent in a randomly chosen physical process?
And as Geoff Xia showed us, the n-body problem enables particles to be projected to infinity in finite time, one corollary of which is Turing-completeness, so `sufficient complexity' means `a few elementary particles'.
Sincerely,
Adam P. Goucher
On 4 Aug 2013, at 14:11, Adam P. Goucher wrote:
Also, computers are provably not conscious, since they can be emulated by Turing machines, which are obviously not conscious*.
I disagree strongly. Why are "Turing machines" obviously not conscious? Turing machines *with no method of sensing the outside world* may well be "obviously not conscious", but give the computer the programmed ability to self-evolve and sense the rest of existence - what then?

The meaning and purpose of life is to give life purpose and meaning. The instigation of violence indicates a lack of spirituality.
On 8/4/2013 6:11 AM, Adam P. Goucher wrote:
It is unlike anything whatsoever that is covered by physics. As far as physics is concerned (and I am not blaming physics for this), the world could just be a totally insensate machine that follows physical laws but feels nothing.

Could it? Is philosopher's zombie possible? It seems to me unlikely that one could construct, grow, or otherwise have something that looks and acts and is physically like a human being but has no subjective experience.

Making something look like a human is not difficult. Nor is making something `physically like' a human being. The only one of those things that is actually hard to replicate is human behaviour.
We can simulate neural networks on computers, and they're becoming gradually more intelligent as time progresses. For instance, I think they've been able to design electronic circuits and produce art, amongst other things. Together with natural language processing and database accessing (such as Wolfram Alpha), and knowledge acquisition (such as IBM Watson), it wouldn't surprise me if a machine passes the Turing test within the next decade or so.
So you think we could eventually build a robot that exhibited behavior which we would consider human-like and indicative of consciousness - but the robot wouldn't be conscious. What if we replaced the neurons in your brain, one by one, with functionally identical input-output artificial components, so that your behavior was unchanged? Do you think you would gradually lose consciousness?
Also, computers are provably not conscious, since they can be emulated by Turing machines, which are obviously not conscious*.
* If they were conscious, then _everything_ of sufficient complexity would have the capability of consciousness. And as Geoff Xia showed us, the n-body problem enables particles to be projected to infinity in finite time, one corollary of which is Turing-completeness, so `sufficient complexity' means `a few elementary particles'.
Whenever anyone writes "obviously" my B.S. meter quivers. I don't think either of your last two paragraphs is right.

First, whether a Turing machine is conscious would depend on what program it is running. Does it have enough self-reflection to prove Gödel's incompleteness theorem? Does it create a narrative memory? I'm not sure exactly what program instantiates consciousness, but I very much doubt it's just a matter of "complexity", however that's measured.

Second, "sufficient" and "capable" don't mean "has". Human brains are complex, but so are a lot of things. Brains evolved to be engines of prediction; n-body problems didn't. And in any case there are kinds and levels of consciousness. I think human-like consciousness requires language. But my dogs are conscious in a different way without language, and so are the koi and crayfish in my pond.

Brent
here is a (presumably hopeless) attempt to define consciousness.

it's a cruel world that wants to eat you. consciousness seems to me to be an adaptive darwinian illusion that organizes your ultimately biological responses to threats in full consonance with the laws of physics, and favors the transmission of your genes to future generations. although some debate it, it seems clear to me that animals have something like it, and even insects, or bacteria, or even a finite automaton transducer that keeps a simple state somehow in memory and responds to external stimuli in consonance with physics. and just as they might have an ever more primitive consciousness than ours, it seems just as likely that much "higher" forms could exist, also.

On Sunday, August 4, 2013, meekerdb wrote:
On 8/4/2013 6:11 AM, Adam P. Goucher wrote:
It is unlike anything whatsoever that is covered by physics. As far as
physics is concerned (and I am not blaming physics for this), the world could just be a totally insensate machine that follows physical laws but feels nothing.
Could it? Is philosopher's zombie possible? It seems to me unlikely that one could construct, grow, or otherwise have something that looks and acts and is physically like a human being but has no subjective experience.
Making something look like a human is not difficult. Nor is making something `physically like' a human being. The only one of those things that is actually hard to replicate is human behaviour.
We can simulate neural networks on computers, and they're becoming gradually more intelligent as time progresses. For instance, I think they've been able to design electronic circuits and produce art, amongst other things. Together with natural language processing and database accessing (such as Wolfram Alpha), and knowledge acquisition (such as IBM Watson), it wouldn't surprise me if a machine passes the Turing test within the next decade or so.
So you think we could eventually build a robot that exhibited behavior which we would consider human-like and indicative of consciousness - but the robot wouldn't be conscious. What if we replaced the neurons in your brain, one by one, with functionally identical input-output artificial components, so that your behavior was unchanged? Do you think you would gradually lose consciousness?
Also, computers are provably not conscious, since they can be emulated by Turing machines, which are obviously not conscious*.
* If they were conscious, then _everything_ of sufficient complexity would have the capability of consciousness. And as Geoff Xia showed us, the n-body problem enables particles to be projected to infinity in finite time, one corollary of which is Turing-completeness, so `sufficient complexity' means `a few elementary particles'.
Whenever anyone writes "obviously" my B.S. meter quivers. I don't think either of your last two paragraphs is right. First, whether a Turing machine is conscious would depend on what program it is running. Does it have enough self-reflection to prove Gödel's incompleteness theorem? Does it create a narrative memory? I'm not sure exactly what program instantiates consciousness, but I very much doubt it's just a matter of "complexity", however that's measured.
Second, "sufficient" and "capable" don't mean "has". Human brains are complex, but so are a lot of things. Brains evolved to be engines of prediction, n-body problems didn't. And in any case there are kinds and levels of consciousness. I think human-like consciousness requires language. But my dogs are conscious in a different way without language and are the koi and crayfish in my pond.
Brent
-- Thane Plambeck tplambeck@gmail.com http://counterwave.com/
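Thane's "finite automaton transducer that keeps a simple state somehow in memory and responds to external stimuli" can be written down exactly. A minimal Mealy-machine sketch in Python; the states, stimuli, and responses are all invented for illustration.

```python
# Toy finite-state transducer (Mealy machine): one piece of remembered
# state, and a response to each external stimulus that depends on that
# state. The behavioral repertoire here is deliberately tiny.

TRANSITIONS = {
    # (state, stimulus) -> (next_state, response); a partial, toy table
    ("calm",    "food"):     ("calm",    "approach"),
    ("calm",    "predator"): ("alarmed", "freeze"),
    ("alarmed", "predator"): ("alarmed", "flee"),
    ("alarmed", "food"):     ("calm",    "approach"),
}

def respond(stimuli, state="calm"):
    for stimulus in stimuli:
        state, response = TRANSITIONS[(state, stimulus)]
        yield response

print(list(respond(["food", "predator", "predator", "food"])))
# -> ['approach', 'freeze', 'flee', 'approach']
```

On Thane's account, the open question is where, on the continuum from this four-line table up to a brain, anything like experience appears.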
When you call consciousness an illusion, the concept of an illusion already presumes that consciousness is a known concept. So this seems circular to me. --Dan On 2013-08-04, at 4:49 PM, Thane Plambeck wrote:
here is a (presumably hopeless) attempt to define consciousness.
it's a cruel world that wants to eat you. consciousness seems to me to be an adaptive darwinian illusion that organizes your ultimately biological responses to threats in full consonance with the laws of physics, and favors the transmission of your genes to future generations. although some debate it, it seems clear to me that animals have something like it, and even insects, or bacteria, or even a finite automaton transducer that keeps a simple state somehow in memory and responds to external stimuli in consonance with physics. and just as they might have an ever more primitive consciousness than ours, it seems just as likely that much "higher" forms could exist, also.
No - an "illusion" could simply be erroneous effect of data - what the "conscious" thing senses is not necessarily what's there - no presumption required. On 5 Aug 2013, at 01:31, Dan Asimov wrote:
When you call consciousness an illusion, the concept of an illusion already presumes that consciousness is a known concept. So this seems circular to me.
--Dan
On 2013-08-04, at 4:49 PM, Thane Plambeck wrote:
here is a (presumably hopeless) attempt to define consciousness.
it's a cruel world that wants to eat you. consciousness seems to me to be an adaptive darwinian illusion that organizes your ultimately biological responses to threats in full consonance with the laws of physics, and favors the transmission of your genes to future generations. although some debate it, it seems clear to me that animals have something like it, and even insects, or bacteria, or even a finite automaton transducer that keeps a simple state somehow in memory and responds to external stimuli in consonance with physics. and just as they might have an ever more primitive consciousness than ours, it seems just as likely that much "higher" forms could exist, also.
What I'm saying is that if illusion means a perception that does not accurately reflect reality, then defining consciousness as a certain type of illusion implies that consciousness is defined to mean a certain type of perception. But (in the sense that interests me) consciousness means perception. So this would be defining perception as a certain type of perception -- hence I see that definition as circular. --Dan On 2013-08-05, at 2:27 AM, David Makin wrote:
No - an "illusion" could simply be erroneous effect of data - what the "conscious" thing senses is not necessarily what's there - no presumption required.
On 5 Aug 2013, at 01:31, Dan Asimov wrote:
When you call consciousness an illusion, the concept of an illusion already presumes that consciousness is a known concept. So this seems circular to me.
--Dan
On 2013-08-04, at 4:49 PM, Thane Plambeck wrote:
here is a (presumably hopeless) attempt to define consciousness.
it's a cruel world that wants to eat you. consciousness seems to me to be an adaptive darwinian illusion that organizes your ultimately biological responses to threats in full consonance with the laws of physics, and favors the transmission of your genes to future generations. although some debate it, it seems clear to me that animals have something like it, and even insects, or bacteria, or even a finite automaton transducer that keeps a simple state somehow in memory and responds to external stimuli in consonance with physics. and just as they might have an ever more primitive consciousness than ours, it seems just as likely that much "higher" forms could exist, also.
P.S. For the concept of consciousness that I'm most interested in, there is no way that a conscious experience (a.k.a. an experience) can be an illusion. Because it is what it is. It can be an illusion only when it is interpreted and is discrepant with some notion of reality.

But I'm not thinking of consciousness as having an interpretation attached. (That is a perfectly reasonable thing to think about. Just not what I'm thinking of.)

(Of course, interpreting one's own conscious experiences is a conscious experience on its own, and likewise, is what it is.)

--Dan

On 2013-08-04, at 4:49 PM, Thane Plambeck wrote:

-----
here is a (presumably hopeless) attempt to define consciousness. it's a cruel world that wants to eat you. consciousness seems to me to be an adaptive darwinian illusion that organizes your ultimately biological responses to threats in full consonance with the laws of physics, . . .
-----
Without disputing Dan Asimov's assertion that consciousness cannot be an illusion (by which he means, you can't be mistaken in thinking yourself to be conscious), there is a tenable position that illusion is at the core of what we call consciousness. This position becomes even more plausible if we replace "illusion" by "limitations of perception and knowledge". E.g., our limited understanding of ourselves could be at the core of our illusion of having free will (understood as the belief that two incompatible actions A and B that we can imagine ourselves taking are both actions of which we are capable). Jim Propp On Sunday, August 4, 2013, Dan Asimov <dasimov@earthlink.net> wrote:
P.S. For the concept of consciousness that I'm most interested in, there is no way that a conscious experience (a.k.a. an experience) can be an illusion.
Because it is what it is. It can be an illusion only when it is interpreted and is discrepant with some notion of reality.
But I'm not thinking of consciousness as having an interpretation attached. (That is a perfectly reasonable thing to think about. Just not what I'm thinking of.)
(Of course, interpreting one's own conscious experiences is a conscious experience on its own, and likewise, is what it is.)
--Dan
On 2013-08-04, at 4:49 PM, Thane Plambeck wrote: -----
here is a (presumably hopeless) attempt to define consciousness.
it's a cruel world that wants to eat you. consciousness seems to me to be an adaptive darwinian illusion that organizes your ultimately biological responses to threats in full consonance with the laws of physics, . . .
-----
That's not what I meant. What I meant by that is: For the definition of consciousness that interests me (conscious awareness, or simply put, experiences), a conscious experience is whatever it is -- irrespective of the extent that it reflects reality.

This has nothing to do with the concept of a self.

To be clear(er): If you are having a dream, you are having consciousness that simply is what it is. In that respect, the dream is part of absolute reality, just as are any perceptions from the five senses or perceptions originating in one's own mind.

(The use of "you" and "one" is to avoid the complications that arise when the self is omitted from such a discussion.)

--Dan

On 2013-08-04, at 7:56 PM, James Propp wrote:
Dan Asimov's assertion that consciousness cannot be an illusion (by which he means, you can't be mistaken in thinking yourself to be conscious)
Dan, Sorry for the misinterpretation, and thanks for the clarification! Jim On Sunday, August 4, 2013, Dan Asimov <dasimov@earthlink.net> wrote:
That's not what I meant. What I meant by that is: For the definition of consciousness that interests me (conscious awareness, or simply put, experiences), a conscious experience is whatever it is -- irrespective of the extent that it reflects reality.
This has nothing to do with the concept of a self.
To be clear(er): If you are having a dream, you are having consciousness that simply is what it is. In that respect, the dream is part of absolute reality, just as are any perceptions from the five senses or perceptions originating in one's own mind.
(The use of "you" and "one" is to avoid the complications that arise when the self is omitted from such a discussion.)
--Dan
On 2013-08-04, at 7:56 PM, James Propp wrote:
Dan Asimov's assertion that consciousness cannot be an illusion (by which he means, you can't be mistaken in thinking yourself to be conscious)
On 8/4/2013 8:17 PM, Dan Asimov wrote:
That's not what I meant. What I meant by that is: For the definition of consciousness that interests me (conscious awareness, or simply put, experiences), a conscious experience is whatever it is -- irrespective of the extent that it reflects reality.
This has nothing to do with the concept of a self.
To be clear(er): If you are having a dream, you are having consciousness that simply is what it is. In that respect, the dream is part of absolute reality, just as are any perceptions from the five senses or perceptions originating in one's own mind.
(The use of "you" and "one" is to avoid the complications that arise when the self is omitted from such a discussion.)
Your brain is making up a model of reality all the time, filling it in and predicting it based on perception and memory. Your 'self' is just part of that model. When you're asleep and receiving little or no perceptive input, your brain just runs "open-loop" and creates a model with no anchor to the world. At least, no immediate anchor. I think you have to have learned about the world in order to dream about it.

Brent
On 8/4/2013 7:56 PM, James Propp wrote:
Without disputing Dan Asimov's assertion that consciousness cannot be an illusion (by which he means, you can't be mistaken in thinking yourself to be conscious), there is a tenable position that illusion is at the core of what we call consciousness. This position becomes even more plausible if we replace "illusion" by "limitations of perception and knowledge". E.g., our limited understanding of ourselves could be at the core of our illusion of having free will (understood as the belief that two incompatible actions A and B that we can imagine ourselves taking are both actions of which we are capable).
Sure. Daniel Dennett makes that point when challenged to explain how a computer could have free will. He says it's very simple: first you program the computer to make intelligent decisions. And then you program it so that when it's asked how it arrived at a decision it says, "I have no idea. It's just what I wanted to do."

Brent
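Dennett's recipe, as Brent relays it, is nearly short enough to type in directly. A tongue-in-cheek Python sketch; everything here is invented for illustration (Dennett published no such code).

```python
import random

# Dennett's two-step recipe, per Brent's retelling, as a toy program.
# Step 1: make "intelligent" decisions (a weighted choice stands in for
# real deliberation). Step 2: when asked for reasons, report none.

def decide(options, weights):
    return random.choices(options, weights=weights, k=1)[0]

def explain():
    return "I have no idea. It's just what I wanted to do."

choice = decide(["tea", "coffee"], weights=[0.3, 0.7])
print(choice, "--", explain())
```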
On 5 Aug 2013, at 18:27, meekerdb wrote:
Sure. Daniel Dennett makes that point when challenged to explain how a computer could have free will. He says it's very simple, first you program the computer to make intelligent decisions. And then you program it so that when it's asked how it arrived at a decision it says, "I have no idea. It's just what I wanted to do."
Brent
To me, that captures the difference between a "conscious" decision and an unconscious one: one can describe, at least in some way, the reasons why one came to a conscious decision or performed a conscious action, but not for an unconscious one. For instance, I decided to write this to add to the thread, but I can't tell you how I keep my heart beating... or even how I can control my hand so easily to type this....
On 8/5/2013 1:10 PM, David Makin wrote:
On 5 Aug 2013, at 18:27, meekerdb wrote:
Sure. Daniel Dennett makes that point when challenged to explain how a computer could have free will. He says it's very simple, first you program the computer to make intelligent decisions. And then you program it so that when it's asked how it arrived at a decision it says, "I have no idea. It's just what I wanted to do."
Brent
To me what that gives is the difference between a "conscious" decision and an unconscious one - one can describe at least in some way reasons why one came to a conscious decision or perform a conscious action but not for an unconscious one - for instance I decided to write this to add to the thread,
But if you try to push the explanation for your actions back very far, you come to "I just wanted to." I think the point of Dennett's example is that you *could* program the computer to keep a complete record of its states so that it could answer the question explicitly - but then people wouldn't think it had 'free will'.

Brent

"Der Mensch kann wohl tun, was er will, aber er kann nicht wollen, was er will." ("Man can do what he wills, but he cannot will what he wills.") --- Schopenhauer
but I can't tell you how I keep my heart beating...or even how I can control my hand so easily to type this....
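Brent's contrast, a machine that *could* keep a complete record of its states, is just as short. Another invented sketch, a deterministic counterpart to the toy decider above:

```python
# The contrasting case: the same kind of decision, but with a complete
# trace of the internal states that produced it, so the machine can
# answer "how did you arrive at that?" explicitly. Purely illustrative.

def decide_with_trace(options, score):
    trace = [(option, score(option)) for option in options]  # full record
    best = max(trace, key=lambda pair: pair[1])[0]
    return best, trace

choice, trace = decide_with_trace(
    ["tea", "coffee"],
    score=lambda option: {"tea": 0.3, "coffee": 0.7}[option],
)
print(choice)  # -> coffee
print(trace)   # -> [('tea', 0.3), ('coffee', 0.7)]
```

As Brent says, the more complete the printed trace, the less the machine looks to us like it has 'free will'.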
I never mentioned "free will" - one can talk about that till the end of all things and never prove it either way. I was just indicating that my take on "consciousness" is that *if you know the reason you came to do something* then that is a conscious act/decision, but if you don't then it isn't - free will is not relevant here. "Because I wanted to" may be a baseline, but it doesn't explain why what you did follows (as in, is a suitable/sensible reaction) from what you were reacting to. If the computer is programmed to know its own logic then it is conscious, otherwise not - on that basis, AFAIK, no-one has yet built a conscious computer, but I see no reason why you couldn't. On 5 Aug 2013, at 21:43, meekerdb wrote:
On 8/5/2013 1:10 PM, David Makin wrote:
On 5 Aug 2013, at 18:27, meekerdb wrote:
Sure. Daniel Dennett makes that point when challenged to explain how a computer could have free will. He says it's very simple, first you program the computer to make intelligent decisions. And then you program it so that when it's asked how it arrived at a decision it says, "I have no idea. It's just what I wanted to do."
Brent
To me what that gives is the difference between a "conscious" decision and an unconscious one - one can describe at least in some way reasons why one came to a conscious decision or perform a conscious action but not for an unconscious one - for instance I decided to write this to add to the thread,
But if you try to push the explanation for your actions back very far, you come to "I just wanted to." I think the point of Dennett's example is that you *could* program the computer to keep a complete record of its states so that it could answer the question explicitly - but then people wouldn't think it had 'free will'.
Brent "Der Mensh Kann wohl tun, was er will, aber er kann nicht wollen, was er will." --- Schopenhauer
but I can't tell you how I keep my heart beating...or even how I can control my hand so easily to type this....
On 8/6/2013 8:11 AM, David Makin wrote:
If the computer is programmed to know its own logic then it is conscious, otherwise not
I'd say by that standard, people aren't conscious. Brent
Really - you don't know why you typed your reply? Or did you mean "the general public"? On 6 Aug 2013, at 18:01, meekerdb wrote:
On 8/6/2013 8:11 AM, David Makin wrote:
If the computer is programmed to know its own logic then it is conscious, otherwise not
I'd say by that standard, people aren't conscious.
Brent
I know it came to my mind that one doesn't know one's own logic and I didn't know why that came to my mind, but it seemed like a good idea to express it. Brent On 8/6/2013 10:50 AM, David Makin wrote:
Really - you don't know why you typed your reply ? Or did you mean "the general public" ?
On 6 Aug 2013, at 18:01, meekerdb wrote:
On 8/6/2013 8:11 AM, David Makin wrote:
If the computer is programmed to know its own logic then it is conscious, otherwise not

I'd say by that standard, people aren't conscious.
Brent
Unless you are a philosopher, the difference between "free will" and "determined by causes I can't perceive" is meaningless. Similarly, in games, the difference between randomness and deterministic chaos is unimportant.
Similarly, in games, the difference between randomness and deterministic chaos is unimportant.
Only if you're talking about pseudo-randoms (from a fixed seed or seeds) - use real randoms (e.g. radioactive decay) and the game will change each time it's played, even with the same initial conditions. On 6 Aug 2013, at 18:07, Dave Dyer wrote:
Unless you are a philosopher, the difference between "free will" and "determined by causes I can't perceive" is meaningless.
Similarly, in games, the difference between randomness and deterministic chaos is unimportant.
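Makin's distinction is easy to demonstrate. A small Python sketch: a seeded pseudo-random generator replays the identical "game" on every run, while an entropy-backed source does not (`secrets` draws on operating-system randomness, standing in here for something like radioactive decay).

```python
import random
import secrets

def play(roll, rounds=5):
    """Roll a die `rounds` times; `roll` must return a value in 0..5."""
    return [roll() + 1 for _ in range(rounds)]

seeded = random.Random(42)                  # fixed seed: deterministic
print(play(lambda: seeded.randrange(6)))    # identical list on every run

print(play(lambda: secrets.randbelow(6)))   # differs from run to run
```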
These two things ("free will" and "determined by causes I can't perceive") are not almost the same, but instead are almost opposites. Of course, "determined by causes I can perceive" would also exclude free will. Ultimately, "free will" per se is precisely the opposite of "determinism".

Speaking of determinism, can someone please clarify a confusion I've had about the consequences of quantum mechanics:

1) I used to believe in determinism, but then learned that QM implied some things happened ultimately by chance with no underlying mechanism.

2) Then I believed there must be hidden variables until I heard of Bell's Theorem, which is said to prove the non-existence of hidden variables in QM.

3) Then I heard that Bell's Theorem is valid only if non-locality is excluded.

QUESTION: Does QM exclude the possibility of determinism? Is some kind of non-locality consistent with known physics?

--Dan

On 2013-08-06, at 10:07 AM, Dave Dyer wrote:
Unless you are a philosopher, the difference between "free will" and "determined by causes I can't perceive" is meaningless.
On 8/6/2013 11:04 AM, Dan Asimov wrote:
These two things ("free will" and "determined by causes I can't perceive") are not almost the same, but instead are almost opposites. Of course, "determined by causes I can perceive" would also exclude free will. Ultimately, "free will" per se is precisely the opposite of "determinism".
Speaking of determinism, can someone please clarify a confusion I've had about the consequences of quantum mechanics:
1) I used to believe in determinism, but then learned that QM implied some things happened ultimately by chance with no underlying mechanism.
2) Then I believed there must be hidden variables until I heard of Bell's Theorem, which is said to prove the non-existence of hidden variables in QM.
3) Then I heard that Bell's Theorem is valid only if non-locality is excluded.
QUESTION: Does QM exclude the possibility of determinism? Is some kind of non-locality consistent with known physics?
Sure. Everett's interpretation of QM is a consistent, non-local deterministic theory. So is Bohm's QM, although it has problems with relativity. Brent
On Tue, Aug 6, 2013 at 12:04 PM, Dan Asimov <dasimov@earthlink.net> wrote:
These two things ("free will" and "determined by causes I can't perceive") are not almost the same, but instead are almost opposites. Of course, "determined by causes I can perceive" would also exclude free will. Ultimately, "free will" per se is precisely the opposite of "determinism".
That's a "libertarian" view of free will; there's also the "compatibilist" view: http://en.wikipedia.org/wiki/Compatibilism
Speaking of determinism, can someone please clarify a confusion I've had about the consequences of quantum mechanics:
1) I used to believe in determinism, but then learned that QM implied some things happened ultimately by chance with no underlying mechanism.
The mathematical content of quantum mechanics is entirely deterministic. There are various interpretations of that content; the Copenhagen interpretation asserts that the wave function collapses when it is "observed" (whatever that means), and that collapse is instantaneous and random. Penrose says that the wavefunction collapse is deterministic but uncomputable. The many-worlds view says there's no collapse and quantum randomness is an illusion due to postselection. Bohmians follow de Broglie and say there are real point particles guided by pilot waves; the dynamics of this system are entirely deterministic, but the evolution of the pilot wave is nonlocal.
2) Then I believed there must be hidden variables until I heard of Bell's Theorem, which is said to prove the non-existence of hidden variables in QM.
3) Then I heard that Bell's Theorem is valid only if non-locality is excluded.
Right; Bell himself became a Bohmian.
QUESTION: Does QM exclude the possibility of determinism?
No, only certain interpretations.
Is some kind of non-locality consistent with known physics?
As above, Bohm's pilot wave is nonlocal, but it only works with Schrödinger's equation. It has trouble with relativity because the number of particles you see depends on your acceleration. There can be nonlocal correlations, but information has to travel at the speed of light. You can't compare two correlated systems without bringing them together, and that "bringing together" action is restricted to c or less.
--Dan
On 2013-08-06, at 10:07 AM, Dave Dyer wrote:
Unless you are a philosopher, the difference between "free will" and "determined by causes I can't perceive" is meaningless.
-- Mike Stay - metaweta@gmail.com http://www.cs.auckland.ac.nz/~mike http://reperiendi.wordpress.com
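The Bell's-theorem point can be checked numerically. A small Python sketch of the standard CHSH setup (textbook angles, not anything from the thread): every deterministic local assignment of +/-1 outcomes obeys |S| <= 2, while the quantum singlet-state correlation E(a,b) = -cos(a-b) reaches 2*sqrt(2).

```python
import math
from itertools import product

# Local hidden variables: each side deterministically assigns an outcome
# (+1 or -1) to each of its two measurement settings. The CHSH quantity
# S = E(a,b) - E(a,b') + E(a',b) + E(a',b') is then bounded by 2.
best_classical = max(
    abs(A1*B1 - A1*B2 + A2*B1 + A2*B2)
    for A1, A2, B1, B2 in product([-1, 1], repeat=4)
)
print(best_classical)            # -> 2 (the Bell/CHSH bound)

# Quantum mechanics: singlet-state correlation E(a,b) = -cos(a - b),
# evaluated at the standard optimal angles.
E = lambda x, y: -math.cos(x - y)
a, a2, b, b2 = 0.0, math.pi/2, math.pi/4, 3*math.pi/4
S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(abs(S), 2*math.sqrt(2))    # -> 2.828... = 2*sqrt(2) > 2
```

So the predicted correlations exceed any local-deterministic bound; as Mike's summary has it, what gives is either locality (Bohm) or the uniqueness of measurement outcomes (Everett).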
There's a lot of research that indicates that what you think is a conscious decision is actually an unconscious one with after-the-fact justification by your conscious mind. BTW, this whole discussion seems to be philosophy rather than math or science. Without an objective measure of consciousness it's just fun. --ms On 2013-08-05 16:10, David Makin wrote:
On 5 Aug 2013, at 18:27, meekerdb wrote:
Sure. Daniel Dennett makes that point when challenged to explain how a computer could have free will. He says it's very simple, first you program the computer to make intelligent decisions. And then you program it so that when it's asked how it arrived at a decision it says, "I have no idea. It's just what I wanted to do."
Brent
To me what that gives is the difference between a "conscious" decision and an unconscious one - one can describe at least in some way reasons why one came to a conscious decision or perform a conscious action but not for an unconscious one - for instance I decided to write this to add to the thread, but I can't tell you how I keep my heart beating...or even how I can control my hand so easily to type this....
On 8/4/2013 4:49 PM, Thane Plambeck wrote:
here is a (presumably hopeless) attempt to define consciousness.
it's a cruel world that wants to eat you. consciousness seems to me to be an adaptive darwinian illusion that organizes your ultimately biological responses to threats in full consonance with the laws of physics, and favors the transmission of your genes to future generations. although some debate it, it seems clear to me that animals have something like it, and even insects, or bacteria, or even a finite automaton transducer that keeps a simple state somehow in memory and responds to external stimuli in consonance with physics. and just as they might have an ever more primitive consciousness than ours, it seems just as likely that much "higher" forms could exist, also.
That strikes me as a definition of intelligence. I think intelligence and consciousness are related but not identical. Human-like intelligence, symbolic reasoning, is related to language, which in turn is related to being a social animal and being able to manipulate objects. So, for example, a solitary predator like a tiger might gain a lot by intelligence, but not human-like intelligence.

Brent
participants (9)

- Adam P. Goucher
- Dan Asimov
- Dave Dyer
- David Makin
- James Propp
- meekerdb
- Mike Speciner
- Mike Stay
- Thane Plambeck