Here is a (presumably hopeless) attempt to define consciousness. It's a cruel world that wants to eat you. Consciousness seems to me to be an adaptive Darwinian illusion that organizes your ultimately biological responses to threats, in full consonance with the laws of physics, and favors the transmission of your genes to future generations. Although some debate it, it seems clear to me that animals have something like it, and so do insects, bacteria, or even a finite automaton transducer that keeps a simple state in memory and responds to external stimuli in consonance with physics. And just as those might have a consciousness far more primitive than ours, it seems just as likely that much "higher" forms could exist as well.

On Sunday, August 4, 2013, meekerdb wrote:
On 8/4/2013 6:11 AM, Adam P. Goucher wrote:
It is unlike anything whatsoever that is covered by physics. As far as
physics is concerned (and I am not blaming physics for this), the world could just be a totally insensate machine that follows physical laws but feels nothing.
Could it? Is a philosopher's zombie possible? It seems to me unlikely that one could construct, grow, or otherwise produce something that looks and acts and is physically like a human being but has no subjective experience.
Making something look like a human is not difficult. Nor is making something `physically like' a human being. The only one of those things that is actually hard to replicate is human behaviour.
We can simulate neural networks on computers, and they're becoming gradually more intelligent as time progresses. For instance, I think they've been able to design electronic circuits and produce art, amongst other things. Together with natural language processing and database access (such as Wolfram Alpha), and knowledge acquisition (such as IBM Watson), it wouldn't surprise me if a machine passed the Turing test within the next decade or so.
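To make "we can simulate neural networks on computers" concrete, here is a minimal toy sketch (my own illustration, not from the post; the layer sizes, learning rate, random seed, and iteration count are arbitrary choices) of a two-layer network trained by plain gradient descent to compute XOR:

    # Toy sketch: a two-layer sigmoid network trained by gradient descent
    # to learn XOR.  All sizes and constants are arbitrary illustrative choices.
    import numpy as np

    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)   # hidden-layer weights
    W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)   # output-layer weights
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

    for _ in range(20000):
        h = sigmoid(X @ W1 + b1)               # forward pass, hidden layer
        out = sigmoid(h @ W2 + b2)             # forward pass, output layer
        d_out = (out - y) * out * (1 - out)    # backpropagated output error
        d_h = (d_out @ W2.T) * h * (1 - h)     # backpropagated hidden error
        W2 -= 0.5 * (h.T @ d_out); b2 -= 0.5 * d_out.sum(axis=0)
        W1 -= 0.5 * (X.T @ d_h);   b1 -= 0.5 * d_h.sum(axis=0)

    print(out.round(3).ravel())  # should approach [0, 1, 1, 0]

Trained long enough, the printed outputs approach [0, 1, 1, 0]; convergence from any particular random start isn't guaranteed, but the point is only that the simulation itself is a few lines of arithmetic.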
So you think we could eventually build a robot that exhibited behavior we would consider human-like and indicative of consciousness, but that the robot wouldn't be conscious. What if we replaced the neurons in your brain, one by one, with artificial components that are functionally identical in their input-output behavior, so that your behavior was unchanged? Do you think you would gradually lose consciousness?
Also, computers are provably not conscious, since they can be emulated by Turing machines, which are obviously not conscious*.
* If they were conscious, then _everything_ of sufficient complexity would have the capability of consciousness. And as Jeff Xia showed us, the n-body problem enables particles to be projected to infinity in finite time, one corollary of which is Turing-completeness, so `sufficient complexity' means `a few elementary particles'.
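To make the "emulated by Turing machines" step concrete, here is a minimal Turing-machine interpreter in Python (my own toy sketch, not from the post; the example machine, a unary incrementer, is invented purely for illustration):

    # Minimal Turing-machine interpreter.  A "program" maps
    # (state, symbol) -> (new_symbol, head_move, new_state); the machine
    # runs until it reaches the state "halt".
    def run(program, tape, state="start", blank="_"):
        cells = dict(enumerate(tape))   # sparse tape, blank elsewhere
        head = 0
        while state != "halt":
            symbol = cells.get(head, blank)
            new_symbol, move, state = program[(state, symbol)]
            cells[head] = new_symbol
            head += {"R": 1, "L": -1}[move]
        return "".join(cells[i] for i in sorted(cells))

    # Example machine: a unary incrementer (scan right past the 1s,
    # write one more 1, halt).  Purely illustrative.
    INC = {
        ("start", "1"): ("1", "R", "start"),
        ("start", "_"): ("1", "R", "halt"),
    }

    print(run(INC, "111"))  # -> "1111"

Any ordinary program can, in principle, be compiled down to such a transition table; that mechanical fact is all the "emulated by Turing machines" claim above needs.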
Whenever anyone writes "obviously", my B.S. meter quivers. I don't think either of your last two paragraphs is right. First, whether a Turing machine is conscious would depend on what program it is running. Does it have enough self-reflection to prove Gödel's incompleteness theorem? Does it create a narrative memory? I'm not sure exactly what program instantiates consciousness, but I very much doubt it's just a matter of "complexity", however that's measured.
Second, "sufficient" and "capable" don't mean "has". Human brains are complex, but so are a lot of things. Brains evolved to be engines of prediction, n-body problems didn't. And in any case there are kinds and levels of consciousness. I think human-like consciousness requires language. But my dogs are conscious in a different way without language and are the koi and crayfish in my pond.
Brent
_______________________________________________
math-fun mailing list
math-fun@mailman.xmission.com
http://mailman.xmission.com/cgi-bin/mailman/listinfo/math-fun
--
Thane Plambeck
tplambeck@gmail.com
http://counterwave.com/