Some people think humans are self-aware. I dispute that. You are unaware of, unable to control, and do not understand most of what goes on inside you or your brain.
Of course there are good reasons for that, which will apply to AI life forms as well. First, there's no advantage to it, which is one reason natural selection didn't arrive at it.
--wrong. The reason natural selection did not provide Lamarckian evolution is that the map from genotype to phenotype is so complex as to be essentially uninvertible. So it simply could not be done. It would be hugely advantageous if we could do Lamarckian evolution ("intentional self-improvement"); ask anybody with Tay-Sachs disease or haemophilia. As for self-awareness, self-understanding, and self-control: all are key to enabling Lamarckian evolution, which would provide a huge advantage.

Why do humans have a subconscious that controls things like peristalsis and heart rate, while you cannot control them consciously (at least I cannot)? The answer is that such control would be too dangerous. The human thinking mind is buggy. You'd hit a bug and commit suicide by accident if you had such control and awareness. Too risky. And even your subconscious cannot control your DNA and immune system, which is for technical reasons; but even if your brain could, that would be very risky. It would, however, have great advantages too, such as curing cancer.

Any future robot Lamarckian life might have to involve self-proving code to avoid bugs, or extensive self-testing with a backward versioning system every time it makes a modification, or something similar, to enable it to safely perform these risky moves. In general, program understanding is an undecidable problem no matter who does it, but programs that come with proofs can be understood.
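The "self-testing with a backward versioning system" idea can be sketched concretely: each self-modification is applied tentatively, validated against a self-test suite, and rolled back to the last known-good version on failure. This is a minimal illustration; the class and method names here are invented for the sketch, not an existing API.

```python
class SelfModifyingAgent:
    """Toy agent that only adopts self-modifications that pass its self-tests."""

    def __init__(self, behavior, self_tests):
        self.behavior = behavior      # current "program": a callable
        self.self_tests = self_tests  # list of (input, expected) checks
        self.history = [behavior]     # the backward versioning system

    def passes_self_tests(self, candidate):
        return all(candidate(x) == want for x, want in self.self_tests)

    def try_modification(self, candidate):
        """Adopt candidate only if it survives the self-test suite."""
        if self.passes_self_tests(candidate):
            self.history.append(candidate)
            self.behavior = candidate
            return True
        return False                  # keep the last known-good version

    def rollback(self):
        """Revert to the previous version, if any."""
        if len(self.history) > 1:
            self.history.pop()
            self.behavior = self.history[-1]


# Example: an agent that doubles its input accepts an equivalent rewrite
# but rejects a buggy one.
agent = SelfModifyingAgent(lambda x: x + x, [(1, 2), (3, 6)])
print(agent.try_modification(lambda x: 2 * x))  # True: passes all self-tests
print(agent.try_modification(lambda x: x * x))  # False: fails on (1, 2)
```

Real self-modifying systems would of course need self-tests that cover the behaviors that matter, which runs into exactly the undecidability problem mentioned above.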
There are some disadvantages: the additional memory and processing required would slow down decisions. Second, although the Lucas-Penrose argument doesn't show what they thought, I think it does show that you can't know what program you are (though other people can). --huh? It seems to me that if I were a Turing machine, I could read any part of my program tape. And having this capability would not slow down my decisions in the slightest.
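The "read any part of my program tape" point can be illustrated with a toy interpreter whose program is ordinary data, so one of its own instructions can inspect the program text. The instruction names are invented for this sketch; note that merely having the INSPECT capability adds no cost to runs that never use it.

```python
def run(program):
    """Execute a list of (opcode, argument) pairs; return emitted output."""
    acc, out, pc = 0, [], 0
    while pc < len(program):
        op, arg = program[pc]
        if op == "ADD":
            acc += arg
        elif op == "EMIT":             # output the accumulator
            out.append(acc)
        elif op == "INSPECT":          # read own program tape at index arg
            out.append(program[arg])
        pc += 1
    return out


# The program computes 2 + 3, then observes its own first instruction.
prog = [("ADD", 2), ("ADD", 3), ("EMIT", None), ("INSPECT", 0)]
print(run(prog))  # [5, ('ADD', 2)]
```

The self-inspection step is just another instruction; it does not slow down the arithmetic steps that precede it.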
On 5/17/2015 7:45 PM, Warren D Smith wrote:
> Some people think humans are self-aware. I dispute that. You are unaware of, unable to control, and do not understand most of what goes on inside you or your brain. Of course there are good reasons for that, which will apply to AI life forms as well. First, there's no advantage to it, which is one reason natural selection didn't arrive at it. --wrong. The reason natural selection did not provide Lamarckian evolution is that the map from genotype to phenotype is so complex as to be essentially uninvertible. So it simply could not be done. It would be hugely advantageous if we could do Lamarckian evolution ("intentional self-improvement"); ask anybody with Tay-Sachs disease or haemophilia. As for self-awareness, self-understanding, and self-control: all are key to enabling Lamarckian evolution, which would provide a huge advantage.
But that's different from knowing and controlling what goes on inside of you; that's changing the way you're made, including your germ cells. And even that's not likely to be possible on the fly.
> Why do humans have a subconscious that controls things like peristalsis and heart rate, while you cannot control them consciously (at least I cannot)? The answer is that such control would be too dangerous. The human thinking mind is buggy. You'd hit a bug and commit suicide by accident if you had such control and awareness.
You can hold your breath - but it's not a popular method of suicide.
> Too risky. And even your subconscious cannot control your DNA and immune system, which is for technical reasons; but even if your brain could, that would be very risky. It would, however, have great advantages too, such as curing cancer.
While you were using all your computing power to analyze DNA strands and T-cell receptors, you'd get run over by a car.
> Any future robot Lamarckian life might have to involve self-proving code to avoid bugs, or extensive self-testing with a backward versioning system every time it makes a modification, or something similar, to enable it to safely perform these risky moves. In general, program understanding is an undecidable problem no matter who does it, but programs that come with proofs can be understood.
You can prove what a program will do with given inputs, but a human-like program will need to deal with unforeseeable inputs and errors.
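The gap between "verified on given inputs" and "understood on all inputs" is exactly the halting problem. Turing's diagonal argument can be sketched in a few lines: assume some claimed decider `halts(f, x)` and build a program that defeats it. The function names here are illustrative only.

```python
def make_paradox(halts):
    """Given a claimed halting decider, build the program that defeats it."""
    def paradox(f):
        if halts(f, f):        # if the checker says f(f) halts...
            while True:        # ...loop forever
                pass
        return "halted"        # otherwise halt immediately
    return paradox


# Any concrete candidate decider is wrong on paradox(paradox). For example,
# a checker that claims nothing ever halts:
def fake_halts(f, x):
    return False

paradox = make_paradox(fake_halts)
print(paradox(paradox))  # 'halted' -- contradicting fake_halts's verdict
```

Whatever `halts` answers about `paradox(paradox)`, the program does the opposite, so no total, always-correct decider can exist; only behavior on specific tested inputs, or properties established by an accompanying proof, can be relied on.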
> There are some disadvantages: the additional memory and processing required would slow down decisions. Second, although the Lucas-Penrose argument doesn't show what they thought, I think it does show that you can't know what program you are (though other people can). --huh? It seems to me that if I were a Turing machine, I could read any part of my program tape. And having this capability would not slow down my decisions in the slightest.
Of course a TM will read its tape; that's how it computes. But you must mean reading a record of computation in order to learn from it (what humans call reflection). Are you assuming infinite computational capacity, so that it costs nothing to read a transcript of your past computations?

Brent
participants (2): meekerdb, Warren D Smith