Ask anybody with Tay-Sachs disease or haemophilia. As for self-awareness and self-understanding and self-control: all are key to allowing Lamarckian evolution, which would provide a huge advantage.
But that's different from knowing and controlling what goes on inside of you; that's changing the way you're made, including your germ cells. And even that's not likely to be possible on the fly.
--exactly. For humans. And life as it has been so far. But for future robotoid life, this is no longer necessarily impossible.
Why do humans have a subconscious that controls things like peristalsis and heart rate, while you cannot control them consciously (at least I cannot)? The answer is that such control would be too dangerous. The human thinking mind is buggy. If you had that kind of control and awareness, a bug could make you commit suicide by accident.
You can hold your breath - but it's not a popular method of suicide.
--actually, no. If you try, and manage to reach the point of unconsciousness, your subconscious brain would take over to save the situation. One could imagine limited conscious control within limits imposed by a subconscious like that; but so far no thinking being has ever designed a life form, so this has not been a possible design for life. For future life it could be an option.
Too risky. And even your subconscious cannot control your DNA and immune system; that is for technical reasons, but even if your brain could, it would be very risky. It would, however, have great advantages too, such as being able to cure cancer.
While you were using all your computing power to analyze DNA strands and T-cell receptors you'd get run over by a car.
--again. For past life, this has been the case. For future life with perhaps far greater computing power, this need not be a limitation.
Any future robot Lamarckian life might have to involve self-proving code to avoid bugs, or extensive self-testing with a backward versioning system every time it makes a modification, or something along those lines, to enable it to safely perform these risky moves. In general, program understanding is an undecidable problem no matter who does it, but programs that come with proofs can be understood.
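Concretely, a minimal sketch of what such a test-and-rollback step might look like (in Python; the class and names here are hypothetical illustrations, not a claim about how a real self-modifying robot would be built): a modification is committed only if the candidate version still passes the agent's own test suite, and every committed version is kept so it can be rolled back.

import copy

class SelfModifyingAgent:
    """Toy model of 'self-test + backward versioning' self-modification."""

    def __init__(self, code, tests):
        self.code = code        # current program, here a dict of named functions
        self.tests = tests      # self-tests: callables taking a code dict, returning bool
        self.history = []       # stack of previously committed versions, for rollback

    def attempt_modification(self, mutate):
        """Apply a mutation; commit it only if every self-test still passes."""
        candidate = mutate(copy.deepcopy(self.code))
        if all(test(candidate) for test in self.tests):
            self.history.append(self.code)   # keep the old version for rollback
            self.code = candidate
            return True
        return False                         # reject: the old version stays live

    def rollback(self):
        """Undo the most recently committed modification."""
        if self.history:
            self.code = self.history.pop()

if __name__ == "__main__":
    agent = SelfModifyingAgent(
        code={"double": lambda x: 2 * x},
        tests=[lambda c: c["double"](3) == 6],
    )
    # A mutation that breaks the invariant is rejected automatically:
    broken = lambda c: {**c, "double": (lambda x: 2 * x + 1)}
    assert agent.attempt_modification(broken) is False
    assert agent.code["double"](3) == 6      # the old, working version survives

Of course, the tests only exercise the inputs they happen to contain, which is exactly the limitation that the proof-carrying approach is meant to address.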
You can prove what a program will do with given inputs, but a human-like program will need to deal with unforeseeable inputs and errors.
--programs can come with proofs that prove theorems valid about ANY input.
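As a toy illustration of that (in Lean 4; the example is mine, not anything specific to this thread), the program and a machine-checked theorem about every possible input can be shipped together:

def double (n : Nat) : Nat := n + n

-- The theorem quantifies over ALL inputs n, not just the ones we happened to test.
theorem double_is_even : ∀ n : Nat, ∃ k : Nat, double n = 2 * k :=
  fun n => ⟨n, by unfold double; omega⟩

Checking such a proof is a mechanical, decidable step even though deciding arbitrary properties of arbitrary programs is not; the undecidability is sidestepped by requiring the proof to be supplied along with the code.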
Of course a TM will read its tape; that's how it computes. But you must mean reading a record of computation in order to learn from it (what humans call reflection). Are you assuming infinite computational capacity? ...So it costs nothing to read a transcript of your past computations. Brent
--I'm not following whatever you are trying to say here.
On 5/18/2015 12:01 PM, Warren D Smith wrote:
Ask anybody with Tay-Sachs disease or haemophilia. As for self-awareness and self-understanding and self-control: all are key to allowing Lamarckian evolution, which would provide a huge advantage. But that's different from knowing and controlling what goes on inside of you; that's changing the way you're made, including your germ cells. And even that's not likely to be possible on the fly. --exactly. For humans. And life as it has been so far. But for future robotoid life, this is no longer necessarily impossible.
Why do humans have a subconscious that controls things like peristalsis and heart rate, while you cannot control them consciously (at least I cannot)? The answer is that such control would be too dangerous. The human thinking mind is buggy. If you had that kind of control and awareness, a bug could make you commit suicide by accident. You can hold your breath - but it's not a popular method of suicide. --actually, no. If you try, and manage to reach the point of unconsciousness, your subconscious brain would take over to save the situation. And any designer of robots would do the same. Consciousness is only a small part of human thought, and whether it is necessary to intelligence or a spandrel is controversial. I think it's necessary to the kind of intelligence that has evolved in animals - but whether it's necessary to all forms of intelligence, including designed ones, is an open question.
One could imagine limited conscious control within limits imposed by a subconscious like that; but so far no thinking being has ever designed a life form, so this has not been a possible design for life. For future life it could be an option.
Robotic "life" would presumably be able to shut down for indefinitely long periods; thus realizing the "suspended animation" solution to space travel.
Too risky. And even your subconscious cannot control your DNA and immune system; that is for technical reasons, but even if your brain could, it would be very risky. It would, however, have great advantages too, such as being able to cure cancer. While you were using all your computing power to analyze DNA strands and T-cell receptors you'd get run over by a car. --again. For past life, this has been the case. For future life with perhaps far greater computing power, this need not be a limitation.
But my point is it's a lot easier to avoid cancer (by being manufactured instead of grown, for example) than to analyze and correct the DNA of millions of cells. Computing isn't the solution to everything.
Any future robot Lamarckian life might have to involve self-proving code to avoid bugs, or extensive self-testing with a backward versioning system every time it makes a modification, or something along those lines, to enable it to safely perform these risky moves. In general, program understanding is an undecidable problem no matter who does it, but programs that come with proofs can be understood.
You can prove what a program will do with given inputs, but a human-like program will need to deal with unforeseeable inputs and errors. --programs can come with proofs that prove theorems valid about ANY input.
But if the program is rich enough to be intelligent, it can't prove its own consistency. And here Penrose's move to expand the intelligence to include the whole community of mathematicians doesn't solve the problem; it just moves Goedel's incompleteness up to the whole community. Brent
Of course a TM will read its tape; that's how it computes. But you must mean reading a record of computation in order to learn from it (what humans call reflection). Are you assuming infinite computational capacity? ...So it costs nothing to read a transcript of your past computations. Brent --I'm not following whatever you are trying to say here.