Ask anybody with Tay-Sachs disease or haemophilia. As for self-awareness, self-understanding, and self-control: all are key to allowing Lamarckian evolution, which would provide a huge advantage.
But that's different from knowing and controlling what goes on inside of you; that's changing the way you're made, including your germ cells. And even that's not likely to be possible on the fly.
--exactly. For humans, and for life as it has been so far. But for future robotoid life, this is no longer necessarily impossible.
Why do humans have a subconscious that controls things like peristalsis and heart rate, while you cannot control them consciously (at least I cannot)? The answer is that such control would be too dangerous. The human thinking mind is buggy. If you had that kind of control and awareness, a bug could make you commit suicide by accident.
You can hold your breath - but it's not a popular method of suicide.
--actually, no. If you try, and manage to reach the point of unconsciousness, your subconscious brain takes over to save the situation. One could imagine limited conscious control within limits imposed by a subconscious like that; but so far there has never been a thinking being designing life, so this has not been a possible design. For future life it could be an option.
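To picture that division of labour concretely, here is a minimal Python sketch (the function, threshold, and variable names are all hypothetical, not physiology): a conscious request is honoured only while a subconscious guard judges the state safe, and is overridden otherwise.

def breath_control(conscious_hold_request, blood_oxygen, safe_oxygen=0.85):
    # Subconscious veto: if oxygen drops too low, override whatever
    # the thinking mind wants and breathe.
    if blood_oxygen < safe_oxygen:
        return "breathe"
    # Within the safe envelope, the conscious request is obeyed.
    return "hold" if conscious_hold_request else "breathe"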
Too risky. And even your subconscious cannot control your DNA and immune system; that is for technical reasons, but even if your brain could, it would be very risky. It would, however, have great advantages too, such as curing cancer.
While you were using all your computing power to analyze DNA strands and T-cell receptors, you'd get run over by a car.
--again. For past life, this has been the case. For future life with perhaps far greater computing power, this need not be a limitation.
Any future robot Lamarckian life might have to involve self-proving code, to avoid bugs, or extensive self-testing with a backward versioning system every time it makes a modification, or something like that, to enable it to perform these risky moves safely. In general, program understanding is an undecidable problem no matter who does it, but programs that come with proofs can be understood.
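As a rough illustration of the test-and-rollback idea (everything here is a made-up toy in Python, not a proposal for how such a being would actually be built): each proposed self-modification is run against a suite of self-tests, committed to a version history only if every test passes, and otherwise discarded, with older versions kept for rollback.

import copy

class SelfModifyingAgent:
    """Toy sketch: self-modification allowed only behind self-tests and
    a backward versioning history.  The 'program' is a plain dict of
    parameters; names and structure are illustrative only."""

    def __init__(self, program, test_suite):
        self.program = program                      # the current "self"
        self.test_suite = test_suite                # invariants the self must keep
        self.history = [copy.deepcopy(program)]     # backward versioning

    def try_modification(self, modify):
        # Build a candidate version and test it before committing.
        candidate = modify(copy.deepcopy(self.program))
        if all(test(candidate) for test in self.test_suite):
            self.history.append(copy.deepcopy(candidate))
            self.program = candidate                # commit the risky move
            return True
        return False                                # reject; keep the tested version

    def rollback(self, steps=1):
        # Restore an earlier version if a bug slips past the tests.
        self.history = self.history[:-steps] or self.history[:1]
        self.program = copy.deepcopy(self.history[-1])

agent = SelfModifyingAgent({"heart_rate": 60},
                           [lambda p: 30 <= p["heart_rate"] <= 200])
agent.try_modification(lambda p: {**p, "heart_rate": 45})   # passes the tests, committed
agent.try_modification(lambda p: {**p, "heart_rate": 0})    # fails the tests, rejected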
You can prove what a program will do with given inputs, but a human-like program will need to deal with unforeseeable inputs and errors.
--programs can come with proofs of theorems that hold for ANY input.
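A machine-checked proof establishes a property for every possible input, which is exactly what a finite test suite cannot do. A toy example in Lean 4 (the definition and theorem are mine and purely illustrative, and I'm assuming a recent toolchain where the built-in omega tactic is available):

-- A tiny "program" and a theorem about it, quantified over ANY inputs.
def double (n : Nat) : Nat := n + n

-- The proof is checked mechanically and covers every m and n,
-- not just the cases a test suite happens to run.
theorem double_add (m n : Nat) : double (m + n) = double m + double n := by
  unfold double
  omega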
Of course a TM will read its tape; that's how it computes. But you must mean reading a record of the computation in order to learn from it (what humans call reflection). Are you assuming infinite computational capacity?... so that it costs nothing to read a transcript of your past computations. Brent
--I'm not following whatever you are trying to say here.