It would be nice to have computers that exposed consistent interfaces (as usual), but whose underlying implementations were not so fragile. Right now, for example, if one computer running a standard software implementation of a network interface is hacked, every other computer running that often-identical software becomes vulnerable. But if there were some way to ensure that implementations always varied in some substantive way (I have no proposal for how that would work), maybe only a few would get hacked (and "die"), while the others lived on with immunity, or evolved somehow and passed their immunity on to other devices (again, I have no idea how that might work). Hacking is easy because we build machines that are clones at both the interface and the implementation level; few people bother to hack weird machines running odd software. So: try to make machines present the same "face" where appropriate, but with varying "minds" behind them.
On Sat, May 23, 2015 at 10:32 AM, Mike Stay <metaweta@gmail.com> wrote:
Carl Hewitt (inventor of the actor model of computation) is working on this problem, under the name "inconsistency robustness".
On Sat, May 23, 2015 at 10:21 AM, Henry Baker <hbaker1@pipeline.com> wrote:
I am a huge fan of Nassim Nicholas Taleb, who invented the concept of "antifragility":
https://en.wikipedia.org/wiki/Antifragility
"Antifragility is a concept developed by Professor Nassim Nicholas Taleb, and a term he coined in his book, Antifragile. Antifragility refers to systems that increase in capability, resilience, or robustness as a result of mistakes, faults, attacks, or failures. As Taleb explains in his book, antifragility is fundamentally different from the concepts of resiliency (i.e. the ability to recover from failure) and robustness (that is, the ability to resist failure)."
"Simply, antifragility is defined as a convex response to a stressor or source of harm (for some range of variation), leading to a positive sensitivity to increase in volatility (or variability, stress, dispersion of outcomes, or uncertainty, what is grouped under the designation "disorder cluster"). Likewise fragility is defined as a concave sensitivity to stressors, leading to a negative sensitivity to increase in volatility. The relation between fragility, convexity, and sensitivity to disorder is mathematical, obtained by theorem, not derived from empirical data mining or some historical narrative. It is a priori." ----
Taleb gives some examples of fragility: a porcelain teacup, or a human bone; you can walk around all day without damaging your bones, but if you jump down 10 meters onto a hard surface you will certainly break some bones, and possibly die.
Taleb gives some examples of robustness/resilience: something with a more linear response: small insults produce small damage; large insults produce large, but not spectacular damage.
Taleb gives some examples of antifragility: the larger the deviation, the larger the gain -- e.g., the human exercise training effect, where the stressor stimulates a positive outcome. Also, venture capital, which uses optionality & limited liability to create huge gains from a large number of small random bets.
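The venture-capital point -- optionality plus limited liability turning volatility into gain -- can be sketched with a toy Monte Carlo (my own illustration, not Taleb's; the normal model and numbers are assumptions):

```python
import random

random.seed(0)  # deterministic for the sake of the example

def expected_option_payoff(sigma, n=200_000):
    """Mean of max(0, X) for X ~ Normal(0, sigma): a convex,
    option-like payoff -- downside capped at 0, upside unbounded."""
    return sum(max(0.0, random.gauss(0.0, sigma)) for _ in range(n)) / n

# As volatility sigma grows, the average payoff of the convex bet grows too
# (analytically it is sigma / sqrt(2*pi), about 0.399 * sigma).
payoffs = [expected_option_payoff(s) for s in (1.0, 2.0, 4.0)]
print(payoffs)  # roughly [0.40, 0.80, 1.60]
```

A concave (fragile) payoff like -max(0, X) would show exactly the opposite: more volatility, more expected harm.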
Also Maxwell's Demon:
http://en.wikipedia.org/wiki/Maxwell%27s_demon ----
Mathematical logic and computer programs would seem to be the ultimate "fragile" systems -- a single contradiction destroys the entire system: you can then prove anything, as well as its negation. Every bit error in a computer program makes the program worse; a few such errors essentially guarantee the program's failure.
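The "prove anything from a contradiction" phenomenon is the classical principle of explosion (ex falso quodlibet); in Lean 4 it is a one-liner:

```lean
-- From P and ¬P together, any proposition Q whatsoever follows.
theorem explosion (P Q : Prop) (hp : P) (hnp : ¬P) : Q :=
  absurd hp hnp
```

One contradiction anywhere in the axioms, and every Q -- and every ¬Q -- becomes a theorem; this is the fragility referred to above.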
Indeed, a number of AI researchers in the 1960's, including Marvin Minsky, searched for a type of logic that wouldn't be so fragile.
However, no such logic was found; eventually AI gave up on logic entirely, and "modern" AI doesn't care for it at all. ----
But fragile classical logic lives on in the form of computer programs, which can fail spectacularly from a single bug. This problem has become increasingly important for computer security, where a single flaw can open up the entire computer system, and indeed an entire enterprise, to catastrophic compromise.
Dan Geer has been worried about these issues, but not from a mathematical logic perspective:
https://en.wikipedia.org/wiki/Dan_Geer
---- Q: What would an "antifragile" computer look like?
Such a computer would not only _tolerate_ HW&SW failure, it would _thrive_ on it and run even better because of it.
In this sense, such a computer would not only embrace randomness, but feed off it.
Most cryptographic protocols require a source of randomness -- e.g., Linux's /dev/random -- in order to operate, and their security depends on that randomness being of the highest quality.
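For instance, Python's `secrets` module draws from the operating system's CSPRNG (on Linux, the same kernel entropy pool behind /dev/urandom) -- a small sketch of "consuming" randomness for security:

```python
import secrets

# Draw 32 bytes from the OS cryptographic random number generator;
# suitable for keys, nonces, tokens (unlike the `random` module).
key = secrets.token_bytes(32)    # e.g., a symmetric session key
nonce = secrets.token_hex(16)    # 16 random bytes, as 32 hex characters

print(len(key), len(nonce))  # 32 32
```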
So perhaps cryptographic protocols can provide us with some inspiration about how randomness & failure can actually become a blessing instead of a curse.
I obviously don't know the answer, but throw this question out for discussion.
_______________________________________________ math-fun mailing list math-fun@mailman.xmission.com https://mailman.xmission.com/cgi-bin/mailman/listinfo/math-fun
-- Mike Stay - metaweta@gmail.com http://www.cs.auckland.ac.nz/~mike http://reperiendi.wordpress.com
-- Thane Plambeck tplambeck@gmail.com http://counterwave.com/