[math-fun] "Antifragility" and mathematical logic
I am a huge fan of Nassim Nicholas Taleb, who invented the concept of "antifragility":

https://en.wikipedia.org/wiki/Antifragility

"Antifragility is a concept developed by Professor Nassim Nicholas Taleb, and a term he coined in his book, Antifragile. Antifragility refers to systems that increase in capability, resilience, or robustness as a result of mistakes, faults, attacks, or failures. As Taleb explains in his book, antifragility is fundamentally different from the concepts of resiliency (i.e., the ability to recover from failure) and robustness (that is, the ability to resist failure)."

"Simply, antifragility is defined as a convex response to a stressor or source of harm (for some range of variation), leading to a positive sensitivity to increase in volatility (or variability, stress, dispersion of outcomes, or uncertainty, what is grouped under the designation 'disorder cluster'). Likewise fragility is defined as a concave sensitivity to stressors, leading to a negative sensitivity to increase in volatility. The relation between fragility, convexity, and sensitivity to disorder is mathematical, obtained by theorem, not derived from empirical data mining or some historical narrative. It is a priori."

----

Taleb gives some examples of fragility: a porcelain teacup or a human bone; you can walk around all day without damaging your bones, but if you jump down 10 meters onto a hard surface you will certainly break some bones, and possibly die.

Taleb gives some examples of robustness/resilience: something with a more linear response: small insults produce small damage; large insults produce large, but not spectacular, damage.

Taleb gives some examples of antifragility: the larger the deviation, the larger the gain -- e.g., the human exercise training effect, where the stressor stimulates a positive outcome. Also venture capital, which uses optionality & limited liability to create huge gains from a large number of small random bets.

Also Maxwell's Demon:

http://en.wikipedia.org/wiki/Maxwell%27s_demon

----

Mathematical logic and computer programs would seem to be the ultimate "fragile" systems -- a single contradiction destroys the entire system: you can then prove any statement as well as its negation. Every bit error in a computer program makes the program worse; only a few such errors can guarantee the program's failure with essentially 100% certainty.

Indeed, a number of AI researchers in the 1960s, including Marvin Minsky, searched for a type of logic that wouldn't be so fragile. However, no such logic was found; eventually AI gave up on logic entirely, and "modern" AI doesn't care for it at all.

----

But fragile classical logic lives on in the form of computer programs, which can fail spectacularly from a single bug. This problem has become increasingly important for computer security, where a single flaw can open up the entire computer system, and indeed an entire enterprise, to catastrophic compromise.

Dan Geer has been worried about these issues, but not from a mathematical logic perspective:

https://en.wikipedia.org/wiki/Dan_Geer
http://geer.tinho.net/pubs

----

Q: What would an "antifragile" computer look like?

Such a computer would not only _tolerate_ hardware & software failure, it would _thrive_ on it and run even better because of it. In this sense, such a computer would not only embrace randomness, but feed off it.

Most cryptographic protocols require a source of randomness -- e.g., Linux's /dev/random -- in order to operate, and these protocols require randomness of the highest quality in order to achieve security. So perhaps cryptographic protocols can provide us with some inspiration about how randomness & failure can actually become a blessing instead of a curse.

I obviously don't know the answer, but throw this question out for discussion.
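[A numerical aside on the convexity definition quoted above. This is a toy sketch with payoffs of my own choosing, not anything from Taleb's book: by Jensen's inequality, a convex payoff's expected value *rises* when you increase the volatility of its input (antifragile), while a concave payoff's expected value falls (fragile).]

```python
# Toy Monte Carlo check of "convex response gains from volatility".
# The payoffs x**2 and -x**2 are my own illustrative choices.
import random
import statistics

random.seed(0)

def expected_payoff(f, sigma, n=100_000):
    """Monte Carlo estimate of E[f(X)] for X ~ Normal(0, sigma)."""
    return statistics.fmean(f(random.gauss(0.0, sigma)) for _ in range(n))

convex = lambda x: x * x         # antifragile: gains from dispersion
concave = lambda x: -(x * x)     # fragile: harmed by dispersion

# Doubling the volatility raises the convex payoff's expectation...
assert expected_payoff(convex, sigma=2.0) > expected_payoff(convex, sigma=1.0)
# ...and lowers the concave payoff's expectation.
assert expected_payoff(concave, sigma=2.0) < expected_payoff(concave, sigma=1.0)
```

The asymmetry comes entirely from curvature, not from the mean of the noise (which is zero in both cases) -- which is the point of the "obtained by theorem, not from data mining" remark.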
Carl Hewitt (who invented the actor model of computation) is working on this problem, under the name "inconsistency robustness".

On Sat, May 23, 2015 at 10:21 AM, Henry Baker <hbaker1@pipeline.com> wrote:
_______________________________________________
math-fun mailing list
math-fun@mailman.xmission.com
https://mailman.xmission.com/cgi-bin/mailman/listinfo/math-fun
--
Mike Stay - metaweta@gmail.com
http://www.cs.auckland.ac.nz/~mike
http://reperiendi.wordpress.com
It would be nice to have computers that somehow exposed consistent interfaces (as usual), but where the underlying implementations were not so fragile. Right now, for example, if one computer running a standard software implementation of a network interface is hacked, all other computers running that often-identical software become vulnerable. But if there were some way to ensure that implementations always varied in some substantive way (I have no proposal for how it would work), maybe only a few would get hacked (and "die"), while the others lived on with immunity, or evolved somehow, passing on their immunity to other devices. (Again, I have no idea how that might work.)

Hacking is easy because we make machines that are clones on both the interface and the implementation level. Few people bother to hack weird machines running odd software. So try to make machines present the same "face" where appropriate, but with varying "minds" behind them.

On Sat, May 23, 2015 at 10:32 AM, Mike Stay <metaweta@gmail.com> wrote:
--
Thane Plambeck
tplambeck@gmail.com
http://counterwave.com/
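[The "same face, varying minds" idea above can be made concrete with a toy sketch. The class names, the two representations, and the random choice are all my own invention -- an illustration of interface-preserving implementation diversity, not a real defense mechanism:]

```python
# Toy sketch of "same face, varying minds": every store presents the same
# interface, but each instance randomly picks one of several internal
# representations, so an exploit tied to one implementation's internals
# would not apply to every instance in a population of machines.
import random

class DictStore:
    """Key-value store backed by a hash table."""
    def __init__(self):
        self._d = {}
    def put(self, k, v):
        self._d[k] = v
    def get(self, k):
        return self._d[k]

class ListStore:
    """Same interface, backed by an association list instead."""
    def __init__(self):
        self._pairs = []
    def put(self, k, v):
        self._pairs = [(a, b) for (a, b) in self._pairs if a != k]
        self._pairs.append((k, v))
    def get(self, k):
        for a, b in self._pairs:
            if a == k:
                return b
        raise KeyError(k)

def make_store():
    """Same "face" for every caller; the "mind" behind it varies."""
    return random.choice([DictStore, ListStore])()

s = make_store()
s.put("x", 1)
assert s.get("x") == 1   # callers cannot tell which implementation they got
```

Real-world relatives of this idea do exist at a much lower level (e.g., address-space layout randomization), though they randomize layout rather than whole implementations.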
An AI that is truly intelligent has to be able to make mistakes in order to learn. So there's some optimal (relative to the problems) level of mistake-making that's not zero. Is that all "anti-fragility" means?

Brent

On 5/23/2015 10:21 AM, Henry Baker wrote:
At 02:06 PM 5/23/2015, meekerdb wrote:
An AI that is truly intelligent has to be able to make mistakes in order to learn. So there's some optimal (relative to the problems) level of mistake making that's not zero. Is that all "anti-fragility" means?
Not exactly. Recall that Taleb started life as a Wall Street trader. He sees the world as a series of bets.

If you are "long", then you win if the market goes up. If you are "short", then you win if the market goes down. If you are perfectly hedged, then you don't win or lose, but you pay transaction costs.

If you own both a put option & a call option on the same security, then you win if the security has lots of volatility, because you gain much more from the one that increases in value than you lose from the one that decreases in value. Note that this is the exact opposite of a "collar", which limits both your upside gains & your downside losses:

https://en.wikipedia.org/wiki/Collar_%28finance%29

Taleb likes this last (anti-collar) position, because it wins when the world is chaotic, which is enough of the time. Even though he might go through long periods of _apparent_ calm, the "long-tailed" nature of particularly bad or particularly good events ensures that such a trader will eventually get back all of his losses & trading costs with enormous wins.

Taleb mentions the "long-tailed" life of a Wall Street securities trader. A trader can make good money for 7 years straight, and lose it all (and more) in 7 seconds. (The Bible didn't understand long-tailed distributions!) Ditto in reverse, except that almost no traders can survive & keep their jobs for 7 bad years to wait for those 7 good seconds.

This is why a put+call position is "antifragile"; it thrives on volatility -- especially long-tailed volatility.
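[The put+call position described above is a "straddle", and its convexity is easy to check numerically. The strike, the toy price model, and the numbers below are my own, not Taleb's: the payoff max(S-K,0) + max(K-S,0) = |S-K| grows with any large move, up or down, so its expected value rises with volatility:]

```python
# Toy check that a put + call at the same strike (a straddle) is a convex
# payoff that benefits from volatility. Strike and price model are
# illustrative assumptions, not a real pricing model.
import random
import statistics

random.seed(0)

K = 100.0  # hypothetical strike price

def straddle(S):
    """Long call + long put at strike K; equals abs(S - K)."""
    return max(S - K, 0.0) + max(K - S, 0.0)

def avg_payoff(sigma, n=100_000):
    """E[straddle(S)] with S ~ Normal(K, sigma) -- a toy price model."""
    return statistics.fmean(straddle(random.gauss(K, sigma)) for _ in range(n))

assert straddle(130.0) == 30.0   # pays on a move up...
assert straddle(70.0) == 30.0    # ...and equally on a move down
# More volatility -> higher expected payoff: the position "thrives on disorder".
assert avg_payoff(sigma=20.0) > avg_payoff(sigma=5.0)
```

(A collar is the mirror image: its payoff is flat outside a band, i.e., concave at the edges, so extra volatility beyond the band buys it nothing.)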
But isn't that because prices can only drop to zero but have no upper bound?

Brent
At 02:40 PM 5/23/2015, meekerdb wrote:
But isn't that because prices can only drop to zero but have no upper bound?
Yes, because _option_ prices can only drop to zero but have no upper bound. Options on securities that allowed negative prices -- e.g., lawsuits and/or going to jail -- would still work.
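[A small numerical check of this asymmetry. The lognormal price model and the parameters are my own toy choices, not from the thread: the underlying price is floored at zero, so a call's payoff max(S-K,0) has a bounded downside but a long, unbounded right tail on the upside:]

```python
# Toy demonstration that an option payoff is floored at zero while its
# upside is unbounded, under a strictly positive (lognormal) price model.
import math
import random

random.seed(0)

K = 100.0  # hypothetical strike

def call_payoff(S):
    return max(S - K, 0.0)

# Lognormal prices: strictly positive, with a long right tail.
prices = [100.0 * math.exp(random.gauss(0.0, 0.5)) for _ in range(100_000)]
payoffs = [call_payoff(S) for S in prices]

assert all(S > 0.0 for S in prices)   # price can approach, but never cross, zero
assert min(payoffs) == 0.0            # downside capped at zero
assert max(payoffs) > 100.0           # upside runs far past the strike
```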
participants (4)

- Henry Baker
- meekerdb
- Mike Stay
- Thane Plambeck