At 02:06 PM 5/23/2015, meekerdb wrote:
An AI that is truly intelligent has to be able to make mistakes in order to learn. So there's some optimal (relative to the problems) level of mistake making that's not zero. Is that all "anti-fragility" means?
Not exactly. Recall that Taleb started life as a Wall Street trader, so he sees the world as a series of bets. If you are "long", you win if the market goes up. If you are "short", you win if the market goes down. If you are perfectly hedged, you neither win nor lose, but you still pay transaction costs.

If you own both a put option & a call option on the same security (a "straddle", if both are at the same strike), you win if the security has lots of volatility, because you gain much more from the option that increases in value than you lose from the one that decreases in value. Note that this is the exact opposite of a "collar", which limits both your upside gains & your downside losses. https://en.wikipedia.org/wiki/Collar_%28finance%29

Taleb likes the put+call (anti-collar) position, because it wins when the world is chaotic, which is enough of the time. A trader holding it might go through long periods of _apparent_ calm, but the long-tailed nature of particularly bad or particularly good events ensures that he will eventually get back all of his losses & trading costs with enormous wins.

Taleb mentions the long-tailed life of a Wall Street securities trader: a trader can make good money for 7 years straight, and lose it all (and more) in 7 seconds. (The Bible didn't understand long-tailed distributions!) Ditto in reverse, except that almost no traders can survive & keep their jobs through 7 bad years while waiting for those 7 good seconds. This is why a put+call position is "antifragile": it thrives on volatility -- especially long-tailed volatility.
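The contrast between the two positions is easy to see in their expiry payoffs. Here's a minimal sketch; the strikes and premiums are illustrative assumptions, not anything Taleb quotes:

```python
# Expiry payoffs: long straddle (put + call, same strike) vs. a collar
# (own the stock, buy a put below, sell a call above).
# All strike/premium numbers are made-up for illustration.

def straddle_payoff(spot, strike=100.0, premium=10.0):
    """Long call + long put at the same strike.
    Payoff grows with the size of the move in EITHER direction:
    it equals |spot - strike| - premium paid."""
    call = max(spot - strike, 0.0)
    put = max(strike - spot, 0.0)
    return call + put - premium

def collar_payoff(spot, entry=100.0, put_strike=90.0, call_strike=110.0):
    """Own the stock bought at `entry`, long a put at put_strike,
    short a call at call_strike. Both tails are clipped.
    Premiums are assumed to cancel (a "zero-cost" collar) for simplicity."""
    stock = spot - entry                      # gain/loss on the shares
    put = max(put_strike - spot, 0.0)         # protection below put_strike
    short_call = -max(spot - call_strike, 0.0)  # upside surrendered above call_strike
    return stock + put + short_call

if __name__ == "__main__":
    # A huge move in either direction pays the straddle holder,
    # while the collar holder is pinned inside a narrow band.
    for spot in (60.0, 100.0, 140.0):
        print(spot, straddle_payoff(spot), collar_payoff(spot))
```

A 40-point crash or rally both net the straddle +30 here, while the calm middle costs it the premium; the collar never does better than +10 or worse than -10 no matter how wild the move. That bounded band is exactly what Taleb means by the collar being the opposite of antifragile.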