Great collection of "creative AI" stories: https://arxiv.org/abs/1803.03453

On Tue, Jun 23, 2020 at 2:04 PM Michael Kleber <michael.kleber@gmail.com> wrote:
A team at Google + Stanford ran into a problem very much like this, with adversarial neural network image transformations.
Popular account: https://techcrunch.com/2018/12/31/this-clever-ai-hid-data-from-its-creators-...
arXiv paper: https://arxiv.org/pdf/1712.02950.pdf
--Michael
On Tue, Jun 23, 2020 at 2:59 PM Andres Valloud <ten@smallinteger.com> wrote:
Hi, suppose you're training a neural network via self play. It looks like it's getting stronger. How do you know the versions that get promoted do not also encode, in themselves, by chance, a collaboration mechanism that helps them win?
That is, how do you know the strongest nets do not also help the winning side win when they play the losing side?
How do you know they are not implementing Ken Thompson's "Trusting Trust" compiler hack?
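One sanity check (my own sketch, not something proposed in the thread): evaluate each promoted candidate not only against the sparring partner it trained with, but also against a pool of independently trained baselines. If the candidate beats its partner far more often than it beats independent opponents, that gap is a red flag for some form of collusion. All names here (`winrate`, `collusion_flag`, the toy game) are hypothetical illustrations:

```python
import random

def play(net_a, net_b, rng):
    # Toy stand-in for a game: each "net" emits a score given shared
    # randomness; the higher score wins, ties broken by coin flip.
    a, b = net_a(rng), net_b(rng)
    if a == b:
        return rng.choice([1, 0])
    return 1 if a > b else 0

def winrate(net_a, net_b, games=1000, seed=0):
    # Fraction of games net_a wins against net_b over a fixed-seed match.
    rng = random.Random(seed)
    return sum(play(net_a, net_b, rng) for _ in range(games)) / games

def collusion_flag(candidate, sparring_partner, baselines, margin=0.15):
    # Compare the candidate's win rate against the partner it trained
    # with to its mean win rate against independent baselines; a large
    # positive gap suggests the pair learned to cooperate.
    vs_partner = winrate(candidate, sparring_partner)
    vs_pool = sum(winrate(candidate, b) for b in baselines) / len(baselines)
    return (vs_partner - vs_pool) > margin

# A "patsy" that always loses makes the candidate look strong in
# self-play, but the gap against honest baselines exposes it.
uniform = lambda rng: rng.random()
patsy = lambda rng: -1.0
print(collusion_flag(uniform, patsy, [uniform]))  # True: suspicious gap
```

This only catches collusion that shows up as a performance gap against outsiders; it says nothing about subtler encodings, which is exactly the force of the question.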
Andres.
_______________________________________________
math-fun mailing list
math-fun@mailman.xmission.com
https://mailman.xmission.com/cgi-bin/mailman/listinfo/math-fun
--
Forewarned is worth an octopus in the bush.
--
Mike Stay - metaweta@gmail.com
http://math.ucr.edu/~mike
https://reperiendi.wordpress.com