Just as well I didn't ask Demis H about DeepMind and chess last Friday!

The word on the chess networks is that Stockfish 8 was very short of hash-table allowance, and had no opening book or endgame tables, whereas AlphaZero effectively had an opening book 'absorbed' into its neural-network weights.

While DeepMind are clearly doing impressive, mould-breaking work, it would be good to see this experiment repeated under conditions approved by Stockfish's management and the ICGA (or TCEC) people.

My expectation is that, at the same tempo, the contest would be much more even with Stockfish differently set up. Nevertheless, at 'Classic Tempo', I'd still expect AlphaZero to win.

Guy
FWIW, I queried David Silver, lead author of the paper referenced in a recent previous thread, about the seemingly small hash size. I asked if they'd tried different hash sizes and, if so, what the results were. He hasn't yet replied.

Jeff

On Fri, Dec 8, 2017 at 2:47 AM, Guy Haworth <g.haworth@reading.ac.uk> wrote:
_______________________________________________
math-fun mailing list
math-fun@mailman.xmission.com
https://mailman.xmission.com/cgi-bin/mailman/listinfo/math-fun
Yeah; I want to be careful and say that I don't expect a significant change in performance if they bump the hash size up; certainly there are diminishing returns. But if they hold up Stockfish as the standard to beat, they should probably run it with the recommended parameters.

I'm amazed; it took them several years to do their Go work, but chess seems to have just fallen out of their AlphaZero work (I can't find previous publications on chess by DeepMind, despite Demis's history with chess). This is clearly disruption; what's next?

-tom

On Fri, Dec 8, 2017 at 10:22 PM, Jeff Caldwell <jeffrey.d.caldwell@gmail.com> wrote:
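[Since the hash-table allowance is the crux of the setup complaint, here is a minimal sketch of why it matters. Everything below is hypothetical and much simpler than Stockfish's actual table, which uses fixed-size buckets, depth/age replacement schemes, and lockless access; the sketch only shows the core idea: the table is a fixed-size cache of search results keyed by a position hash, so too small a budget means useful entries get evicted and positions are re-searched.]

```python
# Minimal transposition-table sketch (hypothetical; not Stockfish's design).
class TranspositionTable:
    def __init__(self, size_mb, entry_bytes=16):
        # The number of slots is fixed by the memory budget.
        self.slots = (size_mb * 1024 * 1024) // entry_bytes
        self.table = {}

    def store(self, zobrist_key, depth, score):
        index = zobrist_key % self.slots
        # Always-replace scheme: a hash collision simply evicts the old entry.
        self.table[index] = (zobrist_key, depth, score)

    def probe(self, zobrist_key, depth):
        entry = self.table.get(zobrist_key % self.slots)
        if entry and entry[0] == zobrist_key and entry[1] >= depth:
            return entry[2]  # cached score from a search at least this deep
        return None

tt = TranspositionTable(size_mb=1)
tt.store(0xDEADBEEF, depth=12, score=35)
print(tt.probe(0xDEADBEEF, depth=10))  # → 35 (deeper cached entry is usable)
print(tt.probe(0xDEADBEEF, depth=14))  # → None (cached entry too shallow)
```

With a smaller `size_mb`, more distinct positions map to the same slot, so the always-replace eviction discards results the search would otherwise reuse — which is the diminishing-returns curve Tom alludes to.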
--
http://cube20.org/ -- http://golly.sf.net/
Arimaa is a game (in my opinion a fairly ugly one, but maybe it's just my opinion that's ugly) invented on purpose to be hard for computers. I see no reason to expect this to be true. Yes, it has a hellish branching factor, but I would fully expect that it would take AlphaZero only a few hours of thinking about it to smash the best human players.

However, Arimaa was already beaten by an ordinary computer program two years ago!

What about contract bridge?

On Sat, Dec 9, 2017 at 1:39 AM, Tomas Rokicki <rokicki@gmail.com> wrote:
I think you'd need to force it to use a standard bidding system, since the rules forbid bids that convey private information to your partner. During the play of the hand, a similar rule prevents the defenders from having private information about the meaning of their card plays. You are allowed to do things differently, but each partnership must describe anything non-standard on a "confession card" given to the other team.

There are also "cryptographic" bids, which I think are forbidden. These convey information from one partner to the other that depends on jointly held state. Even if the method were disclosed to the opponents, the joint information that's unavailable to them is considered unfair.

Rich

------------

Quoting Allan Wechsler <acwacw@gmail.com>:
Looking at the paper, the main device seems to be a neural net evaluating a non-linear scoring function in order to guide a Monte Carlo graph search. With a bit more abstraction, a somewhat general class of graph-search problems seems tractable with the same approach, provided an effective neural net can be built. The paper seems to suggest the neural-net architectures for chess etc. were built by hand. I'd like to believe the relevant folks will want the search for a suitable neural-net configuration to be expressed in terms of a graph search itself, so that their approach is recursively self-applicable. How reasonable is this line of assessment?

On 12/8/17 22:39, Tomas Rokicki wrote:
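[The "neural net guiding a Monte Carlo search" device described above can be sketched as a toy AlphaZero-style loop. Everything here is a stand-in: the "game" is a trivial counting game, the "network" returns uniform priors and a neutral value, and two-player sign alternation is omitted for brevity; only the control flow — the net's (policy, value) output steering tree search via a PUCT-style selection rule — reflects the published method.]

```python
import math

def legal_moves(state):
    # Toy stand-in game: add 1 or 2 until the count reaches 10.
    return [1, 2] if state < 10 else []

def dummy_net(state):
    # Stand-in for the policy/value network: uniform priors, fixed values.
    moves = legal_moves(state)
    if not moves:
        return {}, 1.0  # terminal position: arbitrary terminal value
    return {m: 1.0 / len(moves) for m in moves}, 0.0

class Node:
    def __init__(self, prior):
        self.prior, self.visits, self.value_sum = prior, 0, 0.0
        self.children = {}
    def q(self):
        return self.value_sum / self.visits if self.visits else 0.0

def select_child(node, c_puct=1.5):
    # PUCT rule: exploit high mean value Q, explore high-prior, rarely
    # visited moves. This is where the net's policy output steers the search.
    total = math.sqrt(node.visits)
    return max(node.children.items(),
               key=lambda mc: mc[1].q()
                              + c_puct * mc[1].prior * total / (1 + mc[1].visits))

def simulate(root_state, root, net):
    # One MCTS iteration: select to a leaf, expand with net priors,
    # back up the net's value estimate along the path.
    state, node, path = root_state, root, []
    while node.children:
        move, node = select_child(node)
        state += move
        path.append(node)
    priors, value = net(state)
    for m, p in priors.items():
        node.children[m] = Node(p)
    for n in [root] + path:
        n.visits += 1
        n.value_sum += value

root = Node(prior=1.0)
for _ in range(100):
    simulate(0, root, dummy_net)
# Play the most-visited move, as AlphaZero does at the root.
best = max(root.children.items(), key=lambda mc: mc[1].visits)[0]
```

The recursive self-application Andres suggests would amount to making `dummy_net`'s own architecture a state in an outer search of the same shape; nothing in this control flow forbids that, though the paper does not claim it.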
Thanks for your suggestions. We plan to re-evaluate in the full version of our paper; however, our preliminary experiments suggest that AlphaZero still wins by a large margin under the suggested settings.

Best wishes,
Dave
participants (6)
- Allan Wechsler
- Andres Valloud
- Guy Haworth
- Jeff Caldwell
- rcs@xmission.com
- Tomas Rokicki