I know essentially zero about the current AI/ML hype. I was wondering whether there is a trivial example of AI/ML with a very small number of "connections" (or whatever they call them) that is capable of learning some trivial task -- e.g., an AND gate operating on 0/1 input data.

[I suppose I could try some of the current open-source ML software on a trivial example like the AND gate to see how difficult the problem is. Perhaps someone here has already done such a "hello world" example? Also, has anyone tried such ML software on the various *integer sequences*? Not that the ML software would "understand" the sequences, but it might give some idea of the relative "complexity" of the different sequences.]

I'm curious whether this process can be seen, from some perspective, as a kind of "scientific method": some model (consisting of perhaps 1-4 real or rational numbers) is a "hypothesis" that is "tested" against some training data, which then triggers some sort of "model refinement". If AI/ML cannot be put (or forced) into this form, then perhaps it is time to come up with a replacement for the "scientific method" that CAN be put into such an AI/ML form. My point is that if ML really has come far enough to provide a more concrete rationale behind "scientific progress", then perhaps we need to revise what the phrase "scientific progress" should mean.
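For what it's worth, the AND-gate "hello world" asked about above really does exist and needs no ML library at all: a single perceptron with three tunable numbers (two connection weights and a bias), trained with the classic perceptron learning rule. The sketch below is plain Python and maps directly onto the hypothesis/test/refinement framing: the current (weights, bias) triple is the "hypothesis", each training example "tests" it, and each wrong prediction triggers a small "model refinement". The specific learning rate and epoch count here are arbitrary choices, not anything canonical.

```python
# AND gate training data: ((input1, input2), target)
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [0.0, 0.0]  # the two "connections" (weights)
b = 0.0         # bias
lr = 0.1        # learning rate (arbitrary small step size)

def predict(x):
    # Step activation: fire (1) if the weighted sum exceeds 0.
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Perceptron learning rule: on each mistake, nudge the weights
# toward the correct answer. AND is linearly separable, so this
# is guaranteed to converge.
for epoch in range(100):
    errors = 0
    for x, target in data:
        err = target - predict(x)
        if err:
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
            errors += 1
    if errors == 0:  # hypothesis survives all tests: stop refining
        break

print([predict(x) for x, _ in data])  # -> [0, 0, 0, 1]
```

This converges in a handful of epochs. The famous caveat is XOR: it is not linearly separable, so no setting of these three numbers can learn it, which is exactly why multi-layer networks (and many more "connections") are needed for anything beyond such trivial gates.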