Re: [math-fun] Political spectral analysis
On 7/4/2018 1:44 PM, Dan Asimov wrote:

Perceptrons were the 1957 invention of Frank Rosenblatt (not Minsky).

—Dan

----- Check out Minsky's (1950's?) work on "Perceptrons". He was trying to show how weak they were at certain decision tasks. -----
On Wed, Jul 4, 2018 at 4:57 PM, Brent Meeker <meekerdb@verizon.net> wrote:

True, but it was Minsky who showed that they could only produce linear discriminants, and so could not perform general classification as initially thought. Minsky actually set back the development of AI because he discouraged the study of neural nets made of perceptrons, which do have general computational power.

Brent
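A minimal sketch of the linear-separability limitation mentioned above, using XOR as the standard counterexample; the numpy threshold unit and hand-wired two-layer net below are generic textbook constructions offered only as illustration, not anything specific to this thread:

import numpy as np

# XOR is the classic non-linearly-separable problem: no single hyperplane
# separates {(0,1), (1,0)} from {(0,0), (1,1)}.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0])

def train_perceptron(X, y, epochs=100, lr=0.1):
    """Rosenblatt-style single threshold unit: output 1 iff w.x + b > 0."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            pred = int(w @ xi + b > 0)
            w += lr * (yi - pred) * xi   # classic perceptron update
            b += lr * (yi - pred)
    return w, b

w, b = train_perceptron(X, y)
print("single-layer perceptron on XOR:", (X @ w + b > 0).astype(int), "target:", y)
# At least one of the four points is always misclassified: no linear
# discriminant gets XOR right, which is the Minsky/Papert point.

def two_layer_xor(x):
    """Hand-wired two-layer net: XOR(a,b) = OR(a,b) AND NOT AND(a,b)."""
    h_or = int(x[0] + x[1] - 0.5 > 0)
    h_and = int(x[0] + x[1] - 1.5 > 0)
    return int(h_or - 2 * h_and - 0.5 > 0)

print("two-layer net on XOR:", [two_layer_xor(xi) for xi in X])

The hidden layer is what restores general computational power: once intermediate units are allowed, the network is no longer limited to a single linear discriminant.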
On Wed, Jul 4, 2018 at 4:45 PM Allan Wechsler <acwacw@gmail.com> wrote:

This is actually an important insight into Minsky's view of AI. He was greatly skeptical of any approach in which the burden of intelligence was carried by a mass of similar elements with a multitude of parameters that an automatic training process adjusts. Many genetic-algorithm approaches, as well as pretty much all modern neural-network work, were targets of this skepticism.

His basic criticism, I think, was an esthetic one rather than a mathematical one. It was a question of the ultimate goal of AI research. Minsky hoped that the endeavor would shed light on the nature of intelligence: merely replicating its function wasn't his primary object. A huge neural net that could play master-level Go, for example, would be vaguely intriguing to him, but not compelling unless somebody went in and analyzed just what the network was doing.

Another way to say this is that Minsky was searching for objects of interest situated in a hierarchy between intelligent agents and neurons: how these intermediate objects behaved, and how their behavior could combine to produce intelligent behavior. He wanted to be able to assign coherent function to these subunits. The dramatic success of neural networks in the last decade is an enemy of Minsky's research program, because it distracts attention away from questions of how intelligence actually works.
On 7/4/2018 4:59 PM, Thane Plambeck wrote:

What if messy large networks are the whole story of how intelligence works? The Minsky reductionist program seems like a dead end right now.
On Jul 4, 2018, at 8:27 PM, Brent Meeker <meekerdb@verizon.net> wrote:

Minsky was assuming that he knew what intelligence really was, apparently by introspection, and he thought it couldn't be just competence. On the other hand, I'll bet he didn't know where his introspective ideas came from.

Brent
On 2018-07-04 17:57, Tom Knight wrote:

Allan is exactly right about Minsky's views. That view is, I think, also shared by Chomsky. Both of them thought that symbolic representations were going to be required to move beyond the reactive competence level of brain function. Chomsky holds the view that this is the critical neural development of human brains. Winston is trying to push this agenda with his work on story understanding.

I believe Leslie Valiant has made a critical observation in this area. He showed that ensembles of neurons can act as symbols, and that logical operations can be performed on such ensembles, even though the distribution of the nodes and wiring is initially completely random.

Memorization and Association on a Realistic Neural Model
Leslie G. Valiant, valiant@deas.harvard.edu
Neural Computation 17, 527–555 (2005)
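For readers curious what "ensembles acting as symbols over random wiring" can look like concretely, here is a toy numpy sketch. It is not Valiant's neuroidal model (his paper specifies the neuron model, parameters, and guarantees precisely); it only illustrates the flavor: items are stored as random subsets of neurons in a fixed random directed graph, and a new item associated with two stored ones is formed by the neurons that receive enough edges from both parent ensembles.

import numpy as np

rng = np.random.default_rng(0)

n = 3000             # neurons
p = 0.02             # probability of each random directed edge
ensemble_size = 150  # neurons per stored item ("symbol")
k = 5                # incoming edges needed from each parent ensemble to fire

# Fixed random wiring, chosen before any items are stored.
adj = rng.random((n, n)) < p        # adj[i, j]: edge from neuron i to neuron j

def random_symbol():
    """An item is just a random set of neurons (an 'ensemble')."""
    return rng.choice(n, size=ensemble_size, replace=False)

def join(a, b):
    """Neurons receiving >= k edges from ensemble a AND >= k from ensemble b:
    a crude stand-in for forming a new item associated with two stored ones."""
    from_a = adj[a].sum(axis=0)     # per-neuron count of incoming edges from a
    from_b = adj[b].sum(axis=0)
    return np.where((from_a >= k) & (from_b >= k))[0]

A, B = random_symbol(), random_symbol()
C = join(A, B)
D = random_symbol()                 # unrelated control item

print("size of joined ensemble:", C.size)
print("overlap with an unrelated item (chance level):", np.intersect1d(C, D).size)

With these (arbitrary) parameters the joined ensemble comes out roughly the size of a stored item and overlaps an unrelated item only at chance level, so it can serve as a distinct, addressable item even though the wiring was never designed.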
"Don't hand me a dripping brain." --MLM --rwg On 2018-07-04 17:57, Tom Knight wrote:
participants (6)
- Allan Wechsler
- Brent Meeker
- Dan Asimov
- rwg
- Thane Plambeck
- Tom Knight