Re: [math-fun] Is algebra necessary?
At 10:03 AM 7/30/2012, you wrote:
For what it is worth, I think it would be absolutely absurd to allow getting a CS college degree without knowing calculus.
i think this stuff http://www.amazon.com/Concrete-Mathematics-Foundation-Computer-Science/dp/02... is way more important and useful than calculus.

thanks

---
co-chair http://ocjug.org/
I'm a great fan of this wonderful book (Graham, Knuth, & Patashnik) myself; but let's just take a look at the publisher's blurb for a minute.

<< "More concretely," the authors explain, "it is the controlled manipulation of mathematical formulas, using a collection of techniques for solving problems." >>

I think that's going to involve heavily what's usually called "algebra".

<< Major topics include:
* Sums
* Recurrences
* Integer functions
* Elementary number theory
* Binomial coefficients
* Generating functions
* Discrete probability
* Asymptotic methods >>

You will surely make little sense of the final topic --- as well as much of the remainder in practice --- without a good grasp of "calculus".

Fred Lunnon

On 8/2/12, Ray Tayek <rtayek@ca.rr.com> wrote:
At 10:03 AM 7/30/2012, you wrote:
For what it is worth, I think it would be absolutely absurd to allow getting a CS college degree without knowing calculus.
i think this stuff http://www.amazon.com/Concrete-Mathematics-Foundation-Computer-Science/dp/02...
is way more important and useful than calculus.
thanks
--- co-chair http://ocjug.org/
On Aug 1, 2012, at 11:16 PM, Ray Tayek wrote:
i think this stuff http://www.amazon.com/Concrete-Mathematics-Foundation-Computer-Science/dp/02... is way more important and useful than calculus.
I agree. Not only is the content well chosen, it's extremely well written.

An applied mathematician friend of mine at Harvard, who works mostly with differential equations, had this book on his desk when I visited. When I asked what a book on discrete math was doing in his office, he told me he was using it as a model for his own textbook writing effort.

-Veit
I think any serious computer scientist should know at least the basics of mathematics that isn't squarely discrete, since after all, computers are often used to compute by approximation all kinds of continuous things. So there's no need for there to be an either/or kind of thing. (Quite a few items in Hakmem would make no sense to someone who's never had calculus.)

--Dan

On 2012-08-02, at 6:17 AM, Veit Elser wrote:

<< On Aug 1, 2012, at 11:16 PM, Ray Tayek wrote:

<< i think this stuff http://www.amazon.com/Concrete-Mathematics-Foundation-Computer-Science/dp/02... is way more important and useful than calculus.

I agree. Not only is the content well chosen, it's extremely well written. An applied mathematician friend of mine at Harvard, who works mostly with differential equations, had this book on his desk when I visited. When I asked what a book on discrete math was doing in his office, he told me he was using it as a model for his own textbook writing effort.
Also, the concept of a 'variable' is introduced in algebra, so pretty much all of modern computer science (post-Turing, that is*) relies on algebra.

To continue this incessant attack on the education of maths, the way that discrete maths and algorithms are taught leaves a lot to be desired. Basically, it consists almost entirely of following basic algorithms for finding optimal routes in weighted graphs.

* Here, 'post-' is the prefix meaning 'after', to avoid confusion with the 'Post-Turing machine' named after Emil Post and Alan Turing. This is, of course, evident from my use of a lower-case 'p' in 'post'.

Speaking of Alan Turing, there's an online petition to encourage the Government to grant an official pardon to him. I'm one of those 35K signatures; we need to raise that total to 10^5 before November for it to reach the House of Commons:

http://epetitions.direct.gov.uk/petitions/23526

Sincerely,

Adam P. Goucher
Actually, Church's lambda calculus (more-or-less the basis of Lisp) tells you much more about the concept of a 'variable' than does algebra. The lambda calculus is essentially the theory of variable substitution.

The main axioms:

Alpha. The name of a variable is irrelevant, so long as it doesn't clash with other variables. So you can "rename" a variable to your heart's content.

Beta. You can substitute the value of a variable in a way that doesn't interfere with the binding of existing variables. This axiom is tricky, because it may require judicious use of axiom alpha in order to avoid name "capture".

Eta (I think this is the name?). Tail recursion. You can replace recursion by iteration in certain contexts.

The lambda calculus is a far more beautiful theory of computation than Turing Machines. It's an historical accident that the theory of computation is centered around TM's rather than lambda's.

BTW, who was the fellow at MIT in the early 1970's who used Lisp to teach computation theory? He had a very elegant proof of undecidability using a Lisp interpreter.

At 07:30 AM 8/2/2012, Adam P. Goucher wrote:
Also, the concept of a 'variable' is introduced in algebra, so pretty much all of modern computer science (post-Turing, that is*) relies on algebra.
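To make the alpha and beta rules above concrete, here is a minimal sketch of capture-avoiding substitution and one step of beta-reduction in Python. The term encoding and the helper names (fresh, free_vars, subst, beta) are purely illustrative; they are not taken from any of the Lisp systems or textbooks mentioned in this thread.

import itertools

# Terms: ('var', name) | ('lam', name, body) | ('app', fun, arg)

_counter = itertools.count()

def fresh(name):
    # Alpha: manufacture a name that cannot clash with existing variables.
    return f"{name}_{next(_counter)}"

def free_vars(t):
    tag = t[0]
    if tag == 'var':
        return {t[1]}
    if tag == 'lam':
        return free_vars(t[2]) - {t[1]}
    return free_vars(t[1]) | free_vars(t[2])

def subst(t, x, value):
    # Beta's workhorse: replace free occurrences of x in t by value,
    # alpha-renaming bound variables to avoid capturing free variables of value.
    tag = t[0]
    if tag == 'var':
        return value if t[1] == x else t
    if tag == 'app':
        return ('app', subst(t[1], x, value), subst(t[2], x, value))
    y, body = t[1], t[2]
    if y == x:
        return t                      # x is shadowed here; nothing to do
    if y in free_vars(value):
        z = fresh(y)                  # alpha-rename to avoid capture
        body = subst(body, y, ('var', z))
        y = z
    return ('lam', y, subst(body, x, value))

def beta(t):
    # One beta step: (lambda x. body) arg  ->  body[x := arg]
    if t[0] == 'app' and t[1][0] == 'lam':
        _, x, body = t[1]
        return subst(body, x, t[2])
    return t

# (lambda x. lambda y. x) applied to a free y must not capture that y:
term = ('app', ('lam', 'x', ('lam', 'y', ('var', 'x'))), ('var', 'y'))
print(beta(term))   # ('lam', 'y_0', ('var', 'y')) -- the free y survives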
On Thu, Aug 2, 2012 at 4:44 PM, Henry Baker <hbaker1@pipeline.com> wrote:
Eta (I think this is the name?). Tail recursion. You can replace recursion by iteration in certain contexts.
Eta is extensionality, i.e. any term T of type X is eta-equivalent to (lambda x.T(x)) when x is not free in T and x has type X.

--
Mike Stay - metaweta@gmail.com
http://www.cs.auckland.ac.nz/~mike
http://reperiendi.wordpress.com
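As a rough illustration of extensionality in an untyped, eager setting (Python; the choice of len is arbitrary, and this only checks behaviour on a few concrete inputs rather than proving the equivalence):

f = len
g = lambda x: f(x)   # eta-expansion of f: a lambda that just forwards its argument
assert all(g(s) == f(s) for s in ("", "abc", [1, 2, 3], range(7)))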
On 8/2/2012 4:44 PM, Henry Baker wrote:
Actually, Church's lambda calculus (more-or-less the basis of Lisp) tells you much more about the concept of a 'variable' than does algebra. The lambda calculus is essentially the theory of variable substitution.
The main axioms:
Alpha. The name of a variable is irrelevant, so long as it doesn't clash with other variables. So you can "rename" a variable to your heart's content.
Beta. You can substitute the value of a variable in a way that doesn't interfere with the binding of existing variables. This axiom is tricky, because it may require judicious use of axiom alpha in order to avoid name "capture".
Eta (I think this is the name?). Tail recursion. You can replace recursion by iteration in certain contexts.
The lambda calculus is a far more beautiful theory of computation than Turing Machines. It's an historical accident that the theory of computation is centered around TM's rather than lambda's.
BTW, who was the fellow at MIT in the early 1970's who used Lisp to teach computation theory? He had a very elegant proof of undecidability using a Lisp interpreter.
John McCarthy? ... who invented LISP and who died recently.

Brent Meeker
At 07:30 AM 8/2/2012, Adam P. Goucher wrote:
Also, the concept of a 'variable' is introduced in algebra, so pretty much all of modern computer science (post-Turing, that is*) relies on algebra.
On Thu, Aug 2, 2012 at 4:44 PM, Henry Baker <hbaker1@pipeline.com> wrote:
BTW, who was the fellow at MIT in the early 1970's who used Lisp to teach computation theory? He had a very elegant proof of undecidability using a Lisp interpreter.
You probably mean Greg Chaitin.

http://www.cs.auckland.ac.nz/~chaitin/lisp.html

--
Mike Stay - metaweta@gmail.com
http://www.cs.auckland.ac.nz/~mike
http://reperiendi.wordpress.com
The lambda calculus is a far more beautiful theory of computation than Turing Machines.
That's a very subjective statement. I don't particularly like the fact that lambda calculus involves giving variables arbitrary names. I much prefer combinatory logic, which only has two primitive symbols, {S = ^xyz.xz(yz), K = ^xy.x}.

There are other two-combinator bases as well, where the complexity is distributed differently. Taking this to the extreme, where one of the combinators is the identity function, we have {P = ^vwxyz.wx(wzy), I = ^x.x}.
It's an historical accident that the theory of computation is centered around TM's rather than lambda's.
Or a matter of practicality? I don't see how one could easily build a physical machine to directly run lambda calculus, whereas a Turing machine is very mechanical in its definition.

Sincerely,

Adam P. Goucher
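For what it's worth, the S and K combinators mentioned above are easy to play with as curried functions; a minimal sketch in Python, purely illustrative and untyped, showing the classic fact that S K K behaves like the identity:

S = lambda x: lambda y: lambda z: x(z)(y(z))   # S x y z = x z (y z)
K = lambda x: lambda y: x                      # K x y = x

# S K K x = K x (K x) = x, so S K K acts as the identity combinator:
assert S(K)(K)(42) == 42
assert S(K)(K)("anything") == "anything"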
Have you actually used a tape machine? I've used 7040's & DECtape (pdp-8)'s. Calculate the moment of inertia of a large tape reel some time. (e.g., take a look at the film reels used in IMAX theaters.)

http://en.wikipedia.org/wiki/IMAX

Actually, combinators are very roughly stack machines & can be quite practical. I've written about this ("Lively Linear Lisp").

The original Emacs was a set of TECO macros for the excellent TECO system at MIT (much, much better than DEC's TECO). TECO was essentially a 2-stack implementation, where the "cursor" was simultaneously the top of the left stack and the top of the right stack. I'm still working on trying to perfect a really elegant 2-stack set of combinators.

At 02:12 AM 8/3/2012, Adam P. Goucher wrote:
Or a matter of practicality? I don't see how one could easily build a physical machine to directly run lambda calculus, whereas a Turing machine is very mechanical in its definition.
On Fri, Aug 3, 2012 at 2:12 AM, Adam P. Goucher <apgoucher@gmx.com> wrote:
The lambda calculus is a far more beautiful theory of computation than Turing Machines.
That's a very subjective statement. I don't particularly like the fact that lambda calculus involves giving variables arbitrary names. I much prefer combinatory logic, which only has two primitive symbols, {S = ^xyz.xz(yz), K = ^xy.x}. There are other two-combinator bases as well, where the complexity is distributed differently. Taking this to the extreme, where one of the combinators is the identity function, we have {P = ^vwxyz.wx(wzy), I = ^x.x}.
There's a simple one-combinator basis:

U = ^f.fSK

I = UU
K = U(U(UU))
S = U(U(U(UU)))

--
Mike Stay - metaweta@gmail.com
http://www.cs.auckland.ac.nz/~mike
http://reperiendi.wordpress.com
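These identities can be checked mechanically by encoding the combinators as curried functions; a small sketch in Python (illustrative only, and the names I, K2, S2 are mine):

S = lambda x: lambda y: lambda z: x(z)(y(z))
K = lambda x: lambda y: x
U = lambda f: f(S)(K)          # U = ^f.fSK

I  = U(U)                      # I = UU
K2 = U(U(U(U)))                # K = U(U(UU))
S2 = U(U(U(U(U))))             # S = U(U(U(UU)))

assert I(42) == 42
assert K2(1)(2) == 1
assert S2(K)(K)(7) == 7        # S K K x = x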
TECO was essentially a 2-stack implementation, where the "cursor" was simultaneously the top of the left stack and the top of the right stack.
Hmm, we can thus think of the two stacks as being:

C, L1, L2, L3, L4, L5, L6, ..., Ln

and

C, R1, R2, R3, R4, R5, R6, ..., Rm.

We can then proceed to identify the 'cursor' in each stack to obtain the following:

Ln, ..., L3, L2, L1, C, R1, R2, R3, ..., Rm

This seems to be equivalent to an ETM (extensible Turing machine), where the machine can insert and remove cells in the tape as well as doing the ordinary operations.
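This two-stack/cursor picture is essentially the "gap buffer" that text editors still use; a small illustrative sketch in Python (the class and method names are mine, not TECO's):

class TwoStackBuffer:
    """Cursor buffer as two stacks: 'left' holds the text before the cursor
    (top = character just left of the cursor), 'right' holds the text after it."""

    def __init__(self, text=""):
        self.left = []                        # Ln, ..., L2, L1 with L1 on top
        self.right = list(reversed(text))     # R1 on top

    def insert(self, ch):                     # "extensible tape": grow at the cursor
        self.left.append(ch)

    def delete(self):                         # remove the cell to the right of the cursor
        if self.right:
            self.right.pop()

    def move_right(self):                     # pop one stack, push the other
        if self.right:
            self.left.append(self.right.pop())

    def move_left(self):
        if self.left:
            self.right.append(self.left.pop())

    def __str__(self):
        return "".join(self.left) + "|" + "".join(reversed(self.right))

buf = TwoStackBuffer("tape")
buf.move_right(); buf.move_right()
buf.insert("-")
print(buf)    # ta-|pe

Inserting and deleting at the cursor are O(1) pushes and pops, which is exactly the "insert and remove cells in the tape" behaviour described above.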
There's a simple one-combinator basis
U = ^f.fSK
This is the Iota combinator, isn't it? There is a reason why it's not as elegant as the S/K basis.

Firstly, we consider expressions to be trees, and beta-reduction is the process of converting one tree to another. Initially, all leaf nodes are U. For example, the S combinator:

U(U(U(UU)))

Applying beta-reduction to the first term gives us:

U(U(UU))SK

Suddenly, our tree has *three* different types of leaf nodes. An 'interpreter' for the Iota language (expressions using just U) must therefore 'understand' the operators S and K as well as Iota. By Occam's razor, we may as well just use the S/K language.

Sincerely,

Adam P. Goucher
At 08:42 AM 8/3/2012, Adam P. Goucher wrote:
TECO was essentially a 2-stack implementation, where the "cursor" was simultaneously the top of the left stack and the top of the right stack.
Hmm, we can thus think of the two stacks as being:
C, L1, L2, L3, L4, L5, L6, ..., Ln
and
C, R1, R2, R3, R4, R5, R6, ..., Rm.
We can then proceed to identify the 'cursor' in each stack to obtain the following:
Ln, ..., L3, L2, L1, C, R1, R2, R3, ..., Rm
This seems to be equivalent to an ETM (extensible Turing machine), where the machine can insert and remove cells in the tape as well as doing the ordinary operations.
Yes, indeed. Although my version would also allow cyclic permutations on the top portions of each stack.

I conceive of a machine where the "cache" has the ability to shift items up/down in one step. This can be accomplished either for real, or "virtually", where the cache is addressed differently.

Think of a city (Manhattan?) with 2 one-way streets (Madison & Fifth?) & cross-streets. Any traffic on its way back to memory (goes downtown) can be easily shunted via the cross streets into the uptown traffic. The ALU reads directly from this cache & stores directly back to this cache; there is no direct path to memory. (The ALU is off to the side (or above?) these streets & can read & write to/from these streets.) The ALU is at the center of the "collision" between these two stacks.

(I seem to recall a paper from about 1984 which sketched out a stack machine similar to this idea, but I haven't been able to remember the author or the location of the paper.)

Even sophisticated operations can be implemented on the "top" of this stack -- e.g., a 32 or 64 element FFT that reads & writes the top 32 or 64 elements of the "stack". (I.e., a 32- or 64-element FFT is a single combinator.)

The cache management algorithm is a "stack" algorithm, which is pretty darn good.
participants (8)

- Adam P. Goucher
- Dan Asimov
- Fred Lunnon
- Henry Baker
- meekerdb
- Mike Stay
- Ray Tayek
- Veit Elser