A few months ago the problem came up of memoizing a huge data tree while keeping the memo table small. I was referred to the following enlightening article (warning: it's all phrased in terms of Haskell -- which I think is a wonderful language -- but that's another discussion). This has also been discussed in a good deal of the AI literature; I'll see if I can find some other references.

http://okmij.org/ftp/Haskell/index.html#memo-off

Victor

On Sat, Mar 9, 2013 at 9:52 AM, Henry Baker <hbaker1@pipeline.com> wrote:
The term "memoization" in computer science typically refers to the somewhat magical ability of a simple RAM lookup table of previous function results to transform a slow recursive implementation of a function into a fast implementation of the function.
Thus, the following extremely slow Lisp implementation of Fibonacci is suddenly transformed into a high-performance implementation simply by memoizing it, i.e., remembering the results of previous calls to fib so they don't have to be laboriously recomputed.
;; Naive doubly recursive Fibonacci: the number of recursive calls
;; grows exponentially with n, so this is extremely slow.
(defun fib (n)
  (if (< n 2)
      n
      (+ (fib (- n 1)) (fib (- n 2)))))
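For concreteness, here is a minimal sketch of a memoized version, using an unbounded hash table; the name fib-memo is just illustrative, not from any particular library:

;; Memoized Fibonacci: a hash table caches each result so that
;; every value of fib is computed at most once.
(let ((memo (make-hash-table)))
  (defun fib-memo (n)
    (if (< n 2)
        n
        (or (gethash n memo)
            (setf (gethash n memo)
                  (+ (fib-memo (- n 1)) (fib-memo (- n 2))))))))

With the table in place the running time drops from exponential to linear in n, since each recursive call either hits the cache or fills in a new entry exactly once.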
We now note that it doesn't harm performance "too" much if we sometimes forget a particular result and have to recompute it; this is somewhat akin to a "page fault" in a virtual memory system.
In fact, for many applications, a finite, fixed-size "memo table" is often good enough for government work.
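As a sketch of what a fixed-size memo table might look like -- this is just one possible scheme, a direct-mapped cache in which a hash collision evicts the old entry, and the names fib-fixed and *memo-size* are mine:

;; Fixed-size memo table: an array of (key . value) pairs indexed
;; by hashing the key.  A collision simply overwrites the old
;; entry; a "forgotten" result is transparently recomputed.
(defparameter *memo-size* 1024)
(defparameter *memo* (make-array *memo-size* :initial-element nil))

(defun fib-fixed (n)
  (if (< n 2)
      n
      (let* ((slot (mod (sxhash n) *memo-size*))
             (entry (aref *memo* slot)))
        (if (and entry (eql (car entry) n))
            (cdr entry)                 ; cache hit: reuse the result
            (let ((result (+ (fib-fixed (- n 1))
                             (fib-fixed (- n 2)))))
              (setf (aref *memo* slot) (cons n result)) ; may evict
              result)))))

Forgetting an entry never produces a wrong answer; it only costs a recomputation, which is exactly the "page fault" behavior described above.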
Question: does anyone here have an academic _reference_ for this last statement that a fixed-size memo table is good enough?
I seem to recall reading a short paper (in English) from the 1960's or 1970's which made precisely this point about fixed-size memo tables.
Sadly, Google hasn't been able to find this paper for me, and the Elsevier & other paywalls won't allow me to search some of these old journals.