The term "memoization" in computer science typically refers to the somewhat magical ability of a simple RAM lookup table of previous function results to transform a slow recursive implementation of a function into a fast implementation of the function. Thus, the following extremely slow Lisp implementation of Fibonacci is suddenly transformed into a high performance implementation by the simple usage of memoization which remembers the results of previous calls to fib so they don't have to be laboriously recomputed. (defun fib (n) (if (< n 2) n (+ (fib (- n 1)) (fib (- n 2))))) We now note that it doesn't harm performance "too" much if we sometimes forget a particular result and have to recompute it; this is somewhat akin to a "page fault" in a virtual memory system. In fact, for many applications, a finite, fixed-size "memo table" is often good enough for government work. Question: does anyone here have an academic _reference_ for this last statement that a fixed-size memo table is good enough? I seem to recall reading a short paper (in English) from the 1960's or 1970's which made precisely this point about fixed-size memo tables. Sadly, Google hasn't been able to find this paper for me, and the Elsevier & other paywalls won't allow me to search some of these old journals.