"Iterators" -- at least as currently envisioned -- is one of the worst ideas to ever hit computer science. The biggest problem with iterators is that they are _stateful_, which means that formal reasoning about them is at least 10x harder than reasoning about _stateless_ constructs like recursion. The earliest use of iterators that I can find was in Doug Ross's "AED-0" (Algol Extended for Design) language. Briefly, AED's "strings" (AED name for abstract sequences) allowed for looping/iterating over various types of sequences in an sequence-independent way. AED's iterators used callback mechanisms to allow what was then known as "co-routining", but is now simply called "call-backs". AED's implementation of strings was quite similar to C++, in that each type of string had a vector of subroutine addresses to string operations. Recursion, on the other hand, keeps all of its "state" in its arguments, even when that state is in the form of an abstract data type. If your compiler understands _tail recursion_, it can trivially convert simple tail-recursive loops into traditional C/Fortran loops, so there is NO overhead for expressing yourself recursively instead of iteratively. The major problem with doing tail recursions these days is the exceedingly bad idea of forcing the compiler to keep the useless stack frames around to support a very heavyweight exception mechanism in languages like C and even Common Lisp. (This heavyweight exception mechanism for C is now the target of hacking, because the gcc exception handler has a _Turing-complete_ programming language for walking the stack ! Hackers can easily take over this mechanism to do dastardly deeds.) Iterators can be trivially emulated using function _closures_. Since function closures are far more interesting than iterators, it was silly to waste all of that research horsepower for 25-30 years on iterators, instead of going directly to the big hammer: function closures. My entire 50-year career in computer science has been spent scratching my head about the silliness involved in the design of programming languages. Every computer language starts out as "simple", but is forced to add certain features in order to support reasonable programming. Fortran didn't think recursion was necessary. Oops! C didn't implement even "downward" closures, because the Bell Labs folks thought that Algol & PL/I's lexical scoping were too complicated. Oops! C/C++ didn't implement automatic garbage collection because they thought that the programmer knew better. We now know that most programmers haven't a clue about how hard memory management is, and we've now spent in excess of $1 trillion on memory management-related bugs, including the entire computer security field. Fortran, C/C++, etc., didn't bother with array bounds checking. Perhaps 90% of software bugs have to do with array bounds violations. (I may have been the only person to actually utilize PL/I's array bounds checking code; it found innumerable bugs.) Oops! The more you study programming languages, the more you appreciate the power and elegance of the lambda-calculus. 
The more you study programming languages, the more you appreciate the power and elegance of the lambda calculus. Entire decades of computer science research have gone into understanding the power inherent in this extremely elegant system:

* lexical scoping of variables
* function closures
* partial application of functions
* lazy evaluation of argument expressions
* arbitrary recursion via the Y combinator
* tail recursion via the eta rule

About the only feature of the lambda calculus not normally found in modern computer languages is lazy evaluation of argument expressions. While this feature can be partially emulated using function closures, it can still provide exceedingly elegant implementations of algorithms like "spigot" algorithms, in which the print function decides how many digits of precision to output and "sucks" those digits out of the rest of the program.
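That consumer-driven style can indeed be sketched with closures in a few lines. Here is a minimal illustration in Common Lisp, using a lazy-stream protocol of my own invention; DIGITS-OF is just the simplest possible producer (decimal digits of a rational in [0,1)), not a full spigot algorithm for, say, pi.

;; Lazy streams via thunks: the tail of a stream is a closure that is
;; only called when the consumer actually asks for more.
(defun lazy-cons (head tail-thunk) (cons head tail-thunk))
(defun lazy-head (s) (car s))
(defun lazy-tail (s) (funcall (cdr s)))   ; forcing the tail

(defun digits-of (num den)
  "Lazy stream of the decimal digits of NUM/DEN, assuming 0 <= NUM/DEN < 1."
  (multiple-value-bind (digit rest) (floor (* 10 num) den)
    (lazy-cons digit (lambda () (digits-of rest den)))))

(defun print-digits (stream n)
  "The consumer decides the precision and sucks N digits from the producer."
  (loop repeat n
        for s = stream then (lazy-tail s)
        do (princ (lazy-head s))))

;; (print-digits (digits-of 1 7) 12)   prints   142857142857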
At 07:37 AM 6/10/2014, Fred Lunnon wrote:

I scratched my head over this, but now I think these "iterators" must be the same thing as loopless generators; see http://en.wikipedia.org/wiki/Loopless_algorithm

This topic has surfaced previously on math-fun, with respect to various combinatorial problems; two insights have arisen during my investigations.
Firstly, that a decent object-oriented / ADT programming environment greatly facilitates the implementation and usability of such tools.
Secondly, recursion -- hoary standby of so many programming courses -- and looplessness seem mutually incompatible. Converting one type of algorithm into the other is a process which somehow should be mechanical, but in practice always proves next to impossible in nontrivial situations.
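One direction of that conversion can at least be sketched mechanically, by reifying the call stack. Below is an illustrative Common Lisp version (a sketch of my own, not taken from the literature): a recursive leaf walk converted into a generator whose state is an explicit stack inside a closure. Note that even this is not genuinely loopless -- a single step may pop several exhausted nodes, so the O(1) bound holds only in the amortized sense, which hints at where the real difficulty lies.

;; The recursive version: control state lives on the call stack, so
;; the walk cannot be paused between leaves.
(defun walk-leaves (tree visit)
  (cond ((null tree))                     ; empty subtree: nothing to do
        ((consp tree)
         (walk-leaves (car tree) visit)
         (walk-leaves (cdr tree) visit))
        (t (funcall visit tree))))

;; The same traversal as a generator: the call stack is reified as an
;; explicit stack captured in a closure, and each call yields one leaf.
(defun make-leaf-generator (tree)
  (let ((stack (list tree)))
    (lambda ()
      (loop
        (when (null stack)
          (return (values nil nil)))      ; exhausted
        (let ((node (pop stack)))
          (cond ((null node))             ; skip empty subtrees
                ((consp node)
                 (push (cdr node) stack)  ; cdr goes under car,
                 (push (car node) stack)) ; so leaves come out in order
                (t (return (values node t)))))))))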
Fred Lunnon
On 6/10/14, Joerg Arndt <arndt@jjj.de> wrote:
I missed this message...
I agree that an "iterators" project as suggested below appears worthwhile. The chances of adoption in other programs (say, Sage) appear low, however.
This is why I concentrate on finding new orders with interesting properties instead. Here "interesting properties" means "changes only at the end, and of bounded length" for these subset-lex and SL-Gray orders.
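For concreteness, here is a sketch of a successor rule with exactly that property, reconstructed from the description above rather than taken from any published definition -- so read it as an illustration of a "subset-lex-style" order, not as the authoritative subset-lex.

;; Subsets of {0,...,n-1} as increasing lists; each step edits at most
;; the last two positions, so all changes are at the end and bounded.
(defun subset-lex-next (set n)
  "Successor of SET in a subset-lex-style order, or NIL when done."
  (let ((z (car (last set))))
    (cond ((< z (1- n))                   ; room left: append z+1
           (append set (list (1+ z))))
          ((null (butlast set)) nil)      ; SET was the singleton {n-1}
          (t (let ((s (butlast set)))     ; drop z, then bump the new last
               (append (butlast s) (list (1+ (car (last s))))))))))

;; For n = 3, starting from (0), successive calls yield:
;;   (0) (0 1) (0 1 2) (0 2) (1) (1 2) (2)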
It seems hard to believe that the Gray code for compositions isn't already known; I tried hard to find it in the literature, to no avail.
Best regards, jj
* rcs@xmission.com <rcs@xmission.com> [May 28. 2014 12:13]:
Once upon a time, I worked at a Lisp-based company. The main technical guru had built a package of "iterators" that fit into the Lisp loop sub-language. There were simple iterators to count through a range, to march down a list or array, and to run through special sets relevant to the company.
I later wrote out a long list of iterator-combiners that would take cross-products of iterations in various orders, and would handle instructions like "next", and perhaps "previous", "skip to next row", "go to position N", and so on -- though I never wrote any actual code.
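As an illustration of what one such combiner might look like (a sketch of my own, in the closure style from the top of the thread, not the guru's actual package): a cross-product combiner that takes an outer iterator plus a thunk for restarting the inner one, and yields pairs in row-major order.

;; Cross product of two closure-style iterators.  OUTER is an iterator;
;; MAKE-INNER is a thunk returning a fresh inner iterator, since the
;; inner iteration must restart for every outer element.
(defun make-product-iterator (outer make-inner)
  (let ((inner (funcall make-inner))
        (x nil)
        (have-x nil))
    (lambda ()
      (loop
        (unless have-x                    ; need a new outer element
          (multiple-value-bind (v more) (funcall outer)
            (unless more
              (return (values nil nil)))  ; outer exhausted: all done
            (setf x v have-x t)))
        (multiple-value-bind (y more) (funcall inner)
          (if more
              (return (values (list x y) t))
              (setf have-x nil            ; inner exhausted: restart it
                    inner (funcall make-inner))))))))

;; Example, with MAKE-LIST-ITERATOR as sketched near the top of the thread:
;;   (make-product-iterator (make-list-iterator '(1 2))
;;                          (lambda () (make-list-iterator '(a b))))
;; yields (1 A) (1 B) (2 A) (2 B).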
I think this might be a worthwhile activity, at least as an intellectual exercise.
Rich