I've been looking again at what I call "pointer permutation" machines a la the Deutsch-Schorr-Waite (DSW) marking algorithm for garbage collection. Gosper utilized the PDP-10 Exchange (EXCH) instruction to implement a Lisp-like CONS in HAKMEM. (DSW machines go back to a time even before the invention of Lisp -- to the IPL-V language of Carnegie Mellon (nee Tech) -- where IPL-V programs constructed linked lists in registers using an "assembly language" approach to list processing. IPL-V becomes a permutation machine if one crosses out the use of pointer-copying operations.)

(One of the reasons why I'm intrigued by these machines is that they can often be run *backwards*, since each permutation operation has an inverse. The other reason is that I know how to speed these machines up: by putting actual reference counts behind the scenes, replacing DEEP-COPY by a constant-time refcnt-increment operation, and using a "hash cons" scheme behind the scenes to make sure that deep recursive EQUAL can always be optimized into shallow EQ. (See my paper "Lively Linear Lisp", which explains these speedup tricks.))

DSW machines can only *permute* pointers; they can neither create nor destroy them. Therefore, the reference counts of the objects pointed to remain *constant*.

There is a problem, however. Without some restrictions, pointers can become involved in *cycles*, and if the reference counts are all exactly *one*, then such a cycle becomes *disconnected* from the rest of the world. Therefore, it becomes important to elucidate rules which can guarantee that objects can never become disconnected.

Machine model.

A finite set of *registers* -- R0, R1, ..., Rk. (Perhaps k ~ 16 or k ~ 32.)

A set of *atoms*, which have no restrictions on reference counts. Think of atoms as small integers or characters.

A set of *objects*, which have two *components* -- CAR and CDR -- each of which can point to either an *atom* or another *object*.
A set of permutations on registers -- e.g., Ri -- and on components of registers -- e.g., CAR(Ri), CDR(Ri). These component expressions CAR(Ri), CDR(Ri) make sense only if Ri points to an object (not an atom).

A set of *swapping* operations, e.g., Ri<->Rj (i /= j), Ri<->CAR(Rj) (i /= j), or Ri<->CDR(Rj) (i /= j). We note that if Rj points to an atom, then Ri<->CAR(Rj) will FAIL without making any changes.

We also allow for a *control stack*, which can store atoms and program return addresses *but not pointers*. Thus, we can have recursive programs, but it is up to the recursive program itself to save & restore any list-structure pointers. [Here is a *huge* advantage for modern hack-proof programming: we should have little need for "address randomization" as currently practised by Microsoft & others.]

A set of rotation permutation operations. We follow the Common Lisp convention, in which rotations are a cyclic left shift. Thus, we can have rotate(Ri,Rj,Rk), where i,j,k are all distinct, rotate(Ri,Rj,CAR(Rk)), i,j,k distinct, etc. Similarly, rotate(Ri,Rj,CAR(Rk)) fails (without making any changes) if Rk points to an atom.

We can test the contents of the registers for atom-ness, and for succinctness we can also perform tests like atom(CAR(Ri)) and atom(CDR(Ri)).

So far, it is difficult to get into trouble. However, we have also crimped a bit on expressiveness. It would be very nice to allow rotate(Ri,CDR(Ri),Rj), for instance. Suppose Ri = (A . B), Rj = C. Then after rotate(Ri,CDR(Ri),Rj), we have Ri = B, Rj = (A . C). In effect, we have done a "simultaneous"/"non-interruptible"/"atomic" pop and push: we have popped A off Ri and pushed it onto Rj. The inverse of this operation is rotate(Rj,CDR(Rj),Ri).

Such "pop-push" operations are extremely common when programming with these machines. For example, if Ri is the freelist, and Rj points to C, then rotate(freelist,CDR(freelist),Rj) performs a *CONS* operation and leaves the result (NIL . C) in the register Rj.
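As a concreteness check, here is a minimal Python sketch of the machine (all names are my own, not part of any existing system): atoms are Python strings, objects are two-element lists [car, cdr], and a "location" is a (get, set) pair so that the same rotate() works uniformly on Ri, CAR(Rj), and CDR(Rj). The freelist CONS described above falls out directly.

```python
class Machine:
    def __init__(self, nregs=8):
        self.R = ['nil'] * nregs               # registers start holding an atom

    @staticmethod
    def is_atom(x):
        return not isinstance(x, list)

    def reg(self, i):                          # location: register Ri
        return (lambda: self.R[i],
                lambda v: self.R.__setitem__(i, v))

    def field(self, i, k):                     # location: CAR(Ri) (k=0) / CDR(Ri) (k=1)
        cell = self.R[i]                       # resolve the cell *now*, so the
        if self.is_atom(cell):                 # permutation below is simultaneous
            raise ValueError("FAIL: Ri holds an atom")   # fails w/o making changes
        return (lambda: cell[k],
                lambda v: cell.__setitem__(k, v))

    def rotate(self, *locs):                   # cyclic LEFT shift (the CL convention)
        vals = [get() for get, _ in locs]      # read every location first ...
        for (_, put), v in zip(locs, vals[1:] + vals[:1]):
            put(v)                             # ... then write the shifted values

    def swap(self, a, b):                      # Ri <-> Rj is just a 2-cycle
        self.rotate(a, b)

# The freelist CONS of the text: rotate(freelist, CDR(freelist), Rj).
m = Machine()
m.R[0] = ['nil', ['nil', 'free-end']]          # freelist: two cells chained in R0
m.R[1] = 'C'                                   # Rj points to the atom C
m.rotate(m.reg(0), m.field(0, 1), m.reg(1))
# Now R1 holds (NIL . C) -- a freshly "consed" cell -- and R0 holds the
# rest of the freelist; no cell was created or destroyed, only re-pointed.
```

Note that reading all locations before writing any is what makes the pop-push "simultaneous"; running rotate(Rj, CDR(Rj), Ri) afterwards undoes the CONS, illustrating the invertibility claimed above.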
(This CONS is somewhat analogous to Gosper's EXCH CONS hack.)

Note that rotate(CDR(Ri),Ri,Rj) still "works", and still preserves reference counts, but the cell originally pointed to by Ri becomes inaccessible. Thus, one rule we need is that in any rotate operation, the sub-sequence ...,CAR(Ri),Ri,... cannot occur; neither can the sub-sequence ...,CDR(Ri),Ri,...

If one starts to allow more deeply nested operations, then the rule(s) become far more complicated. The general issue is the same: we don't want to create cycles of length 1, 2, 3, ..., which will then become inaccessible. Thus, rotation sub-sequences like ...,C*R(Ri),Ri,..., where C*R stands for CAR, CAAR, CADR, CDR, CDDR, etc., will also fail.

Q: Has anyone studied machines like these in any depth (i.e., other than for implementing DSW GC algorithms)?

Q: Is there a more elegant way to express the constraints required to keep all objects connected to the registers by means of a directed path from one of the registers?

Q: Are more complex operations needed/wanted for programming? I've found a use for a permutation that converts ((A . B) . C) into (A . (B . C)), and for its inverse operator, but perhaps there are other extremely useful permutations?

Q: Are non-cyclic permutations useful or expressive? Clearly, any permutation can be broken into its cycle structure, but what happens if CAR(Ri) appears in one cycle and Ri appears in the other?

Q: What is an appropriate "high level" language for such a machine? I've come up with Linear Lisp, in which each argument to a function is *consumed* by the called function, so that it either becomes part of the output list structure or gets recycled back to the freelist by the called function. Perhaps there are other models?
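The re-association permutation mentioned in the questions above can be realized as a single 4-cycle over the four fields of the two cells involved -- no cell is created or destroyed, so every reference count is preserved. A standalone sketch (cells as two-element lists; the name reassoc and the field order are my own choices, one plausible reading of rotate(CAR(Ri),CAAR(Ri),CDAR(Ri),CDR(Ri))):

```python
def reassoc(p):
    """((A . B) . C) -> (A . (B . C)), reusing both cells.
    A cyclic left shift over (CAR(Ri), CAAR(Ri), CDAR(Ri), CDR(Ri)).
    p must be a cell whose CAR is also a cell; fails (no changes) otherwise."""
    q = p[0]
    if not isinstance(q, list):
        raise ValueError("FAIL: CAR is an atom")
    # Read all four fields first -- the permutation is simultaneous ...
    a, b, c = q[0], q[1], p[1]
    # ... then each location receives the next location's old value.
    p[0], q[0], q[1], p[1] = a, b, c, q

p = [['A', 'B'], 'C']          # ((A . B) . C)
reassoc(p)
# p is now ['A', ['B', 'C']]  --  i.e., (A . (B . C))
```

The inverse operator is the corresponding right rotation over the same four fields; note that the forbidden sub-sequence ...,C*R(Ri),Ri,... never arises here, since the register itself is not a participant in the cycle.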