Yes, Arndt & Miller both had the same "cycle chasing" idea, although Miller made the nice point that the extra array of bits can be stored in the sign bits of the f(n) integer array, then reset to "+" afterwards. That runs in O(n) time and O(n) extra bits worth of storage.

Also, it can be done in O(n^2) time and O(1) words of storage: for(i=1 to n){ follow i, f(i), f(f(i)), ... around its cycle }. The sum of the (cardinality-1) values over all cycles -- equivalently, a count of 1 for each non-last element of each cycle -- has the same parity as the permutation. (You are the "last" cycle element if you are the maximum over all elements in your cycle; keep track of a running max during each walk, so every cycle is credited exactly once.)

Also, it can be done in T time and W words of storage for any T with n<T<n^2 and T*W=n^2. ["Word" means order log(n) bits.] Put S=n/W. Find the first S elements of each cycle longer than S, then alter the permutation to splice them off as a cycle of their own, storing the alteration so you can later undo it. In this way, in storage<=n/S=W and time O(n), we shorten all cycles to lengths<=S. Then we do the above algorithm, with runtime<=n*S=T. Then we unalter the alterations. (Note each splice splits one cycle into two, which flips the parity, so also count the splices and fold their parity back in.)

[There are also other ways to rethink that same algorithm, involving the use of hash tables for "memoization" instead of perm-alterations, etc. For example, in the Arndt/Miller method, do not mark every element with a bit; instead mark only every Kth element around a cycle. Then you use K times less storage, but each bit-lookup costs K times more. If we pre-choose a random-like function that is 1 a fraction 1/K of the time and 0 the rest, and only mark bits where magicfunction(n)=1, we can avoid the log(n) storage factor and perhaps still get runtime n*K in some randomized expected sense.]

It seems to me it is not possible to determine permutation parity if we examine only n-2 entries of the f() array, since the other 2 entries could be the crux. So linear time is best possible. However, that lower bound argument may have been too facile. -- Warren D.
Smith http://RangeVoting.org <-- add your endorsement (by clicking "endorse" as 1st step)
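
To make the first idea concrete, here is a minimal Python sketch of the Arndt/Miller cycle chasing (function name and 0-indexing are mine; Python list entries have no spare sign bit, so the mark is stored as -(value+1) and undone afterwards):

```python
def permutation_parity(f):
    """Parity of the permutation f (0-indexed list): 0=even, 1=odd.

    Cycle chasing with Miller's trick: the "visited" bit for entry j
    lives in f[j] itself (stored as -(value+1), standing in for the
    sign bit), and every entry is reset to "+" at the end, so only
    O(1) extra words are used beyond the array itself.
    """
    n = len(f)
    parity = 0
    for i in range(n):
        if f[i] < 0:                # this cycle was already chased
            continue
        j, cycle_len = i, 0
        while f[j] >= 0:            # walk the cycle, marking as we go
            nxt = f[j]
            f[j] = -(nxt + 1)       # set the "sign bit"
            j = nxt
            cycle_len += 1
        parity ^= (cycle_len - 1) & 1   # a length-L cycle costs L-1 swaps
    for i in range(n):
        f[i] = -f[i] - 1            # reset every sign to "+"
    return parity
```

A cycle of length L is a product of L-1 transpositions, which is why each cycle contributes (cardinality-1) to the parity sum.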
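
The O(n^2)-time, O(1)-word version with the running-max rule might look like this (again a sketch with invented names; an element "owns" its cycle exactly when it is the cycle maximum, so each cycle is credited once):

```python
def permutation_parity_constant_space(f):
    """Parity of f in O(n^2) time and O(1) extra words: no marking.

    Walk each element's cycle while checking whether the start is
    the cycle's maximum; only the maximum element of each cycle
    contributes, so every cycle is credited exactly once.
    """
    parity = 0
    for i in range(len(f)):
        j, cycle_len, i_is_max = f[i], 1, True
        while j != i:
            if j > i:
                i_is_max = False    # a bigger element owns this cycle
            j = f[j]
            cycle_len += 1
        if i_is_max:
            parity ^= (cycle_len - 1) & 1
    return parity
```

One could break out of a walk as soon as a larger element appears, but the worst case stays O(n^2) either way.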
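
A sketch of the time/space tradeoff, under the reading above where S is the cycle-length cap (the detection probes here cost O(n*S) rather than O(n), which still fits inside the O(n*S)=T total budget). Each splice splits one cycle into two, flipping the parity, so the number of extra cycles created is XORed back in at the end:

```python
def permutation_parity_tradeoff(f, S):
    """Parity of f in O(n*S) time using O(n/S) extra words.

    Phase 1 splices every cycle longer than S into cycles of length
    <= S, recording each alteration so it can be undone in phase 3.
    Phase 2 is the running-max scan, now cheap since cycles are short.
    """
    n = len(f)
    alts = []       # (index, original value) pairs: O(n/S) of them
    extra = 0       # extra cycles created by the splices
    for i in range(n):
        # probe: does i's current cycle close within S steps?
        j, steps = f[i], 1
        while j != i and steps < S:
            j = f[j]
            steps += 1
        if j == i:
            continue
        # i is the minimum of an unspliced cycle longer than S: splice it
        seg_start, j, seglen, redirects = i, i, 1, 0
        while True:
            nxt = f[j]
            if nxt == i:                 # the original cycle closes here
                alts.append((j, nxt))
                f[j] = seg_start         # close the final segment
                redirects += 1
                break
            if seglen == S:
                alts.append((j, nxt))
                f[j] = seg_start         # close a length-S segment
                redirects += 1
                seg_start, j, seglen = nxt, nxt, 1
            else:
                j, seglen = nxt, seglen + 1
        extra += redirects - 1           # splices = extra cycles + 1
    # phase 2: running-max scan on the altered (all-short-cycle) f
    parity = 0
    for i in range(n):
        j, cycle_len, i_is_max = f[i], 1, True
        while j != i:
            if j > i:
                i_is_max = False
            j = f[j]
            cycle_len += 1
        if i_is_max:
            parity ^= (cycle_len - 1) & 1
    # phase 3: unalter the alterations
    for j, old in alts:
        f[j] = old
    return parity ^ (extra & 1)
```

Only the minimum element of an uncut long cycle fails its probe, since any smaller element in the same cycle would already have triggered the splice; that is what keeps each long cycle spliced exactly once.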
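
And a sketch of the mark-every-Kth-element variant, with a Python set standing in for the hash table (the magicfunction bit-array refinement is not attempted here). A walk aborts as soon as it meets a marked element or any element smaller than its own start, which happens within O(K) steps around an already-counted cycle:

```python
def permutation_parity_sparse(f, K):
    """Parity of f, marking only every K-th element of each cycle.

    Each cycle is counted once, from its minimum element. A walk that
    meets a marked element, or an element smaller than its start,
    aborts: that cycle was already counted. Marks are K steps apart
    along counted cycles, so aborts come within O(K) steps, giving
    roughly O(n*K) time with about n/K marks stored.
    """
    marked = set()
    parity = 0
    for i in range(len(f)):
        if i in marked:
            continue
        j, cycle_len, new_marks = f[i], 1, []
        while j != i:
            if j < i or j in marked:
                break               # already counted from a smaller start
            if cycle_len % K == 0:
                new_marks.append(j) # mark every K-th element
            j = f[j]
            cycle_len += 1
        else:
            # walk closed: i is the minimum of a freshly counted cycle
            marked.update(new_marks)
            parity ^= (cycle_len - 1) & 1
    return parity
```

The `while ... else` clause fires only when the walk closes normally, i.e. when i really is its cycle's minimum.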