[math-fun] recognizing an even permutation
On Fri, Apr 29, 2016 at 2:49 PM, Warren D Smith <warren.wds@gmail.com> wrote:

Yes, Arndt & Miller both had the same "cycle chasing" idea, although Miller made the nice point that the extra array of bits can be stored in the sign bits of the f() integer array, then reset to "+" afterwards. That runs in O(n) time and O(n) extra bits of storage.

Also, it can be done in O(n^2) time and O(1) words of storage: for(i=1 to n){ walk the cycle i, f(i), f(f(i)), ... and find its size }, and the sum of (cycle cardinality - 1), counted once per cycle, has the same parity as the permutation. (Count a cycle only when i is its "last" element, meaning the maximum over all elements in the cycle; keep a running max while walking.)

Also, it can be done in T time and S words of storage for any T with n < T < n^2 and T*S = n^2. ["Word" means order log(n) bits.] You find the first S elements of each cycle longer than S, then alter the permutation to shorten that cycle to length S, storing the alteration so it can later be undone. In this way, with storage <= n/S and time O(n), we shorten all cycles to lengths <= S. Then we run the above algorithm, with runtime <= n*S. Then we undo the alterations.

[There are also other ways to rethink the same algorithm, involving hash tables for "memoization" instead of permutation alterations, etc. For example, in the Arndt/Miller method, do not mark every element with a bit; instead mark only every Kth element around a cycle. You then use K times less storage, but each bit lookup costs K times more. If we pre-choose a random-like function that is 1 a fraction 1/K of the time and 0 the rest, and only mark bits where magicfunction(n) = 1, we can avoid the log(n) storage factor and perhaps still get runtime n*K in some randomized expected sense.]

It seems to me it is not possible to determine permutation parity by examining only n-2 entries of the f() array, since the other 2 entries could be the crux. So linear time is best possible. However, that lower-bound argument may have been too facile.

-- Warren D. Smith
http://RangeVoting.org <-- add your endorsement (by clicking "endorse" as 1st step)
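For concreteness, here is a minimal C sketch of the sign-bit version of the cycle-chasing idea described above (an illustration only, not Arndt's or Miller's actual code), assuming f[1..n] holds the permutation as signed integers:

#include <stdio.h>

/* Returns 0 for even, 1 for odd.  f[1..n] holds a permutation of 1..n as
   signed integers; visited entries are temporarily negated (the sign-bit
   trick), then all signs are reset to "+" before returning. */
int parity_signbit(int *f, int n) {
    int transpositions = 0;            /* accumulates sum over cycles of (length - 1) */
    for (int i = 1; i <= n; i++) {
        if (f[i] < 0) continue;        /* this cycle was already chased */
        int j = i;
        while (f[j] > 0) {             /* chase the cycle, marking as we go */
            int next = f[j];
            f[j] = -f[j];
            j = next;
            transpositions++;
        }
        transpositions--;              /* a cycle of length L contributes L - 1 */
    }
    for (int i = 1; i <= n; i++) f[i] = -f[i];   /* reset the marks to "+" */
    return transpositions & 1;
}

int main(void) {
    int f[] = {0, 2, 3, 1, 5, 4};      /* 1-indexed; cycles (1 2 3)(4 5): odd overall */
    printf("%s\n", parity_signbit(f, 5) ? "odd" : "even");
    return 0;
}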
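Likewise, a minimal sketch of the O(n^2)-time, O(1)-extra-words variant: every element walks its own cycle, but a cycle is charged (length - 1) only when the starting element turns out to be the cycle's maximum, so each cycle is counted exactly once. Again an illustration, assuming f[1..n] holds the permutation; it never modifies f:

#include <stdio.h>

/* O(n^2) time, O(1) extra words: count each cycle exactly once by charging
   it to its maximum element (the "last" element above). */
int parity_const_space(const int *f, int n) {
    int transpositions = 0;
    for (int i = 1; i <= n; i++) {
        int len = 1, max = i;
        for (int j = f[i]; j != i; j = f[j]) {   /* walk i's cycle, tracking a running max */
            len++;
            if (j > max) max = j;
        }
        if (max == i)                  /* i is the largest element of its cycle */
            transpositions += len - 1;
    }
    return transpositions & 1;         /* 0 = even, 1 = odd */
}

int main(void) {
    int f[] = {0, 2, 3, 1, 5, 4};      /* cycles (1 2 3)(4 5): odd */
    printf("%s\n", parity_const_space(f, 5) ? "odd" : "even");
    return 0;
}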
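The T*S = n^2 tradeoff is described more loosely, so the following sketch is only one reading of it: splice every over-long cycle into pieces of length at most S by swapping two entries of f (each such swap composes the permutation with a transposition, so it flips the parity and is counted), run the constant-space routine on the shortened permutation, then undo the swaps in reverse order to restore f. The splice log and the demo value of S are illustrative choices, not part of the original description:

#include <stdio.h>
#include <stdlib.h>

/* Time/space tradeoff sketch.  S is the space budget; the running time of
   phase 2 is about n*S once every cycle has length <= S. */
int parity_tradeoff(int *f, int n, int S) {
    int cap = n / S + 1;                    /* at most n/S splices can occur */
    int *ax = malloc(cap * sizeof(int));    /* alteration log: swapped positions */
    int *ay = malloc(cap * sizeof(int));
    int nalter = 0;

    /* Phase 1: splice every cycle longer than S into pieces of length <= S.
       Swapping f[i] and f[j] for i, j in the same cycle splits that cycle
       and flips the permutation's parity, so each splice is logged and counted. */
    for (int i = 1; i <= n; i++) {
        for (;;) {
            int j = i, steps;
            for (steps = 0; steps < S; steps++) {
                j = f[j];
                if (j == i) break;          /* i's cycle already has length <= S */
            }
            if (j == i) break;
            int t = f[i]; f[i] = f[j]; f[j] = t;   /* splice off a cycle of length S */
            ax[nalter] = i; ay[nalter] = j; nalter++;
        }
    }

    /* Phase 2: constant-space parity of the shortened permutation; every
       inner walk is now at most S steps. */
    int transpositions = nalter;            /* each splice flipped the parity once */
    for (int i = 1; i <= n; i++) {
        int len = 1, max = i;
        for (int j = f[i]; j != i; j = f[j]) { len++; if (j > max) max = j; }
        if (max == i) transpositions += len - 1;
    }

    /* Phase 3: undo the alterations in reverse order, restoring f exactly. */
    for (int k = nalter - 1; k >= 0; k--) {
        int t = f[ax[k]]; f[ax[k]] = f[ay[k]]; f[ay[k]] = t;
    }
    free(ax); free(ay);
    return transpositions & 1;
}

int main(void) {
    int f[] = {0, 2, 3, 4, 5, 6, 1};        /* the 6-cycle (1 2 3 4 5 6): odd */
    printf("%s\n", parity_tradeoff(f, 6, 2) ? "odd" : "even");
    return 0;
}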
Victor Miller writes:

Warren's question prompted me to think of a streaming version. Here's a specific question.

Suppose that you need to write a program with the following properties. At the beginning of the program you are given n, the size of the permutation f(1), ..., f(n). You are then allowed to use k (some fixed constant) registers, each of which can hold an integer between 1 and n. At each stage you can do some computation (which might be unlimited and use unlimited extra storage) and then request the next entry of the permutation; but before you do, you're only allowed to keep the contents of the k registers. After getting the element f(n) you output "even" or "odd".

Call the advantage of this procedure (the number of correct outputs minus the number of incorrect outputs) divided by n!, when it is given all n! permutations. How big an advantage is it possible to obtain, as a function of k?
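To make the model and the advantage measure concrete for tiny n, here is a brute-force harness. The interface, the naive "hoard the prefix" strategy, and the choice n = 5 are my own illustrative assumptions, not part of the question; the harness simply enumerates all n! permutations and reports (#correct - #wrong)/n! for each k:

#include <stdio.h>
#include <stdlib.h>

#define N 5          /* permutation size for the experiment (5! = 120 cases) */

static int K;        /* number of registers available to the strategy */
static long correct_guesses, wrong_guesses, total;

/* Ground-truth parity by inversion counting: 0 = even, 1 = odd. */
static int exact_parity(const int *a, int n) {
    int inv = 0;
    for (int i = 0; i < n; i++)
        for (int j = i + 1; j < n; j++)
            if (a[i] > a[j]) inv++;
    return inv & 1;
}

/* Demo strategy: hoard incoming values into the K registers while room
   remains; after the stream ends, answer exactly if the whole permutation
   fit (K >= N), otherwise just guess "even".  (I am being loose about
   whether the element counter itself counts as free state.)  Smarter
   bounded-register strategies would be dropped in here. */
static int run_strategy(const int *f, int n) {
    int *regs = calloc(K, sizeof(int));   /* the only state kept between requests */
    int stored = 0;
    for (int i = 0; i < n; i++) {         /* request the next entry f(i+1) */
        if (stored < K) regs[stored++] = f[i];
    }
    int guess = (stored == n) ? exact_parity(regs, n) : 0;
    free(regs);
    return guess;
}

/* Enumerate all permutations of a[0..n-1] by recursive swapping and score
   the strategy on each one. */
static void enumerate(int *a, int n, int pos) {
    if (pos == n) {
        if (run_strategy(a, n) == exact_parity(a, n)) correct_guesses++;
        else wrong_guesses++;
        total++;
        return;
    }
    for (int i = pos; i < n; i++) {
        int t = a[pos]; a[pos] = a[i]; a[i] = t;
        enumerate(a, n, pos + 1);
        t = a[pos]; a[pos] = a[i]; a[i] = t;
    }
}

int main(void) {
    for (K = 1; K <= N; K++) {
        int a[N];
        for (int i = 0; i < N; i++) a[i] = i + 1;
        correct_guesses = wrong_guesses = total = 0;
        enumerate(a, N, 0);
        printf("n=%d  k=%d  advantage = %+.4f\n", N, K,
               (double)(correct_guesses - wrong_guesses) / (double)total);
    }
    return 0;
}

This naive strategy reports advantage 0 until k is large enough to hold the whole stream; the open question is how much better a cleverer use of k registers can do.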
Tom Karzes writes:

My guess is that this can't be done. In general, I think the amount of state needed grows with n, and that there is no fixed k that can work for all n.

Tom
participants (3)
- Tom Karzes
- Victor Miller
- Warren D Smith