NJAS wrote:
I'm surprised that the 4x4 case hasn't been solved - can anyone help?
The fifteen puzzle is quite a bit harder than Rubik's-cube-style puzzles because it's not a group: the set of moves available from a given position depends on where the blank is. Also, the puzzle has only two-fold symmetry, except in the variant where the home position of the blank is the center. There is also a variant in which sliding a whole row of tiles counts as a single move. I wrote a simple program to find the results for the eight-puzzle in all these variants (fancy search techniques are unnecessary for a problem of this size). The answers I get are:

  Move  Blank home  Max distance  Maximal-distance positions (blank location)
  ----  ----------  ------------  -------------------------------------------
  tile  corner      31            2 (adjacent edge)
  tile  edge        31            2 (adjacent corner)
  tile  center      30            148 (88 corner, 60 center)
  row   corner      24            1 (center)
  row   edge        24            2 (diagonally adjacent edge)
  row   center      24            4 (corner)

By the way, 16!/2 milliseconds is "only" about 331.5 years, and I'd guess the time taken to examine a position is probably more like a few dozen microseconds than a millisecond. But it's still cpu-years.

Dan
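Dan's eight-puzzle computation is small enough to reproduce. Here is a minimal sketch (not his program) that does a breadth-first search of the tile-move variant with the blank's home square in a corner, confirming the first row of his table:

```python
from collections import deque

def neighbors(state):
    """States are 9-tuples in row-major order; 0 marks the blank.
    Yield every state reachable by sliding one tile into the blank."""
    i = state.index(0)
    r, c = divmod(i, 3)
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < 3 and 0 <= nc < 3:
            j = nr * 3 + nc
            s = list(state)
            s[i], s[j] = s[j], s[i]
            yield tuple(s)

def search(goal):
    """Breadth-first search outward from the goal; return (diameter,
    number of maximal-distance states, total reachable states)."""
    dist = {goal: 0}
    queue = deque([goal])
    while queue:
        s = queue.popleft()
        for t in neighbors(s):
            if t not in dist:
                dist[t] = dist[s] + 1
                queue.append(t)
    far = max(dist.values())
    return far, sum(1 for d in dist.values() if d == far), len(dist)

# Blank's home in a corner; per Dan's table this gives (31, 2, 181440),
# 181440 being the 9!/2 reachable states.
print(search((0, 1, 2, 3, 4, 5, 6, 7, 8)))
```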
From the AI point of view, the interesting problem isn't to get the optimal solution but to get a good solution with no more work than a human requires.
I have a program that does only a few times as much work as a human. It does the 399 puzzle in a few minutes.
At 04:25 PM 11/10/2003, Dan Hoey wrote:
By the way, 16!/2 milliseconds is "only" about 331.5 years, and I'd guess the time taken to examine a position is probably more like a few dozen microseconds than a millisecond. But it's still cpu-years.
I think it might take longer than that per position. In the reference for the 4x4 case, http://citeseer.nj.nec.com/cache/papers/cs/4003/http:zSzzSzwww.cs.ucla.eduzS... they found optimal solutions for 10 random starting configurations, and each of them generated over 8 billion nodes. Three of the 10 generated over a trillion nodes. Of course, there are probably ways to cut that down, and someone could come up with an insight into the problem.
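The back-of-the-envelope figures in this thread are easy to check; a quick sketch (the rates are the thread's own figures, not measurements):

```python
import math

# Dan's estimate: one millisecond per reachable 15-puzzle state.
states = math.factorial(16) // 2
years = states / 1000 / (365.25 * 24 * 3600)
print(round(years, 1))  # about 331.5, as Dan says

# A trillion nodes at the 660,000 nodes/second rate quoted later
# in the thread:
days = 1e12 / 660_000 / 86400
print(round(days, 1))  # roughly 17.5 days for one hard instance
```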
At 06:12 PM 11/10/2003, Jud McCranie wrote:
In the reference for the 4x4 case,
Correction, that was 5x5.
I did a Google search for "15 puzzle" and "optimal solution", and I found this: http://citeseer.nj.nec.com/context/44966/0 which says that finding an optimal solution for the nxn generalization is NP-hard. This 1994 result http://www.informatik.uni-osnabrueck.de/papers_html/ai_94/node10.html mentions taking an average of 8 seconds per solution on a parallel computer with 1024 CPUs, although it states that the parallel computer wasn't fully utilized. It goes on to say that a Sun SparcStation 10/40 could do 660,000 nodes per second. Of course, that was 10 years ago, but there are a large number of nodes per position and a large number of positions.
1. I'm willing to bet a moderate amount of money that Michie's MI group did not compute the diameter of the 15-puzzle in the 1960s.

2. The puzzle doesn't have all that much exploitable symmetry.

3. Dan: you can turn it into a group, in a way, whose elements are the permutations that leave the empty square where it started. For the 15-puzzle, I can see a fairly simple set of nine generators, made up of 4, 6, 8, 10, or 12 atomic moves. I don't know how much this would help; I guess I'm hoping for some sort of theoretical insight. Cough it up, dude.

4. If you have lots of space, the obvious algorithm is a brute-force breadth-first exploration of the entire puzzle. Berlekamp, Conway, and Guy did this, apparently by hand, for some sliding-block puzzles, including the classic "Dad's Puzzle", in Winning Ways. In this kind of search, you maintain three lists of positions: the Previous list, the Current list, and the Next list. All the positions in each list are at the same remoteness from Solved. You take each position on the Current list and enumerate its neighbors; those neighbors that are not found in any of the three lists are added to the Next list. When all the positions in the Current list have been considered in this fashion, you discard the Previous list, make the Current list Previous, make the Next list Current, and start a new, empty Next list. The algorithm is initialized with the Previous list empty, the Current list containing only Solved, and the Next list empty. The n-th major iteration leaves the Next list populated by all positions at remoteness n from Solved.

5. If you don't have enough space to store the fattest layer (and believe me, you don't; it almost certainly requires terabytes), there are various tricks that allow you to save space at the cost of speed.

6. Interesting progress could be made in one of two ways: tricks that make it very fast to process a single position, and tricks that allow you to process fewer positions.
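The three-list scheme in item 4 can be sketched directly. This illustrative version (run on the 8-puzzle rather than the 15-puzzle, so it actually finishes) keeps only the Previous, Current, and Next layers in memory, which is the point of the trick:

```python
def neighbors(state):
    """9-tuples in row-major order, 0 = blank; slide one tile into the blank."""
    i = state.index(0)
    r, c = divmod(i, 3)
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        if 0 <= r + dr < 3 and 0 <= c + dc < 3:
            j = (r + dr) * 3 + (c + dc)
            s = list(state)
            s[i], s[j] = s[j], s[i]
            yield tuple(s)

def layered_search(solved):
    """Breadth-first search holding only three layers at once.
    Returns (remoteness of the last nonempty layer, its size)."""
    previous, current = set(), {solved}
    remoteness = 0
    while True:
        nxt = set()
        for s in current:
            for t in neighbors(s):
                # A neighbor's remoteness differs from s's by at most 1,
                # so checking the three layers suffices to reject
                # every already-discovered position.
                if t not in previous and t not in current and t not in nxt:
                    nxt.add(t)
        if not nxt:
            return remoteness, len(current)
        previous, current = current, nxt
        remoteness += 1

# Blank's home in a corner; Dan's table says the deepest layer is
# at remoteness 31 and holds 2 positions.
print(layered_search((0, 1, 2, 3, 4, 5, 6, 7, 8)))
```

For the 15-puzzle the sets would be replaced by external storage or compressed bit tables, since the fattest layer alone is far too large for RAM.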
I'd be fascinated to hear of any insight that might permit this computation to be done in less than a decade. -A
participants (4):
- Allan C. Wechsler
- Dan Hoey
- John McCarthy
- Jud McCranie