Warren D. Smith: I assume you all know the rules of "go" on an NxN board. If N<=21 (where 21 is the board size contemplated for future professional play in case the current size 19 is ever deemed inadequate) then an entire row of the go board would fit in a single 64-bit word using 3 bits per location. Actually 2 bits would suffice in principle because each location is either BLACK, WHITE, or EMPTY, but I'm granting you one extra bit for fooling around.
This stores the board considerably more compactly than a 2D array would, which makes board-copying fast, and it also allows "bit tricks" that use word-wide parallelism to speed up common operations.
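To make that concrete, here is a minimal sketch of such a packing (the names, and the codes EMPTY=0 / BLACK=1 / WHITE=2, are my own choices, not from the post): at 3 bits per column a 21-column row uses 63 of the 64 bits, and copying the whole board is a single small memcpy.

    #include <stdint.h>
    #include <string.h>

    enum { MAXN = 21 };                        /* largest board contemplated     */
    enum { EMPTY = 0, BLACK = 1, WHITE = 2 };  /* 2 bits needed, a 3rd to spare  */

    typedef struct { uint64_t row[MAXN]; } Board;  /* one 64-bit word per row    */

    /* read the 3-bit field for column x of a row word */
    static unsigned get_point(uint64_t r, int x) {
        return (unsigned)((r >> (3 * x)) & 7u);
    }

    /* write a 3-bit field: clear it, then OR in the new contents */
    static uint64_t set_point(uint64_t r, int x, unsigned v) {
        return (r & ~(7ull << (3 * x))) | ((uint64_t)v << (3 * x));
    }

    /* whole-board copy is one small memcpy, not a 2-D array walk */
    static void copy_board(Board *dst, const Board *src) {
        memcpy(dst, src, sizeof *dst);
    }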
The QUESTION is how fast can you implement the following:
1A. Find & count all board-locations connected via a chain of horizontal and/or vertical adjacencies to a given location X, where all these locations are required to contain WHITE.
1B. Find & count all board-locations connected via a chain of horizontal and/or vertical adjacencies to a given location X, where all these locations are required to contain BLACK.
1C. Also count all board-locations containing EMPTY and adjacent to the location-set found by 1A (ditto for 1B).
2A. Find and count all EMPTY board-locations connected via a chain of EMPTYs to WHITE.
2B. Find and count all EMPTY board-locations connected via a chain of EMPTYs to BLACK.
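Tasks 1 and 2 are both connected-component searches that a bitboard can do by repeated shift-and-mask "dilation". Here is a minimal C sketch for a 19x19 board, assuming (my simplification, not from the post) separate one-bit-per-point planes for each color and for EMPTY, one 64-bit word per row; the names Plane, dilate, flood, chain_and_liberties and empties_touching are hypothetical, and GCC/Clang builtins are used for popcount.

    #include <stdint.h>

    #define N 19                        /* board size */
    #define ROWMASK ((1ull << N) - 1)   /* low N bits of a word are on-board */

    typedef uint64_t Plane[N];          /* one bit per point, one word per row */

    /* points of reg plus their orthogonal neighbours, intersected with allowed */
    static void dilate(const Plane reg, const Plane allowed, Plane out) {
        for (int y = 0; y < N; y++) {
            uint64_t v = reg[y] | (reg[y] << 1) | (reg[y] >> 1);   /* E and W */
            if (y > 0)     v |= reg[y - 1];                        /* N       */
            if (y + 1 < N) v |= reg[y + 1];                        /* S       */
            out[y] = v & allowed[y] & ROWMASK;
        }
    }

    /* grow seed to a fixed point inside allowed; return the final point count */
    static int flood(Plane seed, const Plane allowed) {
        int changed = 1;
        while (changed) {
            Plane next;
            changed = 0;
            dilate(seed, allowed, next);
            for (int y = 0; y < N; y++) {
                next[y] |= seed[y];
                if (next[y] != seed[y]) changed = 1;
                seed[y] = next[y];
            }
        }
        int count = 0;
        for (int y = 0; y < N; y++) count += __builtin_popcountll(seed[y]);
        return count;
    }

    /* 1A + 1C: size of the white chain through (x,y), which is assumed to hold
       a white stone, and the count of empty points adjacent to that chain */
    static int chain_and_liberties(const Plane white, const Plane empty,
                                   int x, int y, int *liberties) {
        Plane chain = {0};
        Plane libs;
        chain[y] = 1ull << x;              /* seed: the single point (x,y)       */
        int stones = flood(chain, white);  /* 1A: grow within same-color points  */
        dilate(chain, empty, libs);        /* 1C: only empty neighbours survive,
                                              since chain points are not empty   */
        *liberties = 0;
        for (int yy = 0; yy < N; yy++)
            *liberties += __builtin_popcountll(libs[yy]);
        return stones;
    }

    /* 2A: count empty points connected through a chain of empties to white */
    static int empties_touching(const Plane white, const Plane empty) {
        Plane seed;
        dilate(white, empty, seed);        /* empty points adjacent to white */
        return flood(seed, empty);         /* then flood within the empties  */
    }

chain_and_liberties answers 1A/1C (1B is the same call with the BLACK plane), and empties_touching answers 2A/2B. Each dilation step advances the whole frontier at once, so the number of iterations is bounded by the region's diameter rather than its area.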
If these things can be made fast enough, then the fancy board data structures in go programs could be replaced with simple ones (at the cost of somewhat complicated bit-trick algorithms).
--I did some initial thinking about this. It is unclear to me which is the faster approach in practice: conventional data structures or bitboards.

If a 64-bit word stores a row of the board as follows:
  19 bits: EMPTY(1) vs not (0)
  1 zero bit as buffer
  19 bits: BLACK(1) vs not (0)
  1 zero bit
  19 bits: WHITE(1) vs not (0)
that's 59 bits consumed with 5 left over. You can also do 21-21-21 with 1 bit left over, for a 21x21 go board (no buffer bits). An array of 19 such words stores the entire 19x19 go board.

A typical initial set of bitboard operations could then include:
 * copy
 * test equality
 * AND
 * OR
 * XOR
 * bit-complement
 * reverse colors
 * shift NORTH, SOUTH, EAST or WEST: the last by means of a word-wide left-shift followed by ANDing with a predefined mask; the first by shifting the array one hop.
 * count black (AND with a mask, then apply "popcount")
 * count white
 * count empty
 * compute hash function: a multiplicatively-weighted sum of all row-words regarded as binary numbers, done modulo a fixed prime modulus.
 * write something to board location (x,y): AND array[y] with PredefinedBitMask<<x, then OR with another.
 * read board location (x,y): AND (array[y]>>x) with PredefinedBitMask.
 * print board

A key trick is this: for many operations, like "find the black connected region containing a given point," you can actually perform three independent such operations in parallel using 3 black/not-bitboards stored in the same 19-word array of 64-bit words. This factor-3 speedup is a big reason bitboards might be better.

In "1-dimensional go," which also has some interest as a computer programming problem (not as a game), we can ask:

QUESTION: What is the fastest way to compute the string of consecutive 1-bits in a 64-bit word W which contains a given location X? Here X is a 64-bit word containing a single 1-bit.

POSSIBLE ANSWER (untested):

  Z = X-1;
  if(Z==0){
    Y = (W+1) XOR W;
  }else{
    Y = ((1<<(64-CountLeadingZeros(Complement(W) AND Z))) + W) XOR W;
  }
  return( Y AND (Y>>1) );

I do not know a fast way if X is a non-singleton set and we ask for all the strings of 1s in W which have some overlap with X.
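For what it is worth, here is that idea written out as compilable C; the transcription, the names, and the edge-case handling are mine, not from the post. __builtin_clzll (GCC/Clang) is undefined on a zero argument, so the case where W has no 0-bit below X is split out; a guard returns 0 when W holds a 0 at X's position; and the final mask is taken against W itself so that a run reaching bit 63 is not clipped.

    #include <stdint.h>

    /* Maximal run of consecutive 1-bits of w containing the single 1-bit x.
       Example: run_containing(0x0F70, 0x0200) == 0x0F00. */
    static uint64_t run_containing(uint64_t w, uint64_t x)
    {
        if (!(w & x)) return 0;                /* no run through an empty spot */
        uint64_t below = x - 1;                /* bits strictly below x        */
        uint64_t zeros_below = ~w & below;     /* 0-bits of w below x          */
        uint64_t low;                          /* lowest bit of the run        */
        if (zeros_below == 0)
            low = 1;                           /* run extends down to bit 0    */
        else                                   /* bit just above nearest 0     */
            low = 2ull << (63 - __builtin_clzll(zeros_below));
        uint64_t y = (w + low) ^ w;            /* carry flips the run plus its
                                                  terminating 0, if any        */
        return y & w;                          /* keep only the 1-bits: the run */
    }

The addition propagates a carry across the whole run in one instruction, which is what makes this constant-time regardless of the run's length.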