I rather like the discussion of alternate bilinear forms and Pfaffians in Nathan Jacobson's "Basic Algebra", vol. 1. A bilinear form B on a vector space V is alternate if B(x,x) = 0 for all x in V. This implies B(x,y) = -B(y,x), but the converse implication holds only when the characteristic is not 2. A bilinear form is degenerate if for some nonzero x, B(x,y) = 0 for all y in V. In a basis (e[1], ..., e[n]), a bilinear form is represented by the matrix B[i,j] = B(e[i], e[j]); if the form is alternate, its matrix is antisymmetric with zero diagonal, and a degenerate form yields a singular matrix. Since we want to show that the determinant is a square, we are interested only in nondegenerate alternate forms on a 2n-dimensional space V.

The first task is to show that there exists a basis in which the matrix consists of n blocks along the diagonal, where each 2 x 2 block is [[0, 1], [-1, 0]], all other matrix elements being zero. Since B is nondegenerate, there are vectors u1, v1 such that B(u1, v1) /= 0, and by suitable scaling we may assume B(u1, v1) = 1 = -B(v1, u1). Since B(x, ax) = a B(x, x) = 0, u1 and v1 must be linearly independent. This gives us the first block. Suppose we have already found linearly independent vectors (u1, v1, ..., uk, vk) such that B(ui, vi) = 1 = -B(vi, ui), and B(x, y) = 0 for every other choice of x and y from the list. Let Vk be the 2k-dimensional subspace spanned by these vectors, and let Vk[perp] be the subspace of all x such that B(x, y) = 0 for all y in Vk. Because the restriction of B to Vk is nondegenerate (its matrix, the block matrix built so far, has determinant 1), V is the direct sum of Vk and Vk[perp]; Jacobson shows this by a calculation. The nondegeneracy of B on V then makes the restriction of B to Vk[perp] nondegenerate as well, so there exist vectors u[k+1], v[k+1] in Vk[perp] that extend the list of linearly independent vectors and have the required properties. So we have a basis in which B is represented by a matrix S consisting of n diagonal blocks, each [[0, 1], [-1, 0]], and det S = 1. (A short sympy sketch of this induction appears at the end of this post.)

If x = sum(x[i] e[i]) and y = sum(y[j] e[j]) are vectors, then B(x, y) = sum over i and j of x[i] B(e[i], e[j]) y[j] = sum(x[i] B[i,j] y[j]) = x^t B y, where ^t denotes transpose. Under a change of basis given by a matrix P, the matrix of B transforms as B' = P^t B P. So our antisymmetric matrix B can be expressed as B = P^t S P, and then det B = (det P^t) (det S) (det P) = (det P)^2. Thus det B is a square, but only as an element of the field.

However, the matrix whose determinant we wish to show is a square is the generic one, with B[i,j] = x[i,j] for i < j, B[j,i] = -x[i,j], and B[i,i] = 0. Let R be the ring Z[x[i,j]] of polynomials with integer coefficients in the (2n)(2n-1)/2 indeterminates x[i,j], i < j, and let F = Q(x[i,j]) be the fraction field of R. The generic form is nondegenerate over F: specializing the x[i,j] to the entries of S gives determinant 1, so det B is not the zero polynomial. We want to show that det B is a square in R, but so far we only know that det B is a square in F, i.e. there are polynomials f and g in R such that det B = (f / g)^2.

It is a theorem that if D is an integral domain with unique factorization, then so is the ring D[x1, ..., xn] of polynomials in n indeterminates over D. (Jacobson proves this in the same book.) Since Z is a UFD, so is R, and we may assume common factors have been cancelled from the fraction f / g. So we have (g^2) (det B) = f^2, where det B is a polynomial in the x[i,j], i.e. an element of R. If g is not a unit (a divisor of 1), then it has a prime factor p; p divides f^2, and since p is prime it must divide f. This contradicts the choice of f and g, so g is a unit (+1 or -1), and det B = f^2 is a square in R.

The sign of f is fixed by requiring f(S) = 1 (evaluating at S gives f(S)^2 = det S = 1, so f(S) is +1 or -1). With that normalization, Pf B = f(B): the Pfaffian is a polynomial in the elements of B, and det B = (Pf B)^2. (This is checked below in sympy for n = 2.)
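Here is a minimal sketch of the inductive construction above, using sympy over the rationals. It is my own illustration, not Jacobson's, and the name symplectic_basis is made up. Given a nondegenerate antisymmetric matrix B of even size, it pairs off vectors u, v with B(u, v) = 1 and replaces the leftover vectors by their components in the B-perp of span(u, v), exactly as in the proof.

    from sympy import Matrix, eye, diag

    def symplectic_basis(B):
        """Return Q whose columns are a basis (u1, v1, u2, v2, ...) in which
        the form is represented by S, i.e. Q.T * B * Q == S.
        Assumes B is antisymmetric, nondegenerate, and of even size."""
        m = B.shape[0]
        form = lambda x, y: (x.T * B * y)[0, 0]
        remaining = [eye(m)[:, j] for j in range(m)]   # start from the standard basis
        pairs = []
        while remaining:
            u = remaining.pop(0)
            # nondegeneracy on the perp guarantees a partner with B(u, v) != 0
            j = next(k for k, w in enumerate(remaining) if form(u, w) != 0)
            v = remaining.pop(j)
            v = v / form(u, v)                         # scale so that B(u, v) = 1
            pairs += [u, v]
            # component of each leftover w in the B-perp of span(u, v)
            remaining = [w + form(v, w) * u - form(u, w) * v for w in remaining]
        return Matrix.hstack(*pairs)

    # a concrete nondegenerate example with n = 2
    B = Matrix([[ 0,  1,  2,  3],
                [-1,  0,  4,  5],
                [-2, -4,  0,  6],
                [-3, -5, -6,  0]])
    Q = symplectic_basis(B)
    block = Matrix([[0, 1], [-1, 0]])
    assert Q.T * B * Q == diag(block, block)   # the matrix S, with det S = 1
    assert B.det() == Q.inv().det()**2         # det B = (det P)^2 with P = Q**-1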
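And a quick sanity check of the conclusion for n = 2, again in sympy: the determinant of the generic 4 x 4 antisymmetric matrix is the square of the polynomial x12 x34 - x13 x24 + x14 x23, which evaluates to 1 at S, matching the normalization f(S) = 1.

    from sympy import symbols, Matrix, expand

    x12, x13, x14, x23, x24, x34 = symbols('x12 x13 x14 x23 x24 x34')
    B = Matrix([[   0,  x12,  x13,  x14],
                [-x12,    0,  x23,  x24],
                [-x13, -x23,    0,  x34],
                [-x14, -x24, -x34,    0]])

    pf = x12*x34 - x13*x24 + x14*x23     # Pf B, normalized so that it is 1 at S
    assert expand(B.det() - pf**2) == 0  # det B = (Pf B)^2 holds in Z[x[i,j]]

Gene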