[math-fun] formula for the distance between two disjoint affine subspaces
There is a formula giving the distance between two disjoint affine subspaces involving Gram determinants.

Recall that if e_1, ..., e_m are vectors in R^n (m <= n) then Gram(e_1, ..., e_m) = det(<e_i, e_j>), where <e_i, e_j> is the inner product of e_i and e_j.

If U_1 and U_2 are disjoint affine subspaces, let V_1 and V_2 be their linear directions, that is, the unique vector spaces such that U_1 = a_1 + V_1 and U_2 = a_2 + V_2 for any a_1 in U_1 and any a_2 in U_2. Pick any basis e_1, ..., e_m of V_1 + V_2; then the square of the distance d(U_1, U_2) between U_1 and U_2 is given by

  d(U_1, U_2)^2 = Gram(a_1 - a_2, e_1, ..., e_m) / Gram(e_1, ..., e_m).

In the special case where U_1 and U_2 are skew lines (i.e., disjoint and not parallel), we also have the formula

  d(U_1, U_2)^2 = Gram(a - a', a - b, a' - b') / Gram(a - b, a' - b'),

where a, b are any two distinct points on U_1 and a', b' any two distinct points on U_2. The above formula can be found in Berger, Geometry I, Chapter 9, Section 2.

I found the more general formula in "Methodes Modernes en Geometrie" by Jean Fresnel, Part C, Section 1.4.2. The proof is not entirely trivial. See also Problem 7.13 of my "Geometric Methods and Applications", TAM 38, Springer-Verlag.

Best,
-- Jean Gallier
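A minimal numerical sketch of the Gram-determinant formula, assuming Python with numpy; the helper names gram, basis and affine_distance are illustrative only, and an orthonormal basis of V_1 + V_2 is taken from an SVD since the formula allows any basis:

  import numpy as np

  def gram(*vectors):
      # Gram determinant det(<e_i, e_j>) of the given vectors.
      M = np.array(vectors, dtype=float)
      return np.linalg.det(M @ M.T)

  def basis(rows, tol=1e-12):
      # Orthonormal basis (as rows) of the span of the given row vectors.
      u, s, vt = np.linalg.svd(np.atleast_2d(np.asarray(rows, float)))
      return vt[:np.sum(s > tol)]

  def affine_distance(a1, V1, a2, V2):
      # Distance between affine subspaces a1 + span(V1) and a2 + span(V2),
      # where V1, V2 are lists of direction (row) vectors.
      a1, a2 = np.asarray(a1, float), np.asarray(a2, float)
      E = basis(np.vstack([np.atleast_2d(V1), np.atleast_2d(V2)]))  # basis of V_1 + V_2
      return np.sqrt(gram(a1 - a2, *E) / gram(*E))

  # Skew lines in R^3: the x-axis and the line (0,1,1) + t(0,1,0); distance 1.
  print(affine_distance([0,0,0], [[1,0,0]], [0,1,1], [[0,1,0]]))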
A more intelligent choice of ideal basis, involving Grassmann relations and proportional directions, reduces the 216-term degree-8 horror to a 96-term degree-7 horror; but further progress in this direction eluded me.

However, following up an apparently improbable but actually rather astute observation by Lanco reveals after all a perfectly civilised alternative formula for the distance between parallel lines in 3-space, and suggests immediately a simple generalisation to cope with any parallel situation, subspaces X, Y intersecting only at infinity in a subspace of dimension n-1 :

  d(X,Y)^2 = ||<XºY>_{m-k-l-n}|| / ||<X•Y>_{k+l+n-2}|| ??!!

For non-parallel lines L, M in 3-space, k = l = 2, m = 4, n = 0, and we have (as before)

  d(L,M)^2 = ||<LºM>_0|| / ||<L•M>_2|| ;

or in terms of Pluecker coordinates,

       ( L_1 M^1 + L_2 M^2 + L_3 M^3 + L^1 M_1 + L^2 M_2 + L^3 M_3 )^2
  --------------------------------------------------------------------------
  (L^2 M^3 - L^3 M^2)^2 + (L^3 M^1 - L^1 M^3)^2 + (L^1 M^2 - L^2 M^1)^2

where L_i, L^i denote the moment plane and direction vector respectively, and ( ... )^2 denotes a square rather than a superscript.

If the lines are parallel, n = 1 and we have instead

  d(L,M)^2 = ||<LºM>_2|| / ||<L•M>_0|| ;

or in terms of Pluecker coordinates,

  (L^1 M_2 + M^2 L_1 - L^2 M_1 - M^1 L_2)^2 + (L^2 M_3 + M^3 L_2 - L^3 M_2 - M^2 L_3)^2 + (L^3 M_1 + M^1 L_3 - L^1 M_3 - M^3 L_1)^2
  ---------------------------------------------------------------------------------------------------------------------------------
                                             (L^3 M^3 + L^2 M^2 + L^1 M^1)^2

Proving this looked awkward, but assistance is at hand in the form of the earlier horror --- scaled to the same denominator, the difference of the two parallel numerators reduces to zero modulo the ideal basis, QED.

At the moment I have not extended my minimisation verifier to prove particular instances of k, l, m, n when n-1 > 0 --- this shouldn't be difficult --- but I'd rather prove the entire GA result directly!

Jean Gallier's "Gram determinants" must presumably be equivalent to my earlier conjecture when n = 0 ; the revised conjecture suggests that the Fresnel theorem can also be generalised simply to parallel situations.

Have I caught up with XX-th century theorems yet, I wonder? Or even managed to find an acceptably elegant expression for Henry Baker?

Fred Lunnon
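The two Pluecker ratios above can be spot-checked numerically. A minimal sketch, assuming numpy and the moment convention m = p x d (the helper names pluecker and line_distance are illustrative only), rewrites the same expressions with dot and cross products:

  import numpy as np

  def pluecker(p, d):
      # Pluecker coordinates of the line through point p with direction d:
      # the direction vector d and the moment m = p x d.
      p, d = np.asarray(p, float), np.asarray(d, float)
      return d, np.cross(p, d)

  def line_distance(L, M, tol=1e-12):
      (dL, mL), (dM, mM) = L, M
      c = np.cross(dL, dM)
      if np.dot(c, c) > tol:
          # non-parallel case, n = 0:  (dL.mM + dM.mL)^2 / |dL x dM|^2
          return np.sqrt((np.dot(dL, mM) + np.dot(dM, mL))**2 / np.dot(c, c))
      # parallel case, n = 1:  |dL x mM - dM x mL|^2 / (dL.dM)^2
      v = np.cross(dL, mM) - np.cross(dM, mL)
      return np.sqrt(np.dot(v, v) / np.dot(dL, dM)**2)

  print(line_distance(pluecker([0,0,0], [1,0,0]), pluecker([0,1,1], [0,1,0])))  # skew, 1.0
  print(line_distance(pluecker([0,0,0], [1,0,0]), pluecker([0,3,4], [2,0,0])))  # parallel, 5.0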
Apologies to everybody for the break in transmission, caused by a mild dose of influenza. Perhaps the fact that I was already under the pernicious influenze accounts for multiple gremlins sneaking into my earlier reckless announcement of a conjectured EGA formula for parallel subspace distance, which alas should have read:

Let dimension-(k-1), (l-1) subspaces X, Y of Euclidean (m-1)-space intersect only at infinity, in a dimension-(n-1) subspace. Then the (minimum) distance d between pairs of points P in X, Q in Y satisfies

  d(X,Y)^2 = ||<XºY>_{m-k-l+2n}|| / ||<X•Y>_{k+l-2n-2}|| ;

this caters for all degrees n of parallelism, from skew n = 0 to full n = min(k,l) - 1 ; indeed, appropriately interpreted, all other cases too.

To establish the correct value of n in the first place, it is only necessary to increment n from 0 until the right-hand side evaluates to some determinate value, that is, other than 0/0 [or we run out of components]. But there is an elegant way to conglomerate this sequence of tests into a single branch-free formula:

  d(X,Y)^2 = S^{m-2} ||(XºY)(1/S)|| / ||(X•Y)(S)||   with S -> 0 ;

after cancelling powers of S, this evaluates to

  d^2 > 0   when the subspaces are disjoint at distance d;
  0         when the subspaces meet;
  oo        when some subspace lies at infinity;
  0/0       when both subspaces lie at infinity.

Here Z(S) denotes the "grade expansion polynomial" (GEP) introduced on previous occasions, at any rate for even-grade Z :

  Z(S) == sum_j <Z>_{2j} S^j

where S is a scalar variable, and <Z>_j denotes the terms of grade j. But we don't actually need Z to have even grade here: defining ||Z(S)|| = sum_j ||Z_j|| S^j gives a scalar polynomial for odd-grade and even-grade Z !

[In general if Z corresponds to a kinematic spinor (even grade with ||Z|| > 0), then Z(S) describes continuous motion from the identity Z(0) = 1 to Z(1) = Z ; the roots of the scalar magnitude ||Z(S)|| = 0 give the extents of the individual factors in the canonical unique decomposition of Z into orthogonal grade-2 rotors: in Euclidean space, these extents are negative squares of cotangents of rotational semi-angles.]

So why might we expect something like the conjectured distance formula to work? Setting Z = X•Y --- or more strictly, Z = (X~)•Y --- we have Z~•X•Z = Y~•X•Y and Z•X•Z~ = X~•Y•X , whence Z reflects X in Y and Y in X ; so its rotation angles and translation distance will be double those relating the subspaces themselves. Unfortunately, though we can now find the principal (elliptic) angles from the roots of ||(X•Y)(S)||, the (parabolic) distance corresponds only to an infinite root: now somehow or other, the dual GEP magnitude ||(XºY)(S)|| provides the extra finesse required to resolve this parabolic extent --- but why?

Notice that --- contrary to expectation --- it is NOT necessary to actually compute the meet of X and Y --- possibly non-minimal, with dimension exceeding k+l-m-1 --- in order to evaluate the distance. With luck, perhaps we may not need the meet to prove it either? In fact, duality is connected with complementary orthogonality (Euclidean perpendicularity); it may be that a connection can be established between XºY and the mutual perpendicular of X and Y, along which axis a suitable translation can occur.

I hope to take a more intelligent interest in other correspondents' recent contributions in due course, snuffle snuffle!

Fred Lunnon
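The rule "increment n from 0 until the right-hand side is determinate" can be illustrated concretely for lines in 3-space using the two Pluecker ratios quoted in the previous message; the sketch below assumes numpy, uses the moment convention m = p x d, and the names candidates and line_distance_auto are illustrative only (this is not an implementation of the GEP machinery):

  import numpy as np

  def candidates(dL, mL, dM, mM):
      # The (numerator, denominator) pairs of the two line ratios quoted
      # previously, indexed by n = 0 (skew) and n = 1 (parallel).
      c = np.cross(dL, dM)
      yield (np.dot(dL, mM) + np.dot(dM, mL))**2, np.dot(c, c)
      v = np.cross(dL, mM) - np.cross(dM, mL)
      yield np.dot(v, v), np.dot(dL, dM)**2

  def line_distance_auto(dL, mL, dM, mM, tol=1e-12):
      # Increment n from 0 until the ratio is determinate (denominator nonzero).
      for num, den in candidates(dL, mL, dM, mM):
          if den > tol:
              return np.sqrt(num / den)
      raise ValueError("degenerate input: every candidate ratio is 0/0")

  # Parallel lines: the n = 0 ratio is 0/0, so the n = 1 ratio is used instead.
  print(line_distance_auto(np.array([1., 0, 0]), np.array([0., 0, 0]),
                           np.array([2., 0, 0]), np.cross([0, 3, 4], [2, 0, 0])))  # 5.0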
The conjectural parallel subspace distance formula

  d(X,Y)^2 = ||<XºY>_{m-k-l+2n}|| / ||<X•Y>_{k+l-2-2n}||

has now been proved for 0 <= n < k <= l < m <= 7 : all nontrivial cases from skew to fully parallel in space of up to 6 dimensions.

The computation minimises the distance between a point P in X, where subspace X is in general position, and Q in Y, where subspaces Y and Z = meet(X,Y) at infinity are both fixed. Note that under a subsequent isometry moving Y (and Z) also into general position, the right-hand side above would remain invariant.

The branch-free polynomial version however was still incorrect, with S -> 1/S accidentally: it should have read

  d(X,Y)^2 = ||(XºY)(S)|| / ( ||(X•Y)(1/S)|| S^(m-2) )   with S -> 0 .

Lanco suggests that there is some connection between this stuff and the general meet problem, where the dimension of meet(X,Y) --- not now necessarily at infinity --- exceeds the minimum k+l-m-1 . He may well be right about this; but notice that we do need 2n rather than n in the distance formula, in order to avoid just getting 0/0 whenever n is odd! Similarly, any expression for the universal meet must somehow or other conjure a blade of grade 2m-k-l-n from grators with grades 2m-k-l-2i of fixed parity, irrespective of the parity of n .

I have uploaded to googledocs a ferocious summary TTT_EGA.txt of EGA as applied to this question. Feedback invited: the URL is

  https://docs.google.com/leaf?id=0B6QR93hqu1AhZTcyM2EzNzItYWYwNi00NDU3LTk3NzQ...

Fred Lunnon
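For numerical spot-checks of such candidate formulas, the minimisation described above reduces to a linear least-squares problem. A minimal sketch assuming numpy (min_distance is an illustrative name, not the actual verifier):

  import numpy as np

  def min_distance(a1, V1, a2, V2):
      # Exact minimum of |P - Q| over P = a1 + V1.T @ s and Q = a2 + V2.T @ t,
      # obtained by solving a least-squares problem in the joint parameters (s, t).
      a1, a2 = np.asarray(a1, float), np.asarray(a2, float)
      V1, V2 = np.atleast_2d(V1).astype(float), np.atleast_2d(V2).astype(float)
      A = np.hstack([V1.T, -V2.T])
      x, *_ = np.linalg.lstsq(A, a2 - a1, rcond=None)
      return np.linalg.norm(a1 - a2 + A @ x)

  # Two parallel planes in R^4 at distance 1:
  print(min_distance([0,0,0,0], [[1,0,0,0], [0,1,0,0]],
                     [0,0,0,1], [[1,0,0,0], [0,2,0,0]]))

Any closed-form expression can then be compared against this value on randomly generated subspaces of the required degree of parallelism.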