Re: [math-fun] Lexical distance?
Marc wrote: << Usually when you sum a series, for pi, say, the leftmost digits tend to settle down first. Consequently, the approximations move closer in distance to the limit, and we say it converges in the familiar delta-epsilon way. Now instead of summing a series, imagine a process where the digits are all fizzing around, maybe even chaotically, but as time goes by a greater percentage (whatever that means) match the digits of the limit value. For example, imagine each bit gets randomly flipped from its correct value with decreasing frequency. That feels a lot like what I mean intuitively by convergence, but it's all a bit sketchy. So I'm interested in learning of relevant prior art, best practices, or simply better ways to talk about this. >>
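Marc's "each bit gets randomly flipped from its correct value with decreasing frequency" process can be simulated directly. This is a minimal sketch under my own assumptions (flip probability 1/(t+1) at time t, a finite window of 10,000 bits standing in for the oo-string; the function names are mine, not from the thread):

```python
import random

def noisy_approximation(limit, t, rng):
    """Return a copy of `limit` with each bit independently flipped
    with probability 1/(t+1) -- noise that decreases as t grows."""
    return [b ^ (rng.random() < 1.0 / (t + 1)) for b in limit]

def agreement_fraction(x, y):
    """Fraction of places at which the two bit strings agree."""
    return sum(a == b for a, b in zip(x, y)) / len(x)

rng = random.Random(0)
# A finite window standing in for the oo-string limit value.
limit = [rng.randint(0, 1) for _ in range(10_000)]

for t in (1, 10, 100, 1000):
    s_t = noisy_approximation(limit, t, rng)
    print(t, agreement_fraction(limit, s_t))
```

The agreement fraction climbs toward 1 as t grows, which is exactly the "greater percentage match" intuition; note, though, that with independent flips no single place ever settles for good, which is why this is weaker than coordinate-wise convergence.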
Given an oo-string s and a sequence of oo-strings {s_n}, let A_n \sub Z+ be the set of places at which s and s_n agree. You want A_n to converge, in some sense, to all of Z+.

Let's see: in purely set-theoretic terms, is this equivalent to saying that for every place k there is some n = n(k) such that k is in A_j for all j >= n(k)?

This is just what goes on when a sequence of points in the Cantor set K = {0,1}^oo converges to the point 1^oo, where the points of the Cantor set are identified with subsets of Z+ via their indicator functions. So it's the same thing as convergence of points of K under the "Hilbert" metric

    dist(A_n, Z+) = sqrt( Sum_{k=1..oo} (1 - D(s(k), s_n(k)))^2 / k^2 ),

where t(k) is the kth component of the oo-string t and D is the Kronecker delta -- so the kth term contributes 1/k^2 exactly when s and s_n disagree in place k, and the distance is 0 when they agree everywhere.

--Dan

Or maybe more to the point: Marc, is this what you want to happen?
_____________________________________________________________________
"It don't mean a thing if it ain't got that certain je ne sais quoi." --Peter Schickele
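Dan's weighted metric on {0,1}^oo can be computed to any desired accuracy by truncating the sum, since the tail Sum_{k>N} 1/k^2 is below 1/N. A small sketch (the helper names and the choice of approximants s_n, which agree with 1^oo on the first n places, are my own illustration):

```python
import math

def dist(x, y, n_terms=100_000):
    """Truncation of dist(x, y) = sqrt( Sum_{k>=1} [x(k) != y(k)] / k^2 )
    over the first n_terms places.  The neglected tail is at most
    sqrt(1/n_terms), so the truncation error is controlled."""
    return math.sqrt(sum((x(k) != y(k)) / k**2 for k in range(1, n_terms + 1)))

# The limit point 1^oo, and approximants s_n agreeing with it on places 1..n.
ones = lambda k: 1
def s(n):
    return lambda k: 1 if k <= n else 0

for n in (1, 10, 100, 1000):
    print(n, dist(ones, s(n)))
```

The printed distances shrink as n grows: once a place k is forced to agree for all later terms of the sequence, its 1/k^2 contribution drops out permanently, which is how coordinate-wise agreement translates into delta-epsilon convergence under this metric.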
participants (1): Dan Asimov