About 10 years ago I became interested in the following problem: consider the set of all possible (binary, say) strings of length n, and a given (shorter) word w of length m. Denote by g(k, n) the number of such strings in which w occurs as a factor exactly k times. How can we compute g(k, n) efficiently? As n -> oo, g(k, n) appropriately scaled converges to an analytic function which is approximately the normal distribution. What does this function look like in detail: in particular, how does its standard deviation depend on w? To my surprise, almost nothing seemed to be known about these problems; furthermore, the probability journals proved utterly uninterested in any exploration of them! I still don't understand why such apparently natural questions remain so neglected.

A related question concerns the distribution of the intervals between successive occurrences of w in a random sequence, which offers an alternative way to measure the local deviation of a sequence from oo-distribution, avoiding any involvement of infinite sums or of the number of terms N. For a binary random sequence the relevant results are as follows: the maximum standard deviation of the intervals between successive occurrences equals sqrt(2^m*(3*2^m - 2*m - 3)), attained for the word w = 0^m (or 1^m); the minimum equals sqrt(2^m*(2^m - 2*m + 1)), attained for w = 10^{m-1} (or any word lacking self-overlap). For example, with m = 5 the max and min are 51.54 and 27.13, for w = 00000 and w = 10000 respectively. Any asymptotic reduction below these deviations is "better than random". In contrast, typical values for a block of Gray-Champernowne were max, min = 111.8, 39.36 (uuurghhh!)

Fred Lunnon
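
As a concrete illustration of the counting question (a sketch of one standard technique, not necessarily the approach intended above): g(k, n) can be computed in time polynomial in n by a dynamic programme over the Knuth-Morris-Pratt automaton of w, with an occurrence counted each time the automaton reaches its accepting state, so overlapping occurrences are included. The Python below, including the brute-force cross-check and the choice of test word, is illustrative only.

    from itertools import product

    def build_automaton(w, alphabet="01"):
        """KMP-style automaton: state s = length of the longest prefix of w
        that is a suffix of the text read so far (0 <= s <= len(w))."""
        m = len(w)
        fail = [0] * (m + 1)              # fail[i] = longest proper border of w[:i]
        k = 0
        for i in range(1, m):
            while k and w[i] != w[k]:
                k = fail[k]
            if w[i] == w[k]:
                k += 1
            fail[i + 1] = k
        delta = [dict() for _ in range(m + 1)]
        for s in range(m + 1):
            for c in alphabet:
                j = fail[m] if s == m else s    # fall back after a full match, so overlaps count
                while j and w[j] != c:
                    j = fail[j]
                delta[s][c] = j + 1 if w[j] == c else 0
        return delta

    def g(k, n, w):
        """Number of binary strings of length n containing w exactly k times as a factor."""
        m = len(w)
        delta = build_automaton(w)
        # dp[s][c] = number of prefixes processed so far ending in state s with
        # c occurrences seen; counts beyond k are lumped into the slot k+1.
        dp = [[0] * (k + 2) for _ in range(m + 1)]
        dp[0][0] = 1
        for _ in range(n):
            new = [[0] * (k + 2) for _ in range(m + 1)]
            for s in range(m + 1):
                for c in range(k + 2):
                    if dp[s][c]:
                        for ch in "01":
                            t = delta[s][ch]
                            new[t][min(c + (t == m), k + 1)] += dp[s][c]
            dp = new
        return sum(dp[s][k] for s in range(m + 1))

    def g_brute(k, n, w):
        """Exhaustive check over all 2^n strings (small n only)."""
        count = 0
        for bits in product("01", repeat=n):
            s = "".join(bits)
            occ = sum(s.startswith(w, i) for i in range(n - len(w) + 1))
            count += (occ == k)
        return count

    if __name__ == "__main__":
        w = "010"
        assert all(g(k, 12, w) == g_brute(k, 12, w) for k in range(6))
        print([g(k, 12, w) for k in range(7)])    # occurrence-count distribution for n = 12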
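
The two closed forms and the quoted m = 5 values are easy to check numerically, and the gap standard deviations can be estimated directly by simulation. The sketch below is likewise only illustrative (the two-million-bit sample size and the fixed seed are arbitrary choices); it should print values near 51.5 for w = 00000 and 27.1 for w = 10000.

    import math, random

    def sd_max(m):      # claimed maximum, attained by w = 0^m (or 1^m)
        return math.sqrt(2**m * (3 * 2**m - 2 * m - 3))

    def sd_min(m):      # claimed minimum, attained by w = 10^(m-1) or any word with no self-overlap
        return math.sqrt(2**m * (2**m - 2 * m + 1))

    print(sd_max(5), sd_min(5))      # about 51.536 and 27.129, matching 51.54 and 27.13

    def gap_sd(w, nbits=2_000_000, seed=1):
        """Sample standard deviation of the distances between successive
        (overlapping) occurrences of w in a random binary string."""
        random.seed(seed)
        s = "".join(random.choice("01") for _ in range(nbits))
        pos = [i for i in range(len(s) - len(w) + 1) if s.startswith(w, i)]
        gaps = [b - a for a, b in zip(pos, pos[1:])]
        mean = sum(gaps) / len(gaps)
        return math.sqrt(sum((x - mean) ** 2 for x in gaps) / len(gaps))

    print(gap_sd("00000"), gap_sd("10000"))   # should come out near 51.5 and 27.1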