One could avoid such "cheater" configurations by requiring that all sufficiently large configurations cannot be made denser by adding more spheres.
I'm not sure whether it's known that, in every dimension, some densest configuration satisfies this condition.
Firstly, note that a packing with maximum density does exist: if you have a sequence of packings {P_1, P_2, P_3, ...}, each denser than the last, then you can 'take the limit' by letting a new packing agree with P_i between the spherical shells of radius 2^i and 2^{i+1}. So the supremum of all attainable packing densities is itself attainable.

Now, order the maximum-density packings by inclusion. Every chain has an upper bound (its union), so by Zorn's lemma you can find a maximal packing of maximum density. Clearly, that packing has maximum density and cannot be augmented by the addition of spheres. But this is still not a satisfactory constraint, because you could take the FCC lattice packing, clear out a large sphere (in the way you described), insert a large fragment of BCC lattice into the cavity, and Zorn up to a maximal packing.

Instead I propose the following way to compare packings. For a discrete configuration of points C, we define f_C(R) to be the minimum, amongst all balls of radius R, of the number of points of C which lie within this ball. Then we say that C dominates D (written C >= D) if there exists some R_0 such that for all R >= R_0, we have f_C(R) >= f_D(R).

Clearly, if C has a strictly greater lower density than D, then C > D. Also, if C is a proper superset of D, then C > D. Domination is a preorder -- so whilst it's transitive, there can be incomparable configurations as well as 'tied' configurations.

Best wishes,

Adam P. Goucher
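[The domination preorder above is easy to experiment with in one dimension. Here is a small Python sketch -- my own illustration, not from the email; the function name `f` and the periodic encoding are mine -- that computes f_C(R) for configurations that are periodic subsets of Z, using integer-aligned windows [x, x + 2R] in place of balls of radius R (a simplification: true balls may be centered anywhere).]

```python
import math

def f(config_period, period, R):
    """f_C(R) for the periodic configuration
    C = {p + k*period : p in config_period, k in Z},
    taking 1-D 'balls' to be windows [x, x + 2R].  By periodicity it
    suffices to slide the (integer) left endpoint x over one period."""
    width = 2 * R
    best = None
    for x in range(period):
        count = 0
        for p in config_period:
            # count representatives p + k*period lying in [x, x + width]
            first = math.ceil((x - p) / period)
            last = math.floor((x + width - p) / period)
            count += max(0, last - first + 1)
        best = count if best is None else min(best, count)
    return best

# The evens (density 1/2) eventually dominate the period-8 triples
# {0,1,2} mod 8 (density 3/8), even though the triples start out ahead:
evens, triples = ([0], 2), ([0, 1, 2], 8)
for R in (4, 8, 16):
    print(R, f(*evens, R), f(*triples, R))
```

Already at R = 4 the evens' worst window holds 4 points against the triples' 3, and the gap grows linearly with R, matching the claim that strictly greater lower density implies domination.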
Certainly there are densest configurations in dimensions 1, 2, 8, and 24 that *do* satisfy it, however; and I believe there is no known dimension (not dimension 3, either) in which no densest configuration satisfies it.
But in any case this cuts down on some of the unpleasantness.
Jim: Before we get to how your theory might be constructed, I would like to understand much better what it is you are aiming to achieve or avoid.
—Dan
----- Here's some background on where my recent question (so quickly resolved by Andy Latto) came from:
The itch I'm trying to scratch is the fact that even though the sphere-packing problems in dimensions 2, 8, and 24 have beautiful canonical solutions, these solutions are only known to be "optimal" in ways I find unsatisfactory. We say for instance that the six-around-one packing in 2D "achieves the maximal packing density" --- but then so do lots of other packings (for instance, any perturbed version of the six-around-one packing). Or, we say that the six-around-one packing is, up to isometry, the "unique lattice packing", but the lattice condition seems extraneous (and indeed the lattice condition becomes problematic in higher dimensions, where it's not even known if densest lattice packings exist).
My diagnosis of the problem is that density is too coarse a way to measure packings. We need a non-Archimedean measure (or rather valuation) that can "see" both the density of a lattice packing and the finite perturbations we can apply to it, even though they are separated by infinite orders of magnitude.
I have various ideas for how such a theory could be constructed, but none of them feel quite right, and there are significant technical challenges to applying them. So, rather than attack the sphere-packing problem head-on, I'm taking a detour through one-dimensional packing problems. And, to make things even simpler, I'm starting by focussing on discrete packing problems. Furthermore, instead of packing Z, I'm just packing the right half of it. So I want a way to measure subsets of {0,1,2,...} that includes cardinality and density as special cases. One approach is to measure a set S by the limiting behavior of its generating function \sum_{n \in S} q^n as q approaches 1; you can see some low-hanging fruit of this idea at http://mathoverflow.net/questions/248994/comparing-sizes-of-sets-of-natural-... .
I was trying to apply this approach to packing problems and related combinatorial optimization problems. For instance: consider subsets S of {0,1,2,...} having the property that no two elements of S differ by 3 or 5. In what sense is S_0 := {0,2,4,...} the unique "biggest" set of this kind? It has density 1/2, but so does S_1 := {1,3,5,...}. And the set S_2 := {0,1,2, 8,9,10, 16,17,18, ...} starts out looking denser than S_0. Examining the associated q-series as q approaches 1: using the notation of the MathOverflow article, we have |S_0|_q > |S_1|_q > |S_2|_q for all q < 1 sufficiently close to 1. So in this sense S_0 is "bigger" than the other two sets.
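[As a numerical sanity check -- my own sketch, not from the original message; `q_measure` is a hypothetical helper, and I assume S_2 continues periodically as {0,1,2} mod 8 -- the q-measures of fully periodic sets have exact closed forms that can be evaluated with rationals. Note that at q = 0.9 we still have |S_2|_q > |S_1|_q; the claimed ordering only kicks in closer to 1.]

```python
from fractions import Fraction

def q_measure(offsets, period, q):
    """|S|_q = sum_{n in S} q^n for the fully periodic set
    S = {o + k*period : o in offsets, k >= 0},
    via the closed form (sum_o q^o) / (1 - q^period)."""
    return sum(q**o for o in offsets) / (1 - q**period)

for q in (Fraction(9, 10), Fraction(99, 100)):
    s0 = q_measure([0], 2, q)        # S_0 = {0, 2, 4, ...}
    s1 = q_measure([1], 2, q)        # S_1 = {1, 3, 5, ...}
    s2 = q_measure([0, 1, 2], 8, q)  # S_2 = {0,1,2, 8,9,10, ...}
    print(float(q), float(s0), float(s1), float(s2))
```

At q = 9/10 the values are roughly 5.26, 4.74, 4.76 (so S_2 still edges out S_1), while at q = 99/100 they are roughly 50.3, 49.7, 38.4, exhibiting the asserted |S_0|_q > |S_1|_q > |S_2|_q near q = 1.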
I'd like to prove that for every finite set D of positive integers, not just D = {3,5}, there is a unique biggest set S of natural numbers such that no two elements of S differ by an element of D, where "biggest" means comparison of generating functions as q goes to 1. Naturally, I expect this unique maximal S to be an eventually periodic set. (By calling a set eventually periodic, I mean of course that its indicator function is eventually periodic.) If I could prove this, a simple coding would give results about packings of the set of natural numbers by translates of a finite set.
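[A brute-force experiment along these lines -- my own sketch; `best_periodic_density` is a hypothetical name, and it searches only fully periodic sets, not all eventually periodic ones. For a set with period p and offset set A in Z/p, two elements differ by d exactly when some pair of offsets differs by ±d mod p (and p must divide no element of D), so the densest periodic D-avoiding set is a maximum independent set in a circulant graph.]

```python
from itertools import combinations

def best_periodic_density(D, max_period=12):
    """Search fully periodic subsets of N avoiding differences in D:
    over periods p and offset sets A in Z/p, maximize |A|/p subject to
    no two offsets differing by +-d mod p for any d in D."""
    best = (0, None)
    for p in range(1, max_period + 1):
        if any(d % p == 0 for d in D):
            continue  # a difference d would recur within one residue class
        bad = {d % p for d in D} | {(-d) % p for d in D}
        for k in range(p, 0, -1):  # try the largest offset sets first
            found = None
            for offsets in combinations(range(p), k):
                if all((a - b) % p not in bad
                       for a, b in combinations(offsets, 2)):
                    found = offsets
                    break
            if found:
                if k / p > best[0]:
                    best = (k / p, (found, p))
                break
    return best

print(best_periodic_density({3, 5}))
```

For D = {3, 5} this recovers density 1/2 (the evens, up to translation). Note that density alone cannot separate equally dense candidates such as S_0 and S_1; the q-series comparison is still needed to break those ties.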
Anyone have any thoughts about this? -----
_______________________________________________ math-fun mailing list math-fun@mailman.xmission.com https://mailman.xmission.com/cgi-bin/mailman/listinfo/math-fun