Re: [math-fun] Packing question
Is the problem that the densest packings are not unique? I don't see this as a problem per se.

For example: In the Kepler problem of packing spheres in 3D, the packings obtained from any sequence of what are sometimes called the "a", "b", and "c" packings in successive layers -- as long as no two successive layers are given by the same letter -- are all tied for densest. This gives uncountably many ties. This may be surprising, but it's just the way things are, and I see no need to arrange things so that this is not the case.

On the other hand, if asymptotic density is the only thing to maximize, I find it somewhat unpleasant that starting with, say, the fcc packing and removing all (unit) spheres lying closer to the origin than a distance of 10^10^10^10^10^10^10^10^10^10 (exponents grouped from the top down) results in yet another packing tied for densest.

One could avoid such "cheater" configurations by requiring that all sufficiently large configurations cannot be made denser by adding more spheres. I'm not sure whether it's known that some densest configuration always satisfies this condition. Certainly there are densest configurations in dimensions 1, 2, 8, and 24 that *do* satisfy it, and I believe there is no known dimension (not even dimension 3) in which every densest configuration fails to satisfy it. But in any case this cuts down on some of the unpleasantness.

Jim: Before we get to how your theory might be constructed, I would like to understand much better what it is you are aiming to achieve or avoid.

—Dan

-----

Here's some background on where my recent question (so quickly resolved by Andy Latto) came from:

The itch I'm trying to scratch is the fact that even though the sphere-packing problems in dimensions 2, 8, and 24 have beautiful canonical solutions, these solutions are only known to be "optimal" in ways I find unsatisfactory. We say for instance that the six-around-one packing in 2D "achieves the maximal packing density" --- but then so do lots of other packings (for instance, any perturbed version of the six-around-one packing). Or, we say that the six-around-one packing is, up to isometry, the "unique lattice packing", but the lattice condition seems extraneous (and indeed the lattice condition becomes problematic in higher dimensions, where it's not even known whether densest lattice packings exist).

My diagnosis of the problem is that density is too coarse a way to measure packings. We need a non-Archimedean measure (or rather valuation) that can "see" both the density of a lattice packing and the finite perturbations we can apply to it, even though the two effects are separated by infinite orders of magnitude.

I have various ideas for how such a theory could be constructed, but none of them feel quite right, and there are significant technical challenges to applying them. So, rather than attack the sphere-packing problem head-on, I'm taking a detour through one-dimensional packing problems. And, to make things even simpler, I'm starting by focusing on discrete packing problems. Furthermore, instead of packing Z, I'm just packing the right half of it. So I want a way to measure subsets of {0,1,2,...} that includes cardinality and density as special cases. One approach is to measure a set S by the limiting behavior of its generating function \sum_{n \in S} q^n as q approaches 1; you can see some low-hanging fruit of this idea at http://mathoverflow.net/questions/248994/comparing-sizes-of-sets-of-natural-... .

I was trying to apply this approach to packing problems and related sorts of combinatorial optimization problems. For instance: Consider subsets S of {0,1,2,...} having the property that no two elements of S differ by 3 or 5. In what sense is S_0 := {0,2,4,...} the unique "biggest" set of this kind? It has density 1/2, but so does S_1 := {1,3,5,...}. And the set S_2 := {0,1,2, 8,9,10, 16,17,18, ...} starts out looking denser than S_0. But examine the associated q-series as q approaches 1: using the notation of the MathOverflow article, we have |S_0|_q > |S_1|_q > |S_2|_q for all q < 1 sufficiently close to 1. So in this sense S_0 is "bigger" than the other two sets.

I'd like to prove that for every finite set D of positive integers, not just D = {3,5}, there is a unique biggest set S of natural numbers such that no two elements of S differ by an element of D, where "biggest" refers to comparison of generating functions as q goes to 1. Naturally, I expect this unique maximal S to be an eventually periodic set. (When I call a set eventually periodic, I mean of course that its indicator function is eventually periodic.) If I could prove this, a simple coding would give results about packings of the set of natural numbers by translates of a finite set.

Anyone have any thoughts about this?

-----
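As a quick numerical check of this comparison, here is a minimal Python sketch (the sample bound N and the sampled values of q are arbitrary choices, for illustration only):

    # Truncated q-series |S|_q = sum_{n in S} q^n, for S sampled in {0,...,N-1}.
    # For q <= 0.999 the neglected tail beyond N = 20000 is negligible (< 10^-5).

    def q_measure(S, q):
        return sum(q ** n for n in S)

    N = 20000
    S0 = range(0, N, 2)                       # {0, 2, 4, ...}
    S1 = range(1, N, 2)                       # {1, 3, 5, ...}
    S2 = [n for n in range(N) if n % 8 < 3]   # {0,1,2, 8,9,10, 16,17,18, ...}

    for q in (0.9, 0.99, 0.999):
        print(q, q_measure(S0, q), q_measure(S1, q), q_measure(S2, q))

At q = 0.9 the output still has |S_2|_q slightly above |S_1|_q (S_2 really does "start out denser"); by q = 0.99 the ordering |S_0|_q > |S_1|_q > |S_2|_q has set in, and it persists from there toward 1.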
Dan Asimov writes:

One could avoid such "cheater" configurations by requiring that all sufficiently large configurations cannot be made denser by adding more spheres. I'm not sure whether it's known that some densest configuration always satisfies this condition.
Firstly, note that a packing with maximum density does exist: if you have a sequence of packings {P_1, P_2, P_3, ...}, each denser than the previous, then you can 'take the limit' by letting a new packing be equal to P_i between the spherical shells of radius 2^i and 2^{i+1}. So the supremum of all attainable packing densities is itself attainable.

Now, of all of the maximum-density packings, you can order them by inclusion. Every chain has an upper bound (its union), so by Zorn's lemma you can find a maximal packing of maximum density. Clearly, that packing has maximum density and cannot be augmented by the addition of spheres.

But it's still not a satisfactory constraint, because you could take the FCC lattice packing, clear out a large sphere (in the way you described), insert a large fragment of the BCC lattice into the cavity, and Zorn up to a maximal packing.

Instead I propose the following way to compare packings: For a discrete configuration of points C, we define f_C(R) to be the minimum, amongst all balls of radius R, of the number of points in C which lie within this ball. Then, we say that C dominates D (written C >= D) if there exists some R_0 such that for all R >= R_0, we have f_C(R) >= f_D(R).

Clearly, if C has a strictly greater lower density than D, then C > D. Also, if C is a proper superset of D, then C > D. Domination is a preorder -- so whilst it's transitive, there can be incomparable configurations as well as 'tied' configurations.

Best wishes,

Adam P. Goucher
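To make the definition concrete, here is a minimal Python sketch of f_C(R) and the domination comparison in one dimension (the finite sample window and the half-integer grid of ball centers are simplifications; they only approximate the true minimum over all balls):

    # Approximate f_C(R) for a periodic 1D configuration sampled on {0,...,N-1}:
    # the minimum, over radius-R balls (intervals), of the point count inside.
    import bisect

    def f(points, R, centers):
        pts = sorted(points)
        return min(bisect.bisect_right(pts, x + R) - bisect.bisect_left(pts, x - R)
                   for x in centers)

    N = 2000
    C = range(0, N, 2)   # even numbers: lower density 1/2
    D = range(0, N, 3)   # multiples of 3: lower density 1/3

    # Sample ball centers on a half-integer grid, kept away from the sample's
    # edges so boundary effects don't produce a spurious minimum.
    centers = [c / 2 for c in range(2 * 500, 2 * 1500)]
    for R in (5, 10, 20):
        print(R, f(C, R, centers), f(D, R, centers))
    # Beyond some R_0 we get f_C(R) >= f_D(R) at every R, i.e. C dominates D,
    # exactly as the strict gap in lower density predicts.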
Adam Goucher writes:

Instead I propose the following way to compare packings:
For a discrete configuration of points C, we define f_C(R) to be the minimum, amongst all balls of radius R, of the number of points in C which lie within this ball.
Then, we say that C dominates D (written C >= D) if there exists some R_0 such that for all R >= R_0, we have f_C(R) >= f_D(R).
I like this, but I think the sharpness of the cutoff at distance R creates difficulties. For instance, try comparing two hexagonal close packings of points in the plane using this definition.

My approach to avoiding these difficulties is a smoothed-out version of Adam's proposal. You give each point a weight that's close to 1 when the point is near the origin and goes to 0 as the point goes to infinity, with R replaced by a distance parameter that measures the spread. This is my integration/summation kernel. Of course it needs to be integrable/summable.

This does not give us a partial order, but it does give us a sense in which all the hexagonal close packings appear to "beat" all the other point-packings in which no inter-point distance lies in (0,1).

Jim
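Here is a minimal Python sketch of this smoothed comparison, with a Gaussian as the weighting kernel (the specific kernel, the two lattices, and the truncation radius are illustrative choices, not part of the proposal):

    import math

    # Weighted "size" of a point configuration: each point p gets weight
    # exp(-|p|^2 / s^2), close to 1 near the origin and tending to 0 at
    # infinity; s plays the role of R as the spread parameter.
    def kernel_sum(points, s):
        return sum(math.exp(-(x * x + y * y) / (s * s)) for (x, y) in points)

    def square_lattice(M):
        # Unit square lattice: nearest-neighbor distance 1.
        return [(i, j) for i in range(-M, M + 1) for j in range(-M, M + 1)]

    def hex_lattice(M):
        # Basis (1, 0) and (1/2, sqrt(3)/2): nearest-neighbor distance 1.
        h = math.sqrt(3) / 2
        return [(i + 0.5 * j, h * j) for i in range(-M, M + 1) for j in range(-M, M + 1)]

    M = 60   # large enough that the Gaussian tail outside the sample is negligible
    for s in (5.0, 10.0, 15.0):
        print(s, kernel_sum(hex_lattice(M), s), kernel_sum(square_lattice(M), s))
    # Each sum grows like (point density) * pi * s^2, so the hexagonal packing
    # (density 2/sqrt(3) ~ 1.15 points per unit area) pulls ahead of the square
    # lattice (density 1) as s grows -- with no sharp cutoff at any radius.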
Dan (and others),

Is the problem that the densest packings are not unique?
Not exactly. The problem is that under the prevalent notion of "densest", the densest packings aren't unique (up to isometry) in those *special* dimensions (2, 8, and 24) in which they morally "ought" to be unique.
I don't see this as a problem per se. For example: In the Kepler problem of packing spheres in 3D, the packings obtained from any sequence of what are sometimes called the "a", "b", and "c" packings in successive layers -- as long as no two successive layers are given by the same letter -- are all tied for densest. This gives uncountably many ties. This may be surprising, but it's just the way things are, and I see no need to arrange things so that this is not the case.
I agree. However, in the theory I envision, it would be a theorem that these uncountably many packings, AND NO OTHERS, are the densest packings. So there'd be an explicit parametrization of the densest packings.
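For concreteness, a minimal sketch of the bookkeeping behind that parametrization (the helper name layer_words is mine): the packings in question correspond to layer sequences, i.e. words over {a, b, c} with no two consecutive letters equal, so there are 3 * 2^(n-1) admissible words of length n and hence uncountably many infinite ones.

    # Enumerate length-n stacking sequences over {a, b, c} with no letter
    # repeated consecutively, and check the count 3 * 2^(n-1).
    from itertools import product

    def layer_words(n):
        return [w for w in product("abc", repeat=n)
                if all(x != y for x, y in zip(w, w[1:]))]

    for n in range(1, 6):
        assert len(layer_words(n)) == 3 * 2 ** (n - 1)
    print(layer_words(3))  # ('a','b','a'), ('a','b','c'), ('a','c','a'), ...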
On the other hand, if asymptotic density is the only thing to maximize, I find it somewhat unpleasant that starting with, say, the fcc packing and removing all (unit) spheres lying closer to the origin than a distance of 10^10^10^10^10^10^10^10^10^10 (exponents grouped from the top down) results in yet another packing tied for densest.
That's part of my motivation too.
Jim: Before we get to how your theory might be constructed, I would like to understand much better what it is you are aiming to achieve or avoid.
That's a great question, and one that I'll probably be able to answer correctly only after giving a few incorrect answers. But let me make a start.

One characteristic of the theory should be that in two dimensions, the densest ways to pack disks of equal size are precisely the highly symmetrical six-around-one packings and nothing else. Similar uniqueness claims should prevail in dimensions 8 and 24. (I also suspect that in three dimensions, the densest packings are the Barlow packings and nothing else.)

I wrote something a few years back about this, which I've lightly edited and uploaded to http://jamespropp.org/thoughts-about-packings.pdf . There I use a quadratic-exponential integration kernel to regularize divergent integrals (which in this case are really divergent sums, since the functions being integrated are infinite sums of delta functions).

In the toy version of this theory that I've been exploring lately, where I'm just packing {0,1,2,3,...}, I can use a linear-exponential summation kernel, and life is simpler (since rational functions are nicer than theta functions!), but I still can't prove the uniqueness results that I sense are true.

Jim
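To see concretely how the linear-exponential kernel separates the two orders of magnitude, here is a worked comparison using the sets S_0, S_1, S_2 from the earlier post (the expansions around q = 1 are routine):

  |S_0|_q = \sum_{k \ge 0} q^{2k} = 1/(1 - q^2) = 1/(2(1-q)) + 1/4 + O(1-q)
  |S_1|_q = q/(1 - q^2), so |S_0|_q - |S_1|_q = (1-q)/(1-q^2) = 1/(1+q) -> 1/2
  |S_2|_q = (1 + q + q^2)/(1 - q^8) = 3/(8(1-q)) + O(1)

So S_0 beats S_2 at the leading 1/(1-q) (density) order, since 1/2 > 3/8, while S_0 beats S_1 only at constant order. Likewise, deleting a single element n from S_0 subtracts exactly q^n, again a constant-order change: invisible to density, but visible to the q-series. The two kinds of advantage really do live at different orders of the valuation.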