[math-fun] Pegg's crashing computer
How did you find out what the problem was and how to fix it? (I have a similar crashing problem on my computer. I suspected it was a bad graphics chip.)
If you have crashing problems, first check the logs for clues. Failing that, run your machine's diagnostics, followed by memtest. Bad RAM is not *that* uncommon.

I do all my real work on server-grade hardware with ECC RAM, and I avoid discrete graphics cards: between heat, hardware failures, and driver issues, they just seem more hassle than they're worth. I encounter about one bad DIMM a year.

Working on a flaky computer is a danger to your sanity. Fix it or dump it.

On Fri, Jan 29, 2016 at 11:14 AM, Warren D Smith <warren.wds@gmail.com> wrote:
_______________________________________________ math-fun mailing list math-fun@mailman.xmission.com https://mailman.xmission.com/cgi-bin/mailman/listinfo/math-fun
-- -- http://cube20.org/ -- [ <http://golly.sf.net/>Golly link suppressed; ask me why] --
I finally figured it out by running a RAM checker at boot-up. There were about 6000 bad memory locations.

On Tue, Feb 2, 2016 at 12:12 PM, Tom Rokicki <rokicki@gmail.com> wrote:
Suppose we have a finite sum of nonnegative numbers that equals 1:

(*)  x_1 + ... + x_n = 1 .

Because TV viewers are so easily confused by several digits beyond the decimal point, we want to convert (*) to a sum of integer percentages P_j in the most felicitous, or least infelicitous, way:

P_1 + ... + P_n = 100%

For example, let n = 3. Then we want to map the points of the triangle

{(x,y,z) in (R_0)^3 | x + y + z = 1}

to the discrete set of 5151 points

{(M,N,P) in (Z_0)^3 | M + N + P = 100}

(where R_0 and Z_0 denote the nonnegative reals and integers, respectively).

One obvious thing to try is to minimize the sum of squared errors between the exact and rounded percentages. So let

(**)  E(M,N,P) = (100x - M)^2 + (100y - N)^2 + (100z - P)^2

and pick (M,N,P) such that E(M,N,P) is minimized.

For the sake of this question, let's ignore the measure-zero set of cases where the (M,N,P) minimizing (**) is not unique.

Questions:
----------

1. Let n > 0 be arbitrary. Is there a good algorithm for applying (**) in practice, to get the quantization (x_1,...,x_n) |—> (P_1,...,P_n)?

2. Is there a better or more useful measure of error than the sum of squared differences? If so, why is it better?

—Dan
In answer to (1), defining "good" as "fast": the appropriate algorithm is "round each value; determine the total error (say, +e); then round e of the values the other way, selecting those with the least individual error." This is a linear-time algorithm. Did you have another idea for what "good" might mean here?

In this case, minimizing the sum of absolute errors and minimizing the sum of squared errors give the same answer.

On Tue, Feb 2, 2016 at 3:22 PM, Dan Asimov <asimov@msri.org> wrote:
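Tom's round-and-adjust scheme can be sketched in a few lines of Python (my own sketch; the function name and the use of plain `round` are my assumptions, not Tom's):

```python
def to_percentages(xs, total=100):
    """Round each scaled value, then repair the total by flipping the
    roundings that cost the least (the round-and-adjust scheme above)."""
    ys = [x * total for x in xs]   # scale to percentage units
    ps = [round(y) for y in ys]    # naive rounding
    e = sum(ps) - total            # signed surplus after naive rounding
    if e != 0:
        # residual ps[i] - ys[i] measures how far up each entry was rounded;
        # flip the |e| entries where a flip changes the error the least
        order = sorted(range(len(xs)), key=lambda i: ps[i] - ys[i],
                       reverse=(e > 0))
        for i in order[:abs(e)]:
            ps[i] += -1 if e > 0 else 1
    return ps

print(to_percentages([1/3, 1/3, 1/3]))  # one entry becomes 34 so the total is 100
```

The sort makes this sketch O(n log n) rather than linear; replacing it with a linear-time selection recovers the bound Tom claims.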
In the case n = 3, we need to figure out which hexagonal Voronoi region a given triple (x,y,z) lies in.

As n increases beyond 3, the Voronoi regions become increasingly complicated polytopes.

Suppose n is 30. The number of possible ways to round each of 30 numbers up or down is over a billion. Maybe there's a faster way to decide which Voronoi region (x_1,...,x_30) belongs to. Then again, maybe not.

—Dan
On Feb 2, 2016, at 3:44 PM, Tom Rokicki <rokicki@gmail.com> wrote:
The Voronoi cells are permutohedra: https://en.wikipedia.org/wiki/Permutohedron

On Tue, Feb 2, 2016 at 4:01 PM, Dan Asimov <asimov@msri.org> wrote:
-- Mike Stay - metaweta@gmail.com http://www.cs.auckland.ac.nz/~mike http://reperiendi.wordpress.com
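For concreteness: the order-n permutohedron is the convex hull of all permutations of (1, ..., n). A throwaway Python snippet (mine, purely illustrative) lists the order-3 case:

```python
from itertools import permutations

# Vertices of the order-3 permutohedron: the six permutations of (1, 2, 3).
# They form a regular hexagon lying in the plane x + y + z = 6.
verts = sorted(permutations((1, 2, 3)))
print(len(verts), verts)  # 6 vertices
```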
On Feb 2, 2016, at 6:22 PM, Dan Asimov <asimov@msri.org> wrote:
Questions:

1. Let n > 0 be arbitrary. Is there a good algorithm for applying (**) in practice, to get the quantization (x_1,...,x_n) |—> (P_1,...,P_n)?

2. Is there a better or more useful measure of error than the sum of squared differences? If so, why is it better?
—Dan
I wonder what you were watching last night ...

The squared distance measure is standard when quantizing signals:

J.H. Conway and N.J.A. Sloane, Fast quantizing and decoding algorithms for lattice quantizers and codes, IEEE Trans. Inf. Theory 28, 227-232 (1982).

The lattice for your problem is called A_(n-1). The quantization algorithm for this lattice is given in the Conway-Sloane paper. Here's a Mma implementation:

proj[x_, c_] := Module[{y, z, d, p, ip, zp},
  y = c x/Total[x];
  z = Round[y];
  d = Total[z] - c;
  If[d == 0, Return[z]];
  p = Ordering[y - z];
  ip = InversePermutation[p];
  zp = z[[p]];
  If[d > 0,
   Join[Take[zp, d] - 1, Drop[zp, d]][[ip]],
   Join[Drop[zp, d], Take[zp, d] + 1][[ip]]]
  ]

x = {Pi, E, GoldenRatio};
proj[x, 100]

{42, 36, 22}

-Veit
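For readers without Mathematica, here is a rough Python translation (mine, not Veit's) of the same A_(n-1) quantizer:

```python
from math import pi, e

def proj(x, c):
    """Nearest integer vector with coordinate sum c to c*x/sum(x):
    round, then undo the |d| worst roundings to restore the sum."""
    total = sum(x)
    y = [c * v / total for v in x]
    z = [round(v) for v in y]
    d = sum(z) - c
    if d == 0:
        return z
    # Sort indices by residual y - z, ascending: the most negative
    # residuals were rounded up hardest, the most positive down hardest.
    p = sorted(range(len(x)), key=lambda i: y[i] - z[i])
    if d > 0:
        for i in p[:d]:    # undo the d most aggressive round-ups
            z[i] -= 1
    else:
        for i in p[d:]:    # undo the |d| most aggressive round-downs
            z[i] += 1
    return z

golden = (1 + 5 ** 0.5) / 2
print(proj([pi, e, golden], 100))  # [42, 36, 22]
```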
Thanks, everyone. Veit's answer is just what I was wondering about. —Dan
On Feb 2, 2016, at 4:00 PM, Veit Elser <ve10@cornell.edu> wrote:
The squared distance measure is standard when quantizing signals:
J.H. Conway and N.J.A. Sloane, Fast quantizing and decoding algorithms for lattice quantizers and codes, IEEE Trans. Inf. Theory 28, 227-232 (1982).
. . . . . .
Is the MMA algorithm a faithful implementation of Conway and Sloane's algorithm?

The MMA algorithm is O(n log n) due to Ordering[], but there's a well-known linear-time algorithm; surely Conway and Sloane's algorithm is linear-time?

-tom

On Tue, Feb 2, 2016 at 4:00 PM, Veit Elser <ve10@cornell.edu> wrote:
On Feb 2, 2016, at 7:53 PM, Tom Rokicki <rokicki@gmail.com> wrote:
Is the MMA algorithm a faithful implementation of Conway and Sloane's algorithm?
The MMA algorithm is O(n log n) due to Ordering[], but there's a well-known linear-time algorithm; certainly Conway and Sloane's algorithm is linear-time?
-tom
From C&S: "The only substantial amount of computation needed is for the sort in Step 3, which takes O(n log n) steps."

My Mma code is doing pretty much exactly what I subsequently saw in your beautifully terse description: "round, determine total error (say, +e), "e" values to round the other way selecting those with the least individual error".

How do you identify the "e" elements with least individual error without doing a sort?

-Veit
Here's C++ code. The key is the linear-time selection implemented by nth_element. (This code is untested.)

#include <vector>
#include <algorithm>
#include <cmath>
#include <utility>
using namespace std ;

vector<int> quant(const vector<double> &x, int goalsum) {
   vector<pair<double, int>> a(x.size()) ; // [i] -> (-fractional part, index)
   vector<int> r(x.size()) ;
   size_t total_error = (size_t)goalsum ;
   for (size_t i=0; i<x.size(); i++) {
      int f = (int)floor(x[i]*goalsum) ;
      r[i] = f ;
      a[i] = make_pair(f-x[i]*goalsum, (int)i) ;
      total_error -= f ;
   }
   // partition so the total_error entries with the largest fractional
   // parts (most negative keys) come first; no full sort needed
   nth_element(a.begin(), a.begin()+total_error, a.end()) ;
   for (size_t i=0; i<total_error; i++)
      r[a[i].second]++ ;
   return r ;
}

On Tue, Feb 2, 2016 at 5:13 PM, Veit Elser <ve10@cornell.edu> wrote:
These remind me of the various rules in social choice theory (Jefferson/D'Hondt, Hamilton, Sainte-Laguë, etc.) for allocating integer numbers of seats to parties or states in proportion to their vote or population shares. Lots of fun paradoxes there depending on how you do it, most famously (?) the Alabama paradox: Alabama would have gotten 8 representatives if the House had 299 seats, but only 7 if one more seat were added.

Charles Greathouse
Analyst/Programmer
Case Western Reserve University

On Tue, Feb 2, 2016 at 8:16 PM, Tom Rokicki <rokicki@gmail.com> wrote:
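Hamilton's method is the same floor-then-largest-remainder scheme as the C++ code above, and the Alabama paradox is easy to reproduce with it. A small Python illustration (my own; the three-state populations are a textbook-style toy example, not the historical data):

```python
from math import floor

def hamilton(populations, seats):
    """Largest-remainder (Hamilton) apportionment."""
    total = sum(populations)
    quotas = [p * seats / total for p in populations]
    alloc = [floor(q) for q in quotas]
    leftover = seats - sum(alloc)
    # hand out the leftover seats by largest fractional remainder
    by_remainder = sorted(range(len(populations)),
                          key=lambda i: quotas[i] - alloc[i], reverse=True)
    for i in by_remainder[:leftover]:
        alloc[i] += 1
    return alloc

# the small state loses a seat when the house *grows* from 10 to 11
print(hamilton([6, 6, 2], 10))  # [4, 4, 2]
print(hamilton([6, 6, 2], 11))  # [5, 5, 1]
```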
participants (8):
- Charles Greathouse
- Dan Asimov
- Dan Asimov
- Ed Pegg Jr
- Mike Stay
- Tom Rokicki
- Veit Elser
- Warren D Smith