[math-fun] branchless programming
I'm interested in algorithms which eliminate conditional branches: the following problem turns up in attempting to design a robust, straight-line numerical quadratic equation solver, in the subcase of a double root.

Find functions f = f(a,b,c), g = g(a,b,c) such that, given any 3 real numbers (a,b,c) /= (0,0,0) with b*b = a*c, it is guaranteed that (f*a + g*b, f*b + g*c) /= (0,0).

One way to do this (harking back to a related discussion on math-fun a few years back) would be to set f = h(a a + b b), g = h(b b + c c), where we define h(x) = ceiling(|x|) / max(1, ceiling(|x|)), which equals 0 when x is zero and 1 otherwise. [The absolute value is of course redundant in this application.]

But I wonder whether it's possible to solve the problem more elegantly, using (say) polynomial f, g; alternatively, is there some obvious reason why this might be impossible?

Fred Lunnon
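For concreteness, a minimal C sketch of this indicator (the function name and the use of ceil/fabs/fmax from <math.h> are my own choices, not from the post, and it assumes finite input):

    #include <math.h>

    /* h(x) = ceiling(|x|) / max(1, ceiling(|x|)):
       evaluates to 0.0 when x == 0 and to 1.0 otherwise,
       with no conditional branch in the source. */
    double h(double x) {
        double t = ceil(fabs(x));
        return t / fmax(1.0, t);
    }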
On 10/22/07, Fred lunnon <fred.lunnon@gmail.com> wrote:
Find functions f = f(a,b,c), g = g(a,b,c) such that, given any 3 real numbers (a,b,c) /= (0,0,0) with b*b = a*c, it is guaranteed that (f*a + g*b, f*b + g*c) /= (0,0).
I'm not sure I understand exactly what is necessary here, but it sure reminds me of a Putnam problem, A1 from 196... 1962?? not sure exactly the year. The question: find a real polynomial in two variables whose range is (0, infinity). Yeah, that's an OPEN PARENTHESIS there, not [0, infinity).

A few blank lines so as not to spoil your fun ...

The solution, e.g.: x^2 + (1 - xy)^2

Actually that's not exactly the story: the Putnam proposer thought that no such polynomial existed, and the original problem said something like "Does a polynomial exist ...". Only after a student exhibited this polynomial did they realize that their proof that no such polynomial existed was flawed.

--Joshua Zucker
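A quick sketch of why the range is exactly (0, infinity): x^2 + (1 - xy)^2 = 0 would require x = 0 and xy = 1 simultaneously, which is impossible, so 0 is never attained; on the other hand, for any t > 0, taking x = sqrt(t) and y = 1/sqrt(t) kills the second term and yields the value t, so every positive real is attained.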
On 10/22/07, Fred lunnon <fred.lunnon@gmail.com> wrote:
... Find functions f = f(a,b,c), g = g(a,b,c) such that, given any 3 real numbers (a,b,c) /= (0,0,0) with b*b = a*c, it is guaranteed that (f*a + g*b, f*b + g*c) /= (0,0).
One way to do this (harking back to a related discussion on math-fun a few years back) would be to set f = h(a a + b b), g = h(b b + c c), where we define h(x) = ceiling(|x|) / max(1, ceiling(|x|)), which equals 0 when x is zero and 1 otherwise. [The absolute value is of course redundant in this application.] ...
On 10/23/07, Joshua Zucker <joshua.zucker@gmail.com> wrote:
... I'm not sure I understand exactly what is necessary here,
Not surprisingly, since my bogus "solution" should have read f = h(a + b), g = 1. This works because when a + b /= 0 the pair is (a+b, b+c), whose first component is nonzero; and when a + b = 0 it reduces to (b, c), which is nonzero since b = c = 0 would then force a = 0 as well.

[Note that the (in general equivalent) inhomogeneous ratio (f b + g c)/(f a + g b) = b/a = c/b is independent of f, g.]

Thinking about notation for a moment, perhaps there is a place here for a notation I have always despised: using, say, [a : b] = [b : c] to denote the ratio of two or more scalars --- or the equivalent projective (homogeneous) vector --- which is meaningful provided at least one component is nonzero.
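A minimal C sketch of this corrected recipe (my illustration, not Fred's code; the function name fg and the output convention are assumptions, and h is the indicator sketched earlier in the thread):

    #include <math.h>

    /* Branchless indicator as before: 0.0 when x == 0, 1.0 otherwise. */
    double h(double x) {
        double t = ceil(fabs(x));
        return t / fmax(1.0, t);
    }

    /* Corrected choice f = h(a+b), g = 1: writes (f*a + g*b, f*b + g*c)
       into (u, v).  If a+b != 0 this is (a+b, b+c), whose first
       component is nonzero; if a+b == 0 it is (b, c), which vanishes
       only when (a,b,c) = (0,0,0), excluded by hypothesis. */
    void fg(double a, double b, double c, double *u, double *v) {
        double f = h(a + b);
        *u = f*a + b;
        *v = f*b + c;
    }

The homogeneous ratio [u : v] then recovers b/a = c/b without any branch, whichever normalization the quadratic solver uses.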
but it sure reminds me of a Putnam problem ...
Nice story. There's a bunch of similar, apparently innocuous questions, sufficiently off the beaten track to make it difficult to formulate any reliable intuition about them. WFL