On 17 Oct 93 on this list I mentioned getting B_n(x) by inverting a matrix made
from Pascal's triangle.  Here it is, with some wrinkles.  The Pascal matrix:

(c1039) block([gcd:false],
          genmatrix(lambda([i,j], if j>i then 0 else binomial(i,j-1)/i/x^(i-1)), 8))

(d1039)  (entries above the diagonal are 0 and are omitted here):

[ 1                                                                          ]
[ 1/(2x)    2/(2x)                                                           ]
[ 1/(3x^2)  3/(3x^2)   3/(3x^2)                                              ]
[ 1/(4x^3)  4/(4x^3)   6/(4x^3)   4/(4x^3)                                   ]
[ 1/(5x^4)  5/(5x^4)  10/(5x^4)  10/(5x^4)   5/(5x^4)                        ]
[ 1/(6x^5)  6/(6x^5)  15/(6x^5)  20/(6x^5)  15/(6x^5)   6/(6x^5)             ]
[ 1/(7x^6)  7/(7x^6)  21/(7x^6)  35/(7x^6)  35/(7x^6)  21/(7x^6)   7/(7x^6)  ]
[ 1/(8x^7)  8/(8x^7)  28/(8x^7)  56/(8x^7)  70/(8x^7)  56/(8x^7)  28/(8x^7)  8/(8x^7) ]

The first approximation to its inverse:

(c1040) (t^(1..8), coefmatrix(%%*x^(0..7), %%))

(d1040)  the diagonal matrix diag(1, x, x^2, ..., x^7).

Newton step #1 for the inverse:

(c1041) 2*%-% . d1039 . %

(d1041)
[  1                                                                   ]
[ -1/2   x                                                             ]
[ -1/3  -x     x^2                                                     ]
[ -1/4  -x   -3x^2/2     x^3                                           ]
[ -1/5  -x   -2x^2     -2x^3       x^4                                 ]
[ -1/6  -x   -5x^2/2  -10x^3/3   -5x^4/2    x^5                        ]
[ -1/7  -x   -3x^2     -5x^3     -5x^4     -3x^5     x^6               ]
[ -1/8  -x   -7x^2/2   -7x^3    -35x^4/4   -7x^5    -7x^6/2   x^7      ]

#2:

(c1042) 2*%-% . D1039 . 
%

(d1042)  (zeros above the diagonal omitted):

[  1                                                                      ]
[ -1/2        x                                                           ]
[  1/6       -x          x^2                                              ]
[  0          x/2       -3x^2/2     x^3                                   ]
[ -23/15      0          x^2       -2x^3      x^4                         ]
[ -25/4     -23x/3       0          5x^3/3   -5x^4/2   x^5                ]
[ -1573/84  -75x/2     -23x^2       0         5x^4/2  -3x^5     x^6       ]
[ -301/6   -1573x/12  -525x^2/4  -161x^3/3    0        7x^5/2  -7x^6/2  x^7 ]

Finally, #3:

(c1043) 2*%-% . d1039 . %

(d1043)
[  1                                                            ]
[ -1/2    x                                                     ]
[  1/6   -x       x^2                                           ]
[  0      x/2    -3x^2/2    x^3                                 ]
[ -1/30   0       x^2      -2x^3     x^4                        ]
[  0     -x/6     0         5x^3/3  -5x^4/2   x^5               ]
[  1/42   0      -x^2/2     0        5x^4/2  -3x^5     x^6      ]
[  0      x/6     0        -7x^3/6   0        7x^5/2  -7x^6/2  x^7 ]

The row sums are the first eight Bernoulli polynomials.  This isn't a bad way
to compute them, especially if you use the iteration x: x + (x - x.a.x), where
2^k subdiagonals of the matrix in parens vanish on the kth iteration.  And
there is no need to compute the unconverged diagonals:

(c1112) genmatrix(lambda([i,j],
          if i-j>3 then [this,crud,doesn\'t,matter,one,bit,believe,it,"or","not"][binomial(i-4,2)+j]
          else d1042[i,j]), 8)

(d1112)
[  1                                                             ]
[ -1/2      x                                                    ]
[  1/6     -x         x^2                                        ]
[  0        x/2      -3x^2/2    x^3                              ]
[  this     0         x^2      -2x^3     x^4                     ]
[  crud     doesn't   0         5x^3/3  -5x^4/2   x^5            ]
[  matter   one       bit       0        5x^4/2  -3x^5    x^6    ]
[  believe  it        or        not      0        7x^5/2 -7x^6/2  x^7 ]

(c1113) expand(2*%-% . d1039 . 
%)

(d1113)  (the junk is repaired; this is d1043 again):

[  1                                                            ]
[ -1/2    x                                                     ]
[  1/6   -x       x^2                                           ]
[  0      x/2    -3x^2/2    x^3                                 ]
[ -1/30   0       x^2      -2x^3     x^4                        ]
[  0     -x/6     0         5x^3/3  -5x^4/2   x^5               ]
[  1/42   0      -x^2/2     0        5x^4/2  -3x^5     x^6      ]
[  0      x/6     0        -7x^3/6   0        7x^5/2  -7x^6/2  x^7 ]

Further hack: only compute the next 2^k terms of the last row, and partially
fill in the preceding rows by (trivial) differentiation.  Finally, one can
double the matrix size each iteration by filling in n diagonals via trivial
integration, then running one Newton step.

Two days earlier I gave

(d241)
           m
         /===\  [ (k-m)/(k+1)   (k-m)(x+k-1)^n/(k+1)   (x+k-1)^n/k ]     [ 0  0  bernpoly(x, n) ]
    m (   | |   [      0             (k-m)/(k+1)           1/k     ] ) = [ 0  0  harmonic(m)    ]
          | |   [      0                  0                 1      ]     [ 0  0        m        ]
         k = 1

where m is any integer > n.  Then,

From conway@math.Princeton.EDU Sat Oct 16 18:55:31 1993
Message-Id: <9310162255.AA14049@broccoli.princeton.edu>
To: math-fun@cs.arizona.edu, rwg@NEWTON.macsyma.com
Subject: Re: gamma function

  Those coefficients are just the partial sums of the series

      (nC0)/n - (nC1)/(n-1) + (nC2)/(n-2) - ...

  with alternating signs.  So this is a FANTASTIC explicit formula
  [equivalent to

      bernpoly(x, n) = - sum( (-1)^k * binomial(n+1, k) * sum((x+j)^n, j, 0, k-1) / k,
                              k, 1, n+1 )

  --rwg] for the Bernoulli polynomials.  May I have partial credit, or did
  you get it first?   JHC

To which I sniffed

    It's all yours.  Only, why is a single product of [matrices of] rational
    functions less fantastic than a double sum of binomials?

and he replied

    it simplifies further!   jhc

Yes...?
-----
But he didn't say.  The answer to my sniff is clearly that there are more
uses for formulas than mere recipes for calculation.
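Conway's double sum is easy to check by brute force with exact rational
arithmetic.  A minimal sketch in Python rather than Macsyma (the function name
`bernpoly_conway` and the test points are mine; `Fraction` stands in for
Macsyma's exact rationals):

```python
from fractions import Fraction as F
from math import comb

def bernpoly_conway(x, n):
    """Bernoulli polynomial B_n(x) via Conway's double sum:
    -sum((-1)^k C(n+1,k)/k * sum((x+j)^n, j=0..k-1), k=1..n+1)."""
    return -sum(F((-1) ** k * comb(n + 1, k), k)      # (-1)^k C(n+1,k)/k
                * sum((x + j) ** n for j in range(k))  # inner sum over j
                for k in range(1, n + 2))

# Spot checks against the classical polynomials at a rational point:
x = F(1, 3)
assert bernpoly_conway(x, 0) == 1                      # B_0(x) = 1
assert bernpoly_conway(x, 1) == x - F(1, 2)            # B_1(x) = x - 1/2
assert bernpoly_conway(x, 2) == x**2 - x + F(1, 6)     # B_2(x)
assert bernpoly_conway(x, 3) == x**3 - 3*x**2/2 + x/2  # B_3(x)
```

Note that only n+1 terms are needed, in contrast to the m > n factors of the
matrix product.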
However, it turns out "It's all yours" was too generous.  His double sum is
term-for-term the m = n+1 case of my matrix product.  The product of upper
triangular 3x3s

      n
    /===\   [ a(k)  b(k)  c(k) ]
     | |    [  0    d(k)  e(k) ]
     | |    [  0     0     1   ]
    k = 1

computes in its upper right element the triangular double sum

 (*)   sum( ( c(k) * prod(a(r)/d(r), r, 1, k-1)
              + sum( prod(a(i)/d(i), i, 1, j-1) * b(j)/d(j), j, 1, k-1 ) * e(k) )
            * prod(d(r), r, 1, k-1),
           k, 1, n ).

(Notice the pun on "triangular".  The two senses are unconnected, since the
product of upper triangular 4x4s computes a tetrahedral triple sum.)  When fed
the assignments of a(k), ..., e(k) in (d241) above, the leftmost product in (*)
merges with the inner sum as its j = 0 term.

There was also one of those ironic email misunderstandings:

JHC> The definition I prefer is (in the exponential notation,
rwg> Umbrage!
JHC> which it's worth using) B^n = (B-1)^n for n > 1.  Putting n = 2, and
JHC> expanding formally (which is what the notation means):
JHC> B^2 = B^2 - 2 B^1 + 1, so B^1 = 1/2, [...]

My interjection was just playful recognition of the umbral calculus, but
Conway took it to be sincere umbrage, and, unfortunately, went somewhat on
the defensive.
                                                                --rwg
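The whole session above fits in a few lines of any language with exact
rationals.  Here is a sketch in Python rather than Macsyma (the test point
x = 3/7 and all identifiers are my choices): it builds the 8x8 Pascal matrix
of (c1039), runs three Newton steps from the diagonal seed (d1040), and checks
that the row sums of the inverse are B_0(x), ..., B_7(x) at the test point.

```python
from fractions import Fraction as F
from math import comb

N = 8
x = F(3, 7)  # arbitrary rational test point (my choice, not from the post)

# Pascal matrix of (c1039): entry (i,j) = binomial(i,j-1)/(i x^(i-1)), 1-based
A = [[F(comb(i, j - 1), i) / x ** (i - 1) if j <= i else F(0)
      for j in range(1, N + 1)] for i in range(1, N + 1)]

def matmul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(N)) for j in range(N)]
            for i in range(N)]

# First approximation (d1040): diag(1, x, x^2, ..., x^7)
X = [[x ** i if i == j else F(0) for j in range(N)] for i in range(N)]

# Newton steps X <- 2X - X.A.X.  The error I - A.X starts strictly lower
# triangular and squares each step, so three steps give the exact 8x8 inverse.
for _ in range(3):
    XAX = matmul(matmul(X, A), X)
    X = [[2 * X[i][j] - XAX[i][j] for j in range(N)] for i in range(N)]

row_sums = [sum(row) for row in X]

# Independent check: B_n(x) from Bernoulli numbers (B_1 = -1/2 convention)
bern = [F(1)]
for n in range(1, N):
    bern.append(-sum(comb(n + 1, k) * bern[k] for k in range(n)) / (n + 1))

def bernpoly(n, t):
    return sum(comb(n, k) * bern[k] * t ** (n - k) for k in range(n + 1))

assert row_sums == [bernpoly(n, x) for n in range(N)]
```

The correctness of the row-sum check rests on the identity
sum(binomial(n,k)*B_k(x), k, 0, n-1) = n x^(n-1), which says exactly that the
Pascal matrix times the column of Bernoulli polynomials is a column of ones.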