I recently attended a conference on modern parallel computers, and they are one heck of a lot different from what I grew up with. In particular, parallel redundant computations can be free.

It occurred to me that the redundant representation of a complex number a+bi by the 2x2 matrix

  [ a  -b ]
  [ b   a ]

may no longer be wasteful. In particular, multiplication of complex*complex may actually be faster using these matrices than when using pairs: the matrix product is four independent dot products, each reading its own private copies of a and b. If you have enough parallelism, addition/subtraction won't cost any more. Even norm(a+bi) = a*a + b*b = determinant of the matrix can be faster than when using pairs. The reason: it may cost more for more than one arithmetic processor, or more than one part of a single arithmetic processor, to read a & b simultaneously. Multi-ported memories are more expensive and slower than single-ported memories. Ditto for quaternions (heavily used in graphics processing today) represented by matrices.

Since communication of these complex values can be expensive, it may pay to constantly convert back & forth between the pair and 2x2 formats. (A sketch of both representations appears at the end of this note.)

---

I can imagine all sorts of numeric processes in which different formulae/methods are utilized, and some voting mechanism is used to select the "correct" answer. If some of the formulae/methods blow up due to bugs and/or singularities and/or ill-conditioning, hopefully there will be others to pick up the slack. In some of these computers, there may be tens of arithmetic units sitting around idle, so the redundancy may not cost anything (except perhaps some amount of electrical energy). In particular, one could conceive of performing a floating point calculation in many different orders simultaneously to see how divergent the values might be. (Sketches of both ideas also follow below.)

---

It's a new day in computer land!
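
P.S. For concreteness, a minimal sketch of the two representations in C, assuming nothing beyond the standard library; the type and function names (cpair, cmat2, cmul_mat, and so on) are mine, not any established API. Note how the matrix multiply is four independent dot products, each touching its own copies of a and b:

    #include <stdio.h>

    typedef struct { double re, im; } cpair;       /* a+bi as a pair      */
    typedef struct { double m[2][2]; } cmat2;      /* a+bi as [a -b; b a] */

    /* pair -> matrix: duplicate a and b into the redundant slots */
    cmat2 to_mat(cpair z) {
        cmat2 r = {{{ z.re, -z.im }, { z.im, z.re }}};
        return r;
    }

    /* matrix -> pair: read back the first column */
    cpair to_pair(cmat2 m) {
        cpair z = { m.m[0][0], m.m[1][0] };
        return z;
    }

    /* matrix product: four independent dot products, no shared reads */
    cmat2 cmul_mat(cmat2 x, cmat2 y) {
        cmat2 r;
        for (int i = 0; i < 2; i++)
            for (int j = 0; j < 2; j++)
                r.m[i][j] = x.m[i][0] * y.m[0][j] + x.m[i][1] * y.m[1][j];
        return r;
    }

    /* norm(a+bi) = a*a + b*b = determinant of the matrix form */
    double cnorm_mat(cmat2 m) {
        return m.m[0][0] * m.m[1][1] - m.m[0][1] * m.m[1][0];
    }

    int main(void) {
        cpair z = { 1.0, 2.0 }, w = { 3.0, -1.0 };
        cpair p = to_pair(cmul_mat(to_mat(z), to_mat(w)));
        printf("(1+2i)(3-1i) = %g%+gi, norm(1+2i) = %g\n",
               p.re, p.im, cnorm_mat(to_mat(z)));
        return 0;
    }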
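
An equally hypothetical sketch of the voting idea: evaluate one quantity by three algebraically equivalent formulae and take the median as the vote. The example quantity, (1 - cos x)/x^2, is my choice; for tiny x the direct formula loses all its digits to cancellation, and the other two methods pick up the slack:

    #include <stdio.h>
    #include <math.h>

    double direct(double x)     { return (1.0 - cos(x)) / (x * x); }
    double half_angle(double x) { double s = sin(x / 2.0);
                                  return 2.0 * s * s / (x * x); }
    double taylor(double x)     { return 0.5 - x * x / 24.0
                                             + x * x * x * x / 720.0; }

    /* median of three = the simplest possible voting mechanism */
    double vote(double a, double b, double c) {
        if (a > b) { double t = a; a = b; b = t; }
        if (b > c) { double t = b; b = c; c = t; }
        if (a > b) { double t = a; a = b; b = t; }
        return b;
    }

    int main(void) {
        double x = 1e-8;   /* small enough to expose the cancellation */
        double d = direct(x), h = half_angle(x), t = taylor(x);
        printf("direct     = %.17g\n", d);   /* likely 0: cancellation */
        printf("half-angle = %.17g\n", h);
        printf("taylor     = %.17g\n", t);
        printf("vote       = %.17g\n", vote(d, h, t));
        return 0;
    }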
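
And a sketch of the order-divergence experiment (run sequentially here for clarity; on one of these machines the three orders could run at once): sum the same array forward, backward, and pairwise, and report the spread of the answers as a crude estimate of how divergent the values are:

    #include <stdio.h>

    double sum_forward(const double *a, int n) {
        double s = 0.0;
        for (int i = 0; i < n; i++) s += a[i];
        return s;
    }

    double sum_backward(const double *a, int n) {
        double s = 0.0;
        for (int i = n - 1; i >= 0; i--) s += a[i];
        return s;
    }

    /* pairwise (tree) summation -- the order a parallel machine favors */
    double sum_pairwise(const double *a, int n) {
        if (n == 1) return a[0];
        int h = n / 2;
        return sum_pairwise(a, h) + sum_pairwise(a + h, n - h);
    }

    int main(void) {
        enum { N = 1 << 20 };
        static double a[N];
        for (int i = 0; i < N; i++)
            a[i] = 1.0 / (i + 1.0);   /* harmonic terms: mixed magnitudes */
        double f = sum_forward(a, N);
        double b = sum_backward(a, N);
        double p = sum_pairwise(a, N);
        double lo = f, hi = f;
        if (b < lo) lo = b; if (b > hi) hi = b;
        if (p < lo) lo = p; if (p > hi) hi = p;
        printf("forward  = %.17g\n", f);
        printf("backward = %.17g\n", b);
        printf("pairwise = %.17g\n", p);
        printf("spread   = %.3g\n", hi - lo);
        return 0;
    }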