I looked at the transreal axioms, and they look cobbled together: essentially real arithmetic extended with +inf and -inf values, plus a nullity value as a catch-all for indeterminate results. Unintuitive things happen, like ((-inf)^-1)^-1 = inf (since 1/(-inf) = 0 and 1/0 = inf in transreal arithmetic), so beyond their lattice structure the transreals probably do not have a pretty algebraic structure.

In everyday programming, when you run into division by zero, it generally indicates an error condition or a special case to be handled. However you detect the division by zero, you still have to deal with the condition that brought it about. Whether you say

    // Defensive programming
    if (b != 0) {
        c = a/b;
    } else {
        doSomethingSpecial();
    }

or

    // Exception handling
    try {
        c = a/b;
    } catch (DivideByZeroException) {
        doSomethingSpecial();
    }

or

    // Special value detection
    c = a/b;
    if (c == infinity or c == -infinity or c == nullity) {
        doSomethingSpecial();
    }

you are still required to detect the division by zero and invoke doSomethingSpecial() to handle it; otherwise your program fails. Transreal arithmetic does not relieve you of the burden of properly handling arithmetic exceptions; it merely changes the syntax and knowledge required to detect them.

Which is better: to detect problems when they happen and possibly fail, or to let your program continue computing with meaningless values? Ultimately, failing to detect and handle arithmetic exceptions simply shifts the problem to another place in the program. If

    c = a/b;

sustains an undetected division by zero and c gets set to -infinity, infinity, or nullity, what happens later on down the line when you try to execute

    x = array[c];

At some point you have to detect the problem, and from a debugging standpoint, better sooner than later.
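As a concrete illustration, here is a minimal sketch in Java, where IEEE 754 doubles stand in for transreal values (Infinity for the infinities, NaN for nullity); the class name and the array lookup are hypothetical, mirroring the x = array[c] line above:

    public class SilentPropagation {
        public static void main(String[] args) {
            double a = 1.0;
            double b = 0.0;

            // No exception here: c silently becomes Infinity.
            double c = a / b;
            System.out.println("c = " + c);   // prints: c = Infinity

            // The special value propagates through later arithmetic unnoticed.
            double scaled = c * 2.0 + 1.0;    // still Infinity
            double nan = 0.0 / 0.0;           // NaN, the IEEE analogue of nullity

            int[] array = new int[10];

            // NaN narrows to 0 when cast to int, so this quietly reads
            // array[0], a meaningless but undetected result.
            int quiet = array[(int) nan];
            System.out.println("quiet = " + quiet);

            // Infinity narrows to Integer.MAX_VALUE, so this finally fails
            // with ArrayIndexOutOfBoundsException, far from the division
            // that actually caused the problem.
            int x = array[(int) scaled];
            System.out.println("x = " + x);   // never reached
        }
    }

Note how the NaN case is arguably worse than the crash: the program keeps running on a meaningless value, and nothing ever points you back to the division that produced it.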