Over the years, I've bored you all to tears talking about asinh numbers. Basically, an asinh number represents a real number x as the integer

  n = round(alpha * asinh(beta * x))

(to decode, x ~ sinh(n / alpha) / beta). For example, if alpha = 1/asinh(beta), then the asinh number for 1 is also 1. Since asinh(x) is an odd function that is nearly linear around 0, we get smooth transitions about 0. This means we can capture a contiguous stretch of the integers exactly within this representation, while also getting a logarithmic representation of numbers with very large absolute values.

In particular, if beta ~ 2^(-22) and we have a 32-bit asinh-number representation, then all of the signed 16-bit integers are mapped 1-1, but the overall range is ~ +-10^228. If beta ~ 2^(-46) and we have a 64-bit asinh-number representation, then all of the signed 32-bit integers are mapped 1-1, but the overall range is ~ +-10^56937.

asinh numbers are thus the *smooth* integration of Kahan's IEEE "denormalized" floats with normalized floats. asinh numbers don't have a separate exponent and mantissa, and therefore don't require a "regime" field to specify the size of the exponent. Of course, we can adjust alpha & beta so that asinh numbers have the same denormal range as IEEE denormals.

I claim that asinh numbers provide a much smoother (and therefore more accurate) set of numbers than Gustafson's "posits". If I find the time, I'll try to re-do some of Gustafson's calculations with asinh numbers.
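
For the curious, here's a minimal Python sketch of the idea, assuming the 32-bit parameters above (beta = 2^-22, alpha = 1/asinh(beta)); the names encode/decode are just mine for illustration:

  import math

  # Sketch of a 32-bit asinh-number codec, assuming beta = 2**-22 and
  # alpha = 1/asinh(beta) so that x = 1 encodes to the integer 1.
  BETA = 2.0 ** -22
  ALPHA = 1.0 / math.asinh(BETA)

  def encode(x: float) -> int:
      """asinh number of x: round(alpha * asinh(beta * x))."""
      return round(ALPHA * math.asinh(BETA * x))

  def decode(n: int) -> float:
      """Approximate inverse: x = sinh(n / alpha) / beta."""
      return math.sinh(n / ALPHA) / BETA

  # Every signed 16-bit integer is its own asinh number (the 1-1 mapping) ...
  assert all(encode(k) == k for k in range(-2**15, 2**15))

  # ... while the largest 32-bit code still reaches out to roughly 10^228.
  print(f"decode(2**31 - 1) ~ {decode(2**31 - 1):.3e}")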