mcintosh@servidor.unam.mx: Entropy is -p ln(p) summed over alternatives. It vanishes for p = 0 and p = 1, with a maximum somewhere in between, say at p = 1/e.
An interesting thing about this formula is that the maximum is independent of the base used for the logarithm. However, are you sure about that p factor? I thought the information content of a message was, roughly, how "surprising" it was, hence simply -ln(p) (which is why information gets called "negative entropy"). Getting an impossible message would be a miraculous epiphany, containing infinite information (alas, of the form "everything you know is wrong!") ;-) I'll try to dig out my copy of Shannon's original paper and see what he said.

By the way, is there an analogous "quantum entropy", which would involve taking the log of the "complex probability"?
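For what it's worth, here is a quick numerical check of both points, as a minimal Python sketch (the variable names and probabilities are just illustrative): the peak of -p log_b(p) lands at p = 1/e for any base b, and the "surprise" of a single message is just -ln(p), which diverges as p -> 0.

    import numpy as np

    # Claim 1: the maximum of -p * log_b(p) sits at p = 1/e, whatever base b is used.
    p = np.linspace(1e-9, 1.0, 2_000_000)
    for base in (2, np.e, 10):
        term = -p * np.log(p) / np.log(base)          # -p log_b(p)
        p_at_max = p[np.argmax(term)]
        print(f"base {base:6.3f}: peak at p ~ {p_at_max:.4f}   (1/e ~ {1/np.e:.4f})")

    # Claim 2: the "surprise" (information content) of a single message is -ln(p),
    # which grows without bound as the message becomes more improbable.
    for prob in (0.5, 0.1, 1e-6):
        print(f"p = {prob:8.6f}: surprisal -ln(p) = {-np.log(prob):7.3f} nats")

Running it shows the peak at roughly 0.3679 for bases 2, e, and 10 alike, so changing the base only rescales the curve; it does not move the maximum.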