On 9/1/12, Michael Kleber <michael.kleber@gmail.com> wrote:
> There's no theoretical way to prevent this, right? That is, any system in which connectivity might be lost must in some case either claim a message was sent when it wasn't, or that it wasn't when it was?
Theory might deny perfection, but Gmail is clearly far, far below what's possible. Given a connection that is unreliable at any given instant, but intermittently available over the long term, one can push the odds of successful, non-duplicated message delivery to within 2^-K of certainty for any chosen K.

For example, TCP ensures that each message (or "segment") is delivered to the receiving application exactly once by requiring the sender and receiver to agree on sequence numbers: a segment may be retransmitted many times, but the receiver discards duplicates. In the simplest scheme, the sender doesn't start sending segment N+1 until it has received an acknowledgment of segment N (a rough sketch of this appears below the signature).

http://en.wikipedia.org/wiki/Transmission_Control_Protocol#Reliable_transmis...

The reliability of the scheme depends on the number of bits in the checksum and on how good the checksum function is. TCP uses only a 16-bit checksum, but that can of course be improved. With CRC-64, everyone in the world could send a million messages before anyone would expect to see an undetected error (roughly 7 x 10^9 people times 10^6 messages is about 7 x 10^15 messages, against 2^64, or about 1.8 x 10^19, possible checksum values).

(For efficiency, modern protocols use a "window", allowing the sender and receiver to have several messages in flight and being retried at any given time, so long as the leading edge doesn't get too far ahead of the trailing edge.)

--
Robert Munafo -- mrob.com
Follow me at: gplus.to/mrob - fb.com/mrob27 - twitter.com/mrob_27 - mrob27.wordpress.com - youtube.com/user/mrob143 - rilybot.blogspot.com
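
[Editor's sketch: a minimal stop-and-wait example in Python illustrating the sequence-number idea described above. The names (unreliable_send, run) and the LOSS_RATE parameter are illustrative assumptions, not TCP's actual API; real TCP uses byte-oriented sequence numbers, timers, and a sliding window rather than this one-at-a-time loop.]

import random

random.seed(1)

LOSS_RATE = 0.3  # assumed probability that any single transmission is lost


def unreliable_send(packet, inbox):
    """Deliver `packet` to `inbox` unless the lossy channel drops it."""
    if random.random() > LOSS_RATE:
        inbox.append(packet)


def run(messages):
    delivered = []          # what the receiver's application actually sees
    expected_seq = 0        # receiver: next sequence number it will accept
    to_receiver, to_sender = [], []

    for seq, msg in enumerate(messages):
        acked = False
        while not acked:
            unreliable_send((seq, msg), to_receiver)        # (re)transmit
            while to_receiver:
                r_seq, r_msg = to_receiver.pop(0)
                if r_seq == expected_seq:                   # new data
                    delivered.append(r_msg)
                    expected_seq += 1
                # duplicates fall through: not re-delivered, but still ACKed
                unreliable_send(r_seq, to_sender)           # ACK (maybe lost)
            while to_sender:
                if to_sender.pop(0) == seq:                 # our ACK arrived
                    acked = True
    return delivered


msgs = ["hello", "hello", "world"]   # repeats in the *data* are fine
assert run(msgs) == msgs             # each delivered exactly once, in order
print("all messages delivered exactly once, in order")

Despite drops on both the data and the ACK paths, the receiver never hands a duplicate to the application and the sender never gives up on a message, which is the "exactly once, given enough retries" property being claimed for TCP above.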