Re: [math-fun] Flakey and/or compromised computer arithmetic
"Blinding", thanks! Is there an algebraic theory of blinding that is sufficiently general to cover both crypto blinding and things like "pre-conditioning" in numerical matrix theory?

At 02:16 PM 7/28/2015, Mike Stay wrote:
It's called "blinding" in multiparty secure computation protocols.
On Tue, Jul 28, 2015 at 1:57 PM, Henry Baker <hbaker1@pipeline.com> wrote:
There have been proposals down through the years to incorporate checking logic into ALUs to verify that calculations are correct. I believe IBM made some progress along these lines back in the 1960s.
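One classical scheme in this family is residue checking: carry a small modular residue alongside each operand and verify that the operation preserves it. A minimal Python sketch, assuming mod-3 checking (the function name and modulus choice here are illustrative, not a specific IBM design):

```python
M = 3  # check modulus; mod-3 residue checking is a classic low-cost choice

def checked_add(a, b):
    """Add two integers, then verify the result's residue mod M
    against a residue computed independently from the operands.
    A mismatch signals a fault in the adder."""
    result = a + b
    if result % M != (a % M + b % M) % M:
        raise ArithmeticError("residue check failed: possible ALU fault")
    return result
```

The check costs only a few small modular operations per addition, which is why it was attractive for hardware self-checking.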
More recently, timing, power, RF-emission, and acoustic attacks on encryption have led to computations in which random numbers are inserted early in the calculation, only to drop out later, masking the side-channel signature of the actual calculation being performed.
I don't know the precise name for these "masking numbers & calculations", but they fall short of fully homomorphic encryption, which would allow an *arbitrary* but unknown calculation to be performed.
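The best-known instance of this trick is RSA blinding: a random factor r is folded into the message before the secret-exponent operation and divided out afterwards, so the exponentiation's side channels see m*r^e rather than m. A sketch in Python, using toy textbook-RSA parameters that are far too small to be secure:

```python
import math
import secrets

# Toy RSA parameters for illustration only (insecure sizes).
p, q = 61, 53
n = p * q                     # modulus
phi = (p - 1) * (q - 1)
e = 17                        # public exponent
d = pow(e, -1, phi)           # private exponent (Python 3.8+ modular inverse)

def blinded_rsa_sign(m, d, e, n):
    """Compute m^d mod n via a random blinding factor.

    Since (m * r^e)^d = m^d * r (mod n), multiplying by r^-1
    afterwards recovers m^d; the random r drops out exactly as
    described above."""
    while True:
        r = secrets.randbelow(n)
        if r > 1 and math.gcd(r, n) == 1:
            break
    blinded = (m * pow(r, e, n)) % n   # blind the input
    s_blinded = pow(blinded, d, n)     # secret operation sees only the blinded value
    return (s_blinded * pow(r, -1, n)) % n  # unblind: equals m^d mod n
```

The result is identical to the unblinded signature, but the value fed to the secret-key exponentiation is different on every call.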
There are also worries about compromised hardware that watches for *specific constants* used as operands and, upon encountering them, squirrels away some of the other -- presumably private -- data for illicit use.
Some forms of "masked arithmetic" could also reduce the capabilities of this type of compromised hardware.
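One form such masking could take is additive secret sharing: split each value into random shares so that no single register ever holds the true operand, and a trigger constant never appears. A minimal Python sketch, assuming arithmetic over a prime modulus P (the names and the "watched constant" below are hypothetical):

```python
import secrets

P = 2**61 - 1  # assumed prime modulus for the masked arithmetic

def mask(x):
    """Split x into two additive shares; each share alone is uniformly random."""
    r = secrets.randbelow(P)
    return (r, (x - r) % P)

def masked_add(a_shares, b_shares):
    """Add share-by-share; the true operands never appear, so hardware
    triggered by specific operand constants has nothing to match."""
    return tuple((a + b) % P for a, b in zip(a_shares, b_shares))

def unmask(shares):
    return sum(shares) % P

WATCHED = 0xDEADBEEF  # hypothetical constant the compromised hardware looks for
total = masked_add(mask(WATCHED), mask(123))
```

Addition comes essentially for free here; multiplication of masked values is where schemes like this get complicated, which is part of the gap between cheap masking and fully homomorphic encryption.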
Are there any papers which develop a theory of "masked arithmetic" which would be substantially simpler (& more efficient) than fully homomorphic encryption?
On Tue, Jul 28, 2015 at 3:20 PM, Henry Baker <hbaker1@pipeline.com> wrote:
"Blinding", thanks!
Is there an algebraic theory of blinding that is sufficiently general to cover both crypto blinding and things like "pre-conditioning" in numerical matrix theory?
Blinding and preconditioning serve different purposes, so it would have to be a pretty general idea. The preconditioning sounds very similar to things like randomizing a list before quicksorting it, or randomizing a hashtable's hash algorithm to prevent worst-case behavior:
http://web.archive.org/web/20070202204633/www.cs.rice.edu/~scrosby/hash/Cros...

Random inputs are also useful for certain kinds of proof:
http://www.scholarpedia.org/article/Applications_of_algorithmic_information_...

Theorem [Chaitin-Schwartz, 1978]: For almost all inputs, the probabilistic primality test is error-free, provided it uses a long enough algorithmically random input.

Theorem [Calude-Zimand, 1984]: For almost all inputs, every "decent" Monte Carlo simulation algorithm is error-free, provided it uses a long enough algorithmically random input.

--
Mike Stay - metaweta@gmail.com
http://www.cs.auckland.ac.nz/~mike
http://reperiendi.wordpress.com
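The quicksort analogy above can be made concrete: choosing pivots at random makes adversarial worst-case inputs (e.g. already-sorted lists, the worst case for a naive first-element pivot) exponentially unlikely, the algorithmic analogue of blinding an input before processing it. A minimal Python sketch:

```python
import random

def quicksort(xs):
    """Quicksort with a randomized pivot. The randomization does not
    change the output, only the distribution of running times, defeating
    inputs crafted to trigger quadratic behavior."""
    if len(xs) <= 1:
        return xs
    pivot = random.choice(xs)
    less = [x for x in xs if x < pivot]
    equal = [x for x in xs if x == pivot]
    greater = [x for x in xs if x > pivot]
    return quicksort(less) + equal + quicksort(greater)

adversarial = list(range(1000))  # sorted input: worst case for a fixed first-element pivot
result = quicksort(adversarial)
```

As with blinding, the random choice "drops out" of the answer: the sorted output is identical for every random seed.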
participants (2):
- Henry Baker
- Mike Stay