Here's a real-world problem: you have some sort of IoT (Internet of Things) sensor -- perhaps a camera -- and you want to train an AI/machine-learning model to recognize something the camera sees. But suppose you don't trust the AI/ML developer. So you send them only an encrypted dataset along with the classification labels (yes/no, or perhaps a finite set of possibilities); the labels aren't encrypted, but there is no easy way to recover anything useful about the encrypted dataset from the sequence of labels alone.

So far as I know, homomorphic encryption hasn't matured to the point where an entire training process could run over homomorphically encrypted data. But we're not talking about completely generic computation here -- we're talking about quite limited calculations, just in enormous quantities (on the order of 10^18 operations). Perhaps there are "homomorphic" encryption schemes that do *just enough*, and AI/ML systems that are dumbed down *just enough*, that the two constraints can meet in the middle. After all, AI/ML systems don't seem to care about many kinds of image distortion, so perhaps they could still characterize certain pictures even after encryption?

If such a thing is possible, then the scheme clearly leaks some information -- but that leakage might even be useful.
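To make the "distortion-tolerant" intuition concrete, here is a minimal sketch (not a homomorphic scheme, just a toy stand-in): the data owner "encrypts" each image by applying one secret, fixed pixel permutation as the key, and the untrusted developer trains an off-the-shelf classifier on the scrambled pixels plus the cleartext labels. The permutation scheme, the `load_digits` dataset, and the choice of logistic regression are all illustrative assumptions on my part, not anything from a real construction:

```python
# Toy sketch: a secret fixed pixel permutation as "just enough" encryption,
# with a learner that has no spatial prior (logistic regression on raw pixels)
# standing in for a "dumbed down just enough" AI/ML system.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(seed=0)

# Data owner's side: flatten the images, then scramble every sample with the
# same secret permutation (the "key"). Labels are sent in the clear.
digits = load_digits()
X, y = digits.data, digits.target          # X: (n_samples, 64) flattened 8x8 images
secret_perm = rng.permutation(X.shape[1])  # the secret key: one fixed permutation
X_enc = X[:, secret_perm]                  # scrambled pixels; y stays unencrypted

# Developer's side: sees only X_enc and y, never secret_perm.
X_train, X_test, y_train, y_test = train_test_split(
    X_enc, y, test_size=0.25, random_state=0
)
clf = LogisticRegression(max_iter=5000).fit(X_train, y_train)

# Logistic regression treats features as an unordered set of inputs, so a
# fixed permutation costs it nothing: test accuracy matches what it would
# achieve on the plaintext images.
print("accuracy on permuted data:", clf.score(X_test, y_test))
```

Note that this toy version also illustrates the leakage point: a pixel permutation preserves each image's histogram of pixel values, so an adversary learns more than a proper encryption scheme would reveal. The question is whether some middle ground leaks little enough to be acceptable while preserving enough structure to train on.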