One could utilize the random 2D patterns as a form of encryption: both the sender and receiver have the same complete set of patterns. In fact, since we're simply *accumulating* the results of all of the patterns for a single *still* image, the *sequence* of patterns *doesn't matter*! So, in effect, we're computing our image as a *linear combination* of the random 2D patterns, and transmitting only the coefficients of that linear combination. Suppose we have a hidef BW image of 1920x1080 pixels at 8 bits each; then we have 1920*1080 2D patterns, each covering 1920*1080 pixels at 8 bits apiece, or (1920*1080)^2 bytes ~ 4 TBytes of shared secret.

I'd call this encryption scheme "Plato's Cave", after his discussion of people who live in a cave and try to determine what's going on in the world based solely upon the changing shadows that they see on the walls of the cave. If the outside scene is illuminated by these random 2D light patterns, and if the people inside the cave are also privy to these 2D light patterns, then they should be able to reconstruct the outside scenes from a single-pixel camera inside the cave.

===

Mary Lou Jepsen's company Openwater (www.openwater.cc) is developing an MRI-equivalent device made up of near-infrared displays, sensors and computers that can "see inside" the human body at exquisite resolutions -- i.e., individual neurons. Since infrared light can travel several inches into the body and scatter back out, she can capture a hologram of the scattered image, and then invert it computationally. She claims that Openwater's device will be much cheaper/smaller/faster than those hospital MRI machines that cost $millions and kill people who walk by them carrying metal objects that can be attracted to magnets.

At 10:55 PM 3/29/2018, Henry Baker wrote:
At 09:21 PM 3/29/2018, Keith F. Lynch wrote:
Henry Baker <hbaker1@pipeline.com> wrote:
There's also nothing special about using a *point* of light! One could illuminate the picture to be scanned with a long sequence of random 2D patterns of light, with the single-pixel sensor converting its *average* over the entire scene into a time-varying signal: each successive value is the *average over the entire scene* of the light reflected from the next random pattern, and so on.
For greatest efficiency, the 2D patterns should be Costas arrays.
If I recall correctly, an image from a multiple-pinhole camera can be disambiguated only if the pinholes form a Costas array.
Actually, simple classical orthogonality works just fine.
If you have a rectangular array of pixels, the various arrays created by putting exactly one "1" into an array of all 0's form a basis for the vector space. So a raster scan simply enumerates the basis vectors in a convenient ordering.
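A minimal sketch of this (all names and the tiny 2x2 "image" are illustrative): the single-pixel detector is modeled as an inner product of the scene with a pattern, and the one-hot patterns make each measurement return exactly one pixel, i.e., a raster scan.

```python
# Toy 2x2 "scene"; a real scan would be, e.g., 1920x1080.
image = [[3, 1],
         [4, 1]]

def measure(img, pattern):
    """Model of the single-pixel sensor: sum of the pointwise
    product of the scene and the illumination pattern."""
    return sum(img[r][c] * pattern[r][c]
               for r in range(len(img)) for c in range(len(img[0])))

# One-hot basis patterns: exactly one "1" in an array of all 0's.
# Measuring against pattern (i, j) returns pixel (i, j) directly.
recovered = [[measure(image, [[1 if (r, c) == (i, j) else 0
                               for c in range(2)] for r in range(2)])
              for j in range(2)] for i in range(2)]
print(recovered)  # -> [[3, 1], [4, 1]]
```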
But there are lots of other bases, including those generated by the inverse Fourier transform of these raster scan bases.
So any set of patterns which are linearly independent can be utilized; hence the interest in random arrays, which are linearly independent with very high probability.
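A pure-Python sketch of that claim (the 4-pixel image, the +-1 patterns, and the retry loop are all illustrative choices): measure the flattened image against random patterns, then recover it by solving the resulting linear system. The retry loop handles the (low-probability) case of a linearly dependent draw.

```python
import random
random.seed(0)

n = 4                             # flattened 2x2 image
image = [3.0, 1.0, 4.0, 1.0]

def solve(A, b):
    """Gauss-Jordan elimination with partial pivoting; raises
    ZeroDivisionError if A is singular."""
    A = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(n):
            if r != col and A[r][col]:
                f = A[r][col] / A[col][col]
                A[r] = [a - f * p for a, p in zip(A[r], A[col])]
    return [A[i][n] / A[i][i] for i in range(n)]

for attempt in range(20):
    # Random +-1 patterns: independent with very high probability.
    patterns = [[random.choice([-1.0, 1.0]) for _ in range(n)]
                for _ in range(n)]
    # One single-pixel measurement (inner product) per pattern.
    meas = [sum(p[i] * image[i] for i in range(n)) for p in patterns]
    try:
        recovered = solve(patterns, meas)
        break                     # patterns were independent
    except ZeroDivisionError:
        continue                  # rare: dependent draw, try again

print(recovered)
```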
You could do even better with Singular Value Decomposition of the image (as an approximation to the pixel array), but that would require a priori knowledge of the image(s) to be scanned.
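To illustrate why an SVD basis can beat generic patterns (a hedged sketch: the 3x3 matrix is chosen to be exactly rank 1, and power iteration stands in for a full SVD), the dominant singular triple alone reconstructs such an image, so one coefficient suffices instead of nine:

```python
import math

# Exactly rank 1: the outer product of [1,2,3] and [1,1,1].
A = [[1, 1, 1],
     [2, 2, 2],
     [3, 3, 3]]

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def transpose(M):
    return [list(col) for col in zip(*M)]

# Power iteration on A^T A for the top right singular vector v.
v = [1.0, 0.5, 0.25]              # arbitrary start vector
for _ in range(50):
    w = matvec(transpose(A), matvec(A, v))
    norm = math.sqrt(sum(x * x for x in w))
    v = [x / norm for x in w]

Av = matvec(A, v)
sigma = math.sqrt(sum(x * x for x in Av))   # top singular value
u = [x / sigma for x in Av]                 # top left singular vector

# Rank-1 reconstruction: sigma * u * v^T.
approx = [[sigma * ui * vj for vj in v] for ui in u]
err = max(abs(a - b) for ra, rb in zip(A, approx)
          for a, b in zip(ra, rb))
print(err)  # essentially 0: one coefficient recovers this image
```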
BTW, standard MPEG encoding breaks an image into little 8x8 blocks and approximates the DCT of each block, so if we were concerned only with an 8x8 image, we could approximate it directly (and optically) by quickly running through the 8*8 = 64 special DCT-type patterns.
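A sketch of that last point (the toy block contents are arbitrary; the basis is the standard 2D DCT-II used in JPEG/MPEG): "optically" measuring an 8x8 block against each of the 64 DCT basis patterns yields the DCT coefficients directly, and since the patterns are orthonormal, summing coefficient-times-pattern rebuilds the block.

```python
import math

N = 8

def dct_pattern(u, v):
    """2D DCT-II basis pattern (u, v) on an N x N block."""
    def c(k):
        return math.sqrt(1 / N) if k == 0 else math.sqrt(2 / N)
    return [[c(u) * c(v)
             * math.cos((2 * x + 1) * u * math.pi / (2 * N))
             * math.cos((2 * y + 1) * v * math.pi / (2 * N))
             for y in range(N)] for x in range(N)]

pats = {(u, v): dct_pattern(u, v) for u in range(N) for v in range(N)}
block = [[(x * y) % 7 for y in range(N)] for x in range(N)]  # toy 8x8 block

# One "optical" measurement per pattern: an inner product.
coeffs = {(u, v): sum(block[x][y] * pats[u, v][x][y]
                      for x in range(N) for y in range(N))
          for u in range(N) for v in range(N)}

# Orthonormality means sum(coeff * pattern) reconstructs the block.
recon = [[sum(coeffs[u, v] * pats[u, v][x][y]
              for u in range(N) for v in range(N))
          for y in range(N)] for x in range(N)]
err = max(abs(block[x][y] - recon[x][y])
          for x in range(N) for y in range(N))
print(err)  # ~0 up to floating-point rounding
```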