Re: [math-fun] Structured light (was Single pixel cameras)
[from Bill Ackerman -- Rich]

Date: Fri, 30 Mar 2018 00:53:11 -0400
Subject: Re: [math-fun] Structured light (was Single pixel cameras)
From: wba <wbackerman@gmail.com>

Actually, the "P4" phosphor of old televisions was extremely fast, much faster than the "P1" phosphor of oscilloscope tubes, for example. It was so fast that, if you took a picture with a sufficiently fast shutter, you would see only a single dot. When you watched TV, your eyes were seeing nothing but a moving dot of light.

This is the principle of the "flying spot scanner", which was a "poor man's TV camera" back when actual image orthicons cost over $1000 in 1960 dollars. You arrange for the raster of a TV (with no modulation on the signal) to go through a transparency, or an equivalent arrangement, into a photocell.

There was a cute instance of this for fairly sophisticated electronics hobbyists. One could scan, for example, 35mm slides (the way serious photography enthusiasts took their pictures back then) by wiring up a slide projector with a good-quality photocell where the projection lamp should be, and aiming and focusing the projector at the TV, with its video modulation shut off. Use the slide projector mechanism in the usual way, send the output to your ham radio 450 MHz transmitter or whatever, and bore your ham radio friends with pictures of your vacation, the same way you do when you invite them over.

----

On 03/30/2018 12:21 AM, Keith F. Lynch wrote:

Henry Baker <hbaker1@pipeline.com> wrote:

A classical (1930's-1990's) TV camera *raster scans* an image (which hopefully remains still for the duration of the scan) with a single point spot, so one could -- at least in theory -- dispense with all of the lenses, stage lights, etc., and simply have a single-pixel sensor which receives light that was produced by a laser beam producing a single point scanned over the scene whose image is being captured.

I came up with almost the same idea decades ago. My idea was a "poor man's video camera." A black and white TV would be adjusted to display a bright blank raster. A lens would be used to project this raster onto the scene you want to get a video image of. The signal would be picked up by a photocell.

I realized that the image would look as if the camera were where the projecting lens was, and as if the light source were where the photocell was. You could aim the photocell at the ceiling to get more uniform illumination rather than a spotlight effect. You could have multiple photocells with color filters in front of them to get a color image. You'd need to exclude light from all other sources, as it would tend to wash out the image, reducing the contrast.

I later realized that the image quality would be poor due to the slow phosphor on the TV. But I'm sure it would have worked. I wonder if anyone ever did it.

I'm curious how you'd scan the laser over the scene. Mechanically? For a color image you'd need at least three lasers, one for each primary color.

I've given a lot of thought to what would be possible with what I call structured light: light whose properties (wavelength, polarization, duration, direction, etc.) are carefully controlled, together with detectors that are sensitive to those properties.

It might be possible to replace x-ray machines for subjects that aren't totally opaque to visible light. Generate a very brief (picosecond) flash of light on one side of the subject, and watch for just the *first* light to pass through the subject. That would be light that took a direct path rather than being scattered.
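To put rough numbers on that idea, here is a minimal Python sketch with entirely made-up parameters (the scattering model, the 5 cm thickness, and the 1 ps gate are illustrative assumptions, not a model of any real apparatus): ballistic photons through a subject of thickness d arrive at about t = d/c, scattered photons arrive later, so keeping only the earliest arrivals selects mostly direct-path light. The same arithmetic gives the gate delay for the reflection variant discussed next, where a pulse returning from depth d arrives after 2d/c.

    import random

    C = 3.0e8  # speed of light, m/s

    def simulate_arrivals(thickness_m, n_photons=100_000, scatter_frac=0.98):
        """Crude toy model: a small fraction of photons go straight through
        (ballistic) and arrive at t = d/c; the rest wander and arrive later."""
        t_ballistic = thickness_m / C
        times = []
        for _ in range(n_photons):
            if random.random() < scatter_frac:
                # scattered photons take a longer, random path (made-up distribution)
                times.append(t_ballistic * (1.0 + 20.0 * random.expovariate(1.0)))
            else:
                times.append(t_ballistic)
        return times

    times = simulate_arrivals(thickness_m=0.05)      # 5 cm of translucent material
    gate = min(times) + 1e-12                        # keep only the first ~1 ps of arrivals
    early = [t for t in times if t <= gate]
    print(f"kept {len(early)} of {len(times)} photons in the early gate")

    # Reflection variant (e.g. the avalanche example below): round trip from depth d
    depth_m = 1.5
    print(f"gate delay for {depth_m} m deep: {2 * depth_m / C * 1e9:.1f} ns")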
A variation of that is to take the difference between the brightness, from each direction, of light that's reflected after slightly different periods of time. For instance, if someone is buried in an avalanche, it should be possible to look successively deeper through the snow, and notice a difference when the time allowed is just barely long enough for the pulse of light to reach the subject before being reflected back. Since the pulse would be very brief and confined to a very narrow band of wavelengths, background light wouldn't interfere significantly, unless you were looking through something that was almost opaque.

Another idea that I had years ago was that it would be possible to reconstruct what's on a video screen if you can see a surface that's illuminated by the screen. Someone eventually did it; see https://dl.acm.org/citation.cfm?id=830537

Then there's the laser microphone, a way of listening to whatever you can illuminate with a laser. And you can read all the data being transmitted over a modem if you have a good view of the modem lights -- or even of a surface that's illuminated by them.

And there's a way of seeing around corners using lasers: https://www.youtube.com/watch?v=JWDocXPy-iQ

With some effort, it ought to be possible to play a CD or DVD without taking it out of its box, even without taking its box down from the shelf. Or to read closed books on a shelf.

Party trick: Consider a set of six wavelengths: red wavelengths R1 and R2 which are indistinguishable to the eye, green wavelengths G1 and G2 which are indistinguishable to the eye, and blue wavelengths B1 and B2 which are indistinguishable to the eye. Consider a pigment or dye which reflects R1, G1, and B1, but not R2, G2, or B2. By switching the LEDs illuminating a room between R1 and R2, between G1 and G2, and between B1 and B2, you can make a surface covered with that pigment or dye any color you like. Wearing what looks like an ordinary cotton t-shirt which keeps changing color will make you the hit of the party.

Ultimately, if you can capture and reproduce the instantaneous amplitude of light with a spatial resolution small compared to the wavelength of light and a time resolution short compared to its period, it should be possible to re-create any visual experience whatsoever, including a "TV screen" you can use a telescope, spectroscope, or microscope on, or an invisibility cloak. Of course this would require processors millions of times faster than today's. Good luck hiding the waste heat.

Of course, there's nothing special about the raster scanning pattern; indeed, *any* pattern which covers the image will work, so long as the reconstruction laser follows the same pattern. I believe that some of the alternatives to 1920's and 1930's TV systems used other non-raster scanning schemes.

Yes. Mechanical systems work better with smooth curves than with the sawtooth waveform used with electronic TVs.

But wait -- there's more! There's also nothing special about using a *point* of light! One could illuminate the picture to be scanned with a long sequence of random 2D patterns of light. The single-pixel sensor would then convert the light reflected from each pattern, *averaged over the entire scene*, into one value of a time-varying signal; the next value would be the scene-wide average of the light reflected from the next random pattern, and so on.

For greatest efficiency, the 2D patterns should be Costas arrays. If I recall correctly, an image from a multiple-pinhole camera can be disambiguated only if the pinholes form a Costas array.
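A minimal numerical sketch of that pattern-based single-pixel idea (toy sizes, plain random on/off patterns, and a made-up noise level, chosen only for illustration; it does not attempt the Costas-array refinement mentioned above): each measurement is the dot product of one illumination pattern with the scene's reflectance, and with as many independent patterns as pixels the scene can be recovered by solving the resulting linear system. The flying-spot raster scan is the special case where each pattern lights a single pixel.

    import numpy as np

    rng = np.random.default_rng(0)

    # A tiny 8x8 "scene" (reflectance values in [0, 1]), flattened to a 64-vector.
    n = 8 * 8
    scene = rng.random(n)

    # Each row of P is one illumination pattern (random on/off pixels here).
    # A single-pixel measurement is just pattern . scene, plus a little sensor noise.
    P = rng.integers(0, 2, size=(n, n)).astype(float)
    measurements = P @ scene + rng.normal(0.0, 1e-3, size=n)

    # With n independent patterns, recover the scene by solving the linear system.
    recovered, *_ = np.linalg.lstsq(P, measurements, rcond=None)
    print("max reconstruction error:", np.abs(recovered - scene).max())

    # Flying-spot / raster scanning is the special case P = np.eye(n):
    # each "pattern" lights exactly one point, and the measurement *is* that pixel.

With fewer patterns than pixels, recovery is still possible for sparse or compressible scenes; that is roughly the compressive-sensing regime in which practical single-pixel cameras are usually described.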
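For context on that last claim: a Costas array of order n is an n-by-n permutation pattern of dots in which all displacement vectors between pairs of dots are distinct, which is what makes overlapping echoes or pinhole shadows unambiguous. Below is a small sketch of the standard exponential Welch construction (a prime p with primitive root g gives a Costas array of order p-1), together with a brute-force check of the defining property; the particular choice p = 11, g = 2 is just a small example.

    def is_costas(perm):
        """perm[i] is the row of the dot in column i.  A Costas array has all
        displacement vectors (dc, dr) between pairs of dots distinct."""
        n = len(perm)
        seen = set()
        for i in range(n):
            for j in range(i + 1, n):
                v = (j - i, perm[j] - perm[i])
                if v in seen:
                    return False
                seen.add(v)
        return True

    def welch_costas(p, g):
        """Exponential Welch construction: for prime p and primitive root g,
        the dots (i, g**i mod p), i = 1..p-1, form a Costas array of order p-1."""
        return [pow(g, i, p) for i in range(1, p)]

    perm = welch_costas(11, 2)      # 2 is a primitive root mod 11
    print(perm)                     # [2, 4, 8, 5, 10, 9, 7, 3, 6, 1]
    print(is_costas(perm))          # True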
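The party trick earlier in the message can also be checked with a little arithmetic (all reflectance values in this sketch are invented for illustration): an ordinary white surface returns the same R, G, B totals under either LED set, while the special pigment, which reflects only the "1" wavelengths, changes color when the sets are swapped.

    # Toy model of the party trick.  Each light source is a mix of six narrow
    # lines (R1, R2, G1, G2, B1, B2); the eye only sees the R, G, B totals,
    # so swapping R1 for R2 (etc.) is invisible on a normal white surface.
    LINES = ["R1", "R2", "G1", "G2", "B1", "B2"]

    # Reflectance per line (made-up values): the special pigment reflects only
    # the "1" lines; ordinary white paint reflects everything.
    pigment = {"R1": 0.9, "R2": 0.0, "G1": 0.9, "G2": 0.0, "B1": 0.9, "B2": 0.0}
    white   = {line: 0.9 for line in LINES}

    def perceived_rgb(light, surface):
        """Collapse the six lines into the eye's R, G, B channels."""
        rgb = {"R": 0.0, "G": 0.0, "B": 0.0}
        for line in LINES:
            rgb[line[0]] += light[line] * surface[line]
        return rgb

    # Two room lightings that look identical on white surfaces:
    lighting_a = {"R1": 1.0, "R2": 0.0, "G1": 0.0, "G2": 1.0, "B1": 0.0, "B2": 1.0}
    lighting_b = {"R1": 0.0, "R2": 1.0, "G1": 1.0, "G2": 0.0, "B1": 1.0, "B2": 0.0}

    print("white under A:", perceived_rgb(lighting_a, white))    # same as under B
    print("white under B:", perceived_rgb(lighting_b, white))    # same as under A
    print("shirt under A:", perceived_rgb(lighting_a, pigment))  # looks red
    print("shirt under B:", perceived_rgb(lighting_b, pigment))  # looks cyan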