Re: [math-fun] Science of electronic cameras
Another major issue is the physical size of the sensor. A given area of silicon has a limited capacity to hold the electrons generated by photon sensing. With small sensors, this maximum capacity limits the dynamic range of the sensor, leading to larger relative noise. This is why (especially in the video world) the size of the sensor is an important consideration.

The same size sensor divided up into smaller pieces increases resolution, but also increases the noise in each pixel, which is determined by the pixel's electron capacity. It is actually worse than that, since the overhead surrounding each pixel further reduces the photon efficiency and the pixel capacity. This is somewhat offset by lenslet arrays that focus light away from the borders and onto the sensor, but lenslets do not increase the pixel electron capacity. There are also diffraction effects that come into play when the pixels get sufficiently small. There is a reason for the full-frame imagers in the high-end Canon and Nikon cameras.
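To put rough numbers on the full-well argument, here is a minimal sketch; the electron capacities and read-noise figures below are illustrative assumptions, not measurements of any particular sensor:

import math

def dynamic_range_db(full_well_e, read_noise_e):
    # Largest recordable signal over the noise floor, expressed in dB.
    return 20 * math.log10(full_well_e / read_noise_e)

# A large pixel might hold ~40,000 electrons; quarter the pixel area
# and the well shrinks roughly in proportion, while the read noise per
# pixel shrinks much less -- so dynamic range drops.
print(round(dynamic_range_db(40_000, 8), 1))   # ~74.0 dB
print(round(dynamic_range_db(10_000, 6), 1))   # ~64.4 dB

On Nov 13, 2008, at 10:13 AM, Henry Baker wrote: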
There are (at least) two things going on here, one classical, one quantum.
The classical issue is standard Shannon sampling theory. There is a tradeoff between allocating bits for additional samples vs. allocating bits _per_ sample. E.g., one can use "halftone"/dithering techniques, spending additional samples to make up for poor dynamic range (SNR, or bits per sample) -- one can then recover the dynamic range with a low-pass filter that throws away the spurious higher frequencies induced by the halftone/dithering process.
However, halftone/dithering techniques _are not an "efficient" mechanism for simulating a higher number of bits per pixel_. Take a simple binary half-tone image at double the resolution in both the X and Y axes. Each "superpixel" is a 2x2 array of bits, which can represent tones only by its density: 0-4 set bits, i.e., five distinguishable levels. But if we were to use those same 4 bits to represent tones directly (using binary coding), we could represent tones in the range 0-15 -- sixteen levels, a much larger dynamic range. So if we care about dynamic range, we're better off with fewer pixels which are much, much deeper (more SNR).
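A small numerical sketch of the last two paragraphs (Python; the 2x2 ordered-dither matrix is an assumed, standard choice): it counts the representable levels each way, then dithers a smooth ramp to one bit at doubled resolution and box-filters it back down.

import numpy as np

# Spending 4 bits as a 2x2 binary halftone superpixel: after
# low-pass filtering, only the count of set bits survives -> 5 levels.
halftone_levels = 2 * 2 + 1                       # 0..4 set bits
# Spending the same 4 bits as one binary-coded pixel -> 16 levels.
coded_levels = 2 ** 4                             # 0..15
print(halftone_levels, coded_levels)              # 5 16

# Dither a smooth ramp to 1 bit at double resolution, then recover
# tones with a 2x2 box filter (a crude low-pass).
ramp = np.linspace(0.0, 1.0, 64)                  # target tones in [0, 1]
img = np.repeat(np.tile(ramp, (2, 1)), 2, axis=1) # 2x128: one tone per 2x2 block
thresholds = (np.array([[0, 2], [3, 1]]) + 0.5) / 4.0
bits = (img > np.tile(thresholds, (1, 64))).astype(float)
recovered = bits.reshape(2, 64, 2).mean(axis=(0, 2))
print(sorted({float(v) for v in recovered}))      # only 0.0, 0.25, 0.5, 0.75, 1.0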
The quantum issue is simply the quantization noise of having to deal with discrete photons. The human eye is sensitive enough to distinguish individual photons, and certain "cooled" cameras can also get to this level of SNR. But even the run-of-the-mill digital camera, which may require tens of photons per quantization level in the sensor, has some amount of noise resulting from having to average over too few quanta. When the pixels are smaller, the same number of quanta are distributed over more of them, leading to more noise in each pixel.

Yes, assuming that the quantum "capture rate" doesn't change when going to smaller pixels, there should be _no loss_ and _no difference_, because the counts in the various small pixels can always be added back together to emulate the count that a larger pixel would have seen. In reality, however, the capture percentage goes down slightly at higher resolution, but this effect is small relative to the classical issue discussed above.
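A minimal Monte Carlo sketch of that point (the photon counts are assumed, and the capture rate is taken as identical for both pixel sizes): each small pixel is noisier, but summing the four recovers the large pixel's statistics.

import numpy as np

rng = np.random.default_rng(0)
trials = 100_000
mean_photons = 400                       # assumed flux onto one large pixel

# One large pixel: Poisson shot noise gives SNR = sqrt(400) = 20.
large = rng.poisson(mean_photons, trials)
# The same area split into four small pixels: Poisson(100) each.
small = rng.poisson(mean_photons / 4, (trials, 4))

def snr(x):
    return x.mean() / x.std()

print(round(snr(large), 1))              # ~20.0
print(round(snr(small[:, 0]), 1))        # ~10.0: each small pixel is noisier
print(round(snr(small.sum(axis=1)), 1))  # ~20.0: binning recovers the large pixel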
There is one other issue regarding digital camera sensors today -- handling color. The typical digital sensor utilizes a 2x2 "Bayer" (not aspirin!) pattern of color sensors (or one of its obvious permutations):
GR
BG
Here, R is a red sensor, B is a blue sensor, and G is a green sensor. This type of sensor _immediately_ throws away a good fraction of the incoming photons, because each sensor responds only to its own color: a blue photon landing on the red sensor will simply not be seen. The rationale for two green G sensors out of four is that the human eye is most sensitive to green, and therefore a sensor which emulates the sensitivity of the human eye should try to capture more green photons. One recent alternative to the Bayer pattern utilizes two _different_ G color sensors, G1 and G2, which respond to different shades of green; while losing somewhat in sensitivity, such a sensor can improve slightly in color fidelity.

Modern cameras utilize clever 2D filtering techniques to try to extract a slightly higher amount of spatial resolution (at least for the greyscale/luminance content of the image), and such clever techniques are required to _re-register_ the colors, since the different colors are sampled at slightly different spatial locations. Without such registration, you can get color halos around every object.
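As a toy sketch of that sampling structure (the mask layout below is assumed to match the GR/BG tile above), here is what the sensor actually records for a flat color patch; a real camera would follow this with the interpolation/registration filtering just described.

import numpy as np

h, w = 6, 6
scene = np.ones((h, w, 3)) * np.array([0.8, 0.5, 0.2])  # flat orange patch (R, G, B)

rows, cols = np.mgrid[0:h, 0:w]
g_mask = (rows % 2) == (cols % 2)               # the two G sites of each GR/BG tile
r_mask = (rows % 2 == 0) & (cols % 2 == 1)      # the R site
b_mask = (rows % 2 == 1) & (cols % 2 == 0)      # the B site

# Each photosite records exactly one channel; photons of the other two
# colors arriving at that site are filtered out and lost.
mosaic = (r_mask * scene[..., 0] +
          g_mask * scene[..., 1] +
          b_mask * scene[..., 2])

print(mosaic[:2, :2])   # one tile: [[0.5, 0.8], [0.2, 0.5]] = [[G, R], [B, G]]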
The major alternative to Bayer pattern sensors is a sensor with _stacked_ colors--e.g., Foveon. Since longer wavelengths penetrate further into the surface of the silicon chip, the red sensor can be _underneath_ the green sensor, which is _underneath_ the blue sensor. In this way, a photon is much more likely to get recorded, because no matter what its color, it is always falling on fertile ground. Unfortunately, such sensors have not yet been able to compete in cost and/or quality with the more traditional Bayer sensors.
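A rough Beer-Lambert sketch of why stacking works; the absorption lengths below are approximate textbook values for silicon, used only for illustration.

import math

# Shorter wavelengths are absorbed much nearer the silicon surface.
absorption_length_um = {"blue 450nm": 0.4, "green 550nm": 1.5, "red 650nm": 3.5}

for color, length in absorption_length_um.items():
    # Fraction of photons absorbed within the top 0.5 um of silicon.
    frac = 1.0 - math.exp(-0.5 / length)
    print(f"{color}: {frac:.0%} absorbed in the first 0.5 um")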
At 01:45 AM 11/13/2008, Dan Asimov wrote:
Please allow me to ask this somewhat off-topic question:
The NY Times has an article on why smaller-pixel cameras can be worse than larger-pixel ones. It's at
< http://www.nytimes.com/2008/11/13/technology/personaltech/13basics.html >.
The following passage strikes me as containing a number of dubious assertions.
<< Photons (light particles) pass through a camera's lens and are captured by the cups in the tray. Each cup is either red, green or blue (the three colors that are the building blocks for all other colors). The more photons a cup catches, the brighter that cup's color. Totally empty cups record black; totally full cups record white. Larger pixels (cups, remember), with larger surface areas, capture more photons per second, which in electronics-speak means a stronger signal -- and in camera-speak means less noise and cleaner colors. Bigger pixels can also capture more photons per exposure without filling up, so larger pixels hold on to their color longer and don't go white as quickly as smaller pixels. >>
It seems to me that any advantage conferred by larger pixels' ability to capture more photons per second is exactly nullified by their having to do so simply by virtue of their size. If the same fraction of the image plane could be covered by pixels having lower resolution than the human retina, it seems to me that making them smaller would be an advantage. If smaller pixels are not an advantage, I'd expect this to be because they require that a higher fraction of the image plane be covered by interstitial material (which does not sense the image); at some point this would become noticeable. But I don't really know about electronic cameras. Are there any technoids out there who can evaluate the Times's argument?
--Dan
November 13, 2008 Basics
Pixels Are Like Cupcakes. Let Me Explain.
By RUSS JUSKALIAN
IT happens to all of us: the moment when one finds out that more megapixels and better photographs aren’t always the same thing. To be disabused of the Megapixel Myth -- this decade’s analog of the Megahertz Myth -- can lead to an existential buyer’s crisis in miniature.
Disbelief, at first, gives way to a sort of embarrassing self-questioning: You mean, 15 megapixels isn’t three times better than 5 megapixels? This year’s model isn’t better than last year’s? I spent all that money upgrading -- for nothing?
The panicky consumer is then faced with the choice of dumping digital electronics and becoming a Luddite, or learning about camera technology and taking control of purchasing decisions.
Upon pursuing this latter path, one soon realizes that all is not lost. Newer generations of digital cameras and camcorders, which almost always have more megapixels or higher resolutions, still tend to produce great output.
But there is more to a digital camera’s sensor than resolution. Understanding some of the basics may just convince you that, at least this year, buying last year’s model is a smart move.
Focusing on the Right Numbers
In a sea of specifications, one of the most overlooked is the size, not the number, of pixels on a camera’s sensor. Bigger sensors usually mean bigger pixels, which provides some advantages when it comes to making an image.
The mechanics of this can be understood by thinking of a digital camera sensor as a flat sheet of material pocked with millions (hence “mega”) of cylindrical, cuplike pixels. In other words, picture the digital sensor as a tiny cupcake tin.
Photons (light particles) pass through a camera’s lens and are captured by the cups in the tray. Each cup is either red, green or blue (the three colors that are the building blocks for all other colors). The more photons a cup catches, the brighter that cup’s color. Totally empty cups record black; totally full cups record white.
Larger pixels (cups, remember), with larger surface areas, capture more photons per second, which in electronics-speak means a stronger signal -- and in camera-speak means less noise and cleaner colors. Bigger pixels can also capture more photons per exposure without filling up, so larger pixels hold on to their color longer and don’t go white as quickly as smaller pixels.
Since sensor sizes in compact cameras haven’t gotten much bigger, but their megapixel count has, increasing the number of pixels can be accomplished only by using smaller pixels. For this reason, it’s often not worth paying extra for the newest megapixel champion, says Phil Askey, editor of dpreview.com.
“Once you get beyond seven or eight megapixels in a compact point-and-shoot camera, the small lenses are struggling to keep up,” Mr. Askey said. “And you’re cramming so many pixels in such a small sensor that noise is becoming a real issue. We started worrying about this back in 2006, but it’s only gotten worse.”
The same thing is true for digital single-lens reflex cameras. In fact, recent tests conducted at dpreview.com concluded that the new 15-megapixel Canon EOS 50D ($1,400) “shows visibly more chroma and luminance noise,” and slightly less dynamic range, than the older 10-megapixel Canon EOS 40D ($920).
As a way to visualize just how densely packed sensors have become, Mr. Askey’s Web site provides pixel density and sensor-size data on more than 1,200 digital cameras. And while Mr. Askey cautions that buyers shouldn’t make decisions based on a single number, those data can help put a purchase in perspective alongside more comprehensive reviews of image quality.
So if you’re in the market for a “pro-sumer” D.S.L.R. (a consumer camera with the quality and features of a professional model) that minimizes noise issues, take a look at the Canon Rebel XSi ($600), Canon 40D ($920), Nikon D80 ($640), and Nikon D90 ($1,000).
Tapping Your Inner Pro
Another advantage of a larger sensor is the ability to produce images where only a relatively small portion of the subject is in focus. Completely understanding how this works may require a degree in physics, but in general, cameras with small sensors tend to produce images where almost everything appears to be in focus.
This is the main reason that, in normal shooting situations, images produced by small point-and-shoot cameras and D.S.L.R.’s look so distinct. In digital video, the result of using small sensors is sometimes referred to as the “video look.”
The bad news is that you’ll probably need to use a D.S.L.R. to produce a really shallow depth of field. The good news is you can achieve that professional look with the cheapest of entry-level D.S.L.R.’s, which are also relatively small.
(The only compact point-and-shoot options are the Sigma DP-1 for $700, which The New York Times consumer technology columnist David Pogue praised for its image quality but panned on all other counts; the new Panasonic DMC-G1, which Mr. Pogue had similarly mixed feelings about; and the newly announced, but untested, Sigma DP-2.)
If you’re looking for a smallish camera that can achieve shallow-depth-of-field images, good deals include the Canon Rebel XS (around $510 with lens), Nikon D40/D40X (around $450 with lens), and Olympus E-420 (around $460 with lens).
Skill Still Matters
Though some experts say they believe that improvement has slowed in digital imaging, it’s always wise to remember that with technology, today’s rules are tomorrow’s anachronisms.
But no matter when the next advance in digital imaging comes, the old saying that the photographer is the most important part of a good photo will still hold true.
Just consider Alex Majoli, an award-winning Magnum photographer, who is known for shooting images of war and other dramatic scenes for publications like National Geographic and Newsweek -- with compact point-and-shoot digital cameras.
Or consider the more critical words of Ansel Adams.
“The sheer ease with which we can produce a superficial image,” Mr. Adams once wrote, “often leads to creative disaster.”
Participants (2): Henry Baker, Tom Knight