Hi Patrick,

Ok, some basics. Since we humans have been pre-conditioned to see in a particular spectral band of the E-M spectrum, i.e., everything we see with our eyes has been illuminated by the light from a G2-type star, the usual approach is to color balance images to reach a "pleasing" result, as if the subject were illuminated by a G2-type source. A G2 star should be "white". To reach this balance, the best thing is to take a series of images of a G2 star through your filters and then determine the ratio of exposures for each filter that leads to a "white" image of the G2 star. Using the inverse of these ratios, referenced to one of them (usually the red filter), gives the correct exposure time for each filter.

But wait, there's more. There must be sufficient signal from the object being imaged to attain an image that isn't overrun by noise. This means the filter that produces the least signal must be exposed long enough to attain the detail needed for the final composite image. Many use stacking and dithering to build up this signal and to average out the noise in the individual images. But be careful: stacking only works if there is sufficient signal to add that will rise above the noise. Remember, when frames are summed the signal increases in proportion to the number of images, but the noise only increases as the square root of the number of images. Still, you gotta have signal!

So where does this lead in the real world? Looking at the response curves of the combined CCD and RGB filters, there will be more signal in red than in green, and more in green than in blue. The ratios go back to the G2 images, and you must do the math to work out the precise numbers, or use AIP4WIN, which has a nifty calculator.

Getting back to our eyes, it turns out they are much less sensitive to resolution in color than in illumination. This leads to LRGB imaging, in which an object is exposed at as high a resolution as possible (1x1 binning) in "luminance" (no filter) and at lower resolution (2x2 binning) through the RGB filters. Bringing all of the frames to the same resolution (1x1) and then combining them produces an image that is pleasing to our eyes. The 2x2 binning also helps with the exposure times through the filters.

Narrow-band filters are handled much the same way, except one must "assign" a particular visual color to each filter passband. Think of radio telescope images: in the composite image, each passband has been assigned to a particular visual color.

In the final analysis, "beauty is in the eye of the beholder". Pretty astronomical images are really art, not science, so working with the individual images to reach a pleasing composite is really the work of the artist. Consider the difference between observing the Great Red Spot on Jupiter and the images taken by spacecraft. Jupiter is illuminated equally by our Sun in both cases, and the reflected light has come through empty space with only a small amount of attenuation by our atmosphere, yet the Red Spot is much better defined after the images have been enhanced by the "artist" (NASA).

Jerry Foote
ScopeCraft, Inc.
4175 E. Red Cliffs Dr.
Kanab, UT 84741
435-216-5450
jfoote@scopecraft.com
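[A minimal sketch of the G2 ratio arithmetic Jerry describes, assuming made-up count rates for a G2 star through each filter; the numbers and variable names are illustrative only, and AIP4WIN's calculator is the real tool.]

```python
# Hypothetical measured count rates (ADU per second) of a G2 star through each filter.
g2_rates = {"R": 1200.0, "G": 950.0, "B": 600.0}

# To make the star come out "white", each filter needs an exposure proportional
# to the inverse of its count rate. Reference the multipliers to the red filter.
multipliers = {f: g2_rates["R"] / rate for f, rate in g2_rates.items()}

red_exposure = 120.0  # seconds, chosen by the imager (assumed value)
for f, m in multipliers.items():
    print(f"{f}: {m:.2f} x red = {red_exposure * m:.0f} s")
```

With these placeholder rates the green frames would run about 1.26x and the blue about 2x the red exposure.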
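[A toy simulation of the stacking point, with invented per-frame signal and noise levels, just to show the square-root behavior; nothing here comes from a real camera.]

```python
import numpy as np

rng = np.random.default_rng(0)
signal_per_frame = 5.0   # object signal in one frame (hypothetical units)
noise_per_frame = 10.0   # RMS noise per frame (hypothetical)

for n in (1, 4, 16, 64):
    # Simulate n frames of the same pixel, sum them, and measure the SNR.
    frames = signal_per_frame + rng.normal(0.0, noise_per_frame, size=(n, 10000))
    stack = frames.sum(axis=0)
    snr = stack.mean() / stack.std()
    theory = signal_per_frame * np.sqrt(n) / noise_per_frame
    print(f"N={n:3d}  measured SNR ~ {snr:5.2f}  (theory: {theory:.2f})")
```

The summed signal grows as N while the noise grows only as the square root of N, so the SNR improves as sqrt(N), provided there is real signal in each frame to begin with.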
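[A rough sketch of the LRGB combine on synthetic arrays; the pixel-replication upscaling and the simple luminance swap are my own simplifications of what imaging software actually does.]

```python
import numpy as np

L = np.random.rand(1024, 1024)            # unbinned (1x1) luminance, full resolution
rgb_binned = np.random.rand(3, 512, 512)  # R, G, B frames shot binned 2x2

# Bring the 2x2-binned color up to the luminance scale by pixel replication
# (real software would use a smoother interpolation).
rgb_full = rgb_binned.repeat(2, axis=1).repeat(2, axis=2)

# One simple combine: keep the color ratios from RGB but take the brightness
# from the sharper luminance frame.
intensity = rgb_full.mean(axis=0) + 1e-6  # avoid divide-by-zero
lrgb = rgb_full / intensity * L           # shape (3, 1024, 1024)
print(lrgb.shape)
```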
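[A small sketch of assigning narrow-band passbands to visual colors; the SII-to-red, H-alpha-to-green, OIII-to-blue mapping is the common "Hubble palette" choice, but as Jerry says, any assignment is the artist's call.]

```python
import numpy as np

# Placeholder narrow-band frames (already registered and scaled).
sii = np.random.rand(512, 512)
h_alpha = np.random.rand(512, 512)
oiii = np.random.rand(512, 512)

# Assign each passband to a visual color channel to build the false-color image.
false_color = np.stack([sii, h_alpha, oiii], axis=-1)  # (H, W, 3) RGB
print(false_color.shape)
```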
Thanks Jerry. Too bad you don't live up this way or you could teach an imaging class at SPOC.

Looking at what you said (and have said), plus seeing the work others have put into "pretty pictures", I'm feeling a need to paraphrase an old ad slogan: "When you care enough to do your very best." I guess when it comes down to it I've something of a choice: either I care enough to do my very best at taking data, or I care enough to make pretty pictures. Data wins. Yeah, I'll still probably dabble in the pretty picture arena from time to time, but I just don't care enough to do it the very best I can. Just too much work for something that really won't contribute anything.

Well, speaking of data, my latest run just finished so I'd best get to processing.

Clear skies!

patrick

p.s. I'm still looking to fly down your way one day soon.