Much has been said and written about pixel size. Some advise 2-3 arcsec per pixel for deep-sky imaging, while others advocate <0.5 arcsec per pixel. These prescriptions are generalizations based on two important factors: 1) the inherent image quality and 2) the sampling of the inherent image.
A telescope produces a maximum resolution that is dependent on the aperture, type and quality of the optics. Additionally, the atmospheric seeing will limit achievable resolution. And for time exposures, the quality of the tracking will place further limits on the maximum achievable resolution. These resolution components approximately combine as a quadratic sum to yield the inherent image resolution:
image resolution = sqrt(scope^2 + tracking^2 + seeing^2)
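As a quick illustration of the quadratic sum (using hypothetical component values, not measurements), the formula can be evaluated in a few lines of Python. Note that the result lands very close to the seeing alone, which is the point made below:

    import math

    # Hypothetical component values in arcsec (FWHM); illustrative only.
    scope = 1.5      # optical blur of the telescope
    tracking = 1.0   # blur from tracking/guiding errors
    seeing = 3.0     # atmospheric seeing

    image_resolution = math.sqrt(scope**2 + tracking**2 + seeing**2)
    print(round(image_resolution, 2))   # 3.5 -- close to the 3.0 arcsec seeing alone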
Astronomical CCD resolution is commonly measured in FWHM (Full Width Half Max), defined as the diameter of a star's image measured where the photon intensity falls to half of the maximum intensity (which occurs at the center of the star image). For many scopes and sites, seeing is the dominant factor, and because of the quadratic sum the image FWHM often approximates the seeing alone. This is particularly true for medium to long FL scopes when seeing is >3 arcsec (short focus scopes often have optical degradations on par with or greater than the seeing). For most non-professional sites, seeing averages about 3 arcsec, though typically there will be occasions when seeing is <2 arcsec and other times it will deteriorate to >4 arcsec.
Most amateurs with a moderate to long focal length scope, a good mount, and a typical site should normally expect to achieve an inherent resolution between 2.0 arcsec and 3.5 arcsec for time-exposure deep space images.
A CCD “samples” the inherent image by collecting incoming photons into a sample grid. The grid is made up of little squares (or rectangles) called pixels. When a photon is collected by a pixel, information about the exact location of that photon is lost – all that can be known is the location of the pixel.
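A toy sketch (not the author's software) makes this concrete: hypothetical photon positions are reduced to pixel indices, and the sub-pixel location is discarded:

    import numpy as np

    # Toy illustration: photons land at exact (x, y) positions, but the CCD
    # records only the index of the pixel that collected each one.
    rng = np.random.default_rng(0)
    photons = rng.uniform(0, 10, size=(5, 2))        # positions in pixel units

    pixel_indices = np.floor(photons).astype(int)    # all the CCD can report
    for (x, y), (px, py) in zip(photons, pixel_indices):
        print(f"photon at ({x:.2f}, {y:.2f}) -> pixel ({px}, {py})")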
It is useful for the pixel size to be small enough to preserve the resolution of the inherent image. If the pixel size is larger than the resolution of the inherent image, then the CCD image resolution will be inferior to that of another CCD with smaller pixels. On the other hand, if the pixels are made significantly smaller than the resolution of the inherent image, those smaller pixels do not improve the image resolution.
At first glance it might seem obvious that the best approach is to use the smallest pixels possible. But there are some unfortunate penalties for using pixels that are too small. For a given camera there is usually a trade-off between angular pixel size and image field size: using a focal extender to reduce the angular coverage of the pixels will also reduce the total sky area that the CCD captures. Also, each pixel generates a small amount of electronic noise, so using more pixels than necessary to capture an object results in unnecessary noise. Thus it is important to balance the need for fine sampling against the needs of field coverage and low noise.
The early CCDs had low pixel counts, and thus field coverage was a major issue. That's why prescriptions of 2-3 arcsec per pixel were common (there was also much less experience in coaxing high performance from scopes). But today's mega-pixel CCDs are large detectors that can exceed the size of the scope's corrected field, and they are large enough that their coverage often exceeds the extent of most astronomical targets. Thus the field coverage issue is generally less important than it was in the past.
Also, early CCDs had higher electronic noise than today’s CCDs. The typical modern CCD has electronic noise that is usually insignificant compared to the noise of sky-glow or the inherent noise of the object itself. So for modern CCDs the penalties of fine sampling are not large.
When speaking of optimal sampling, it is necessary to ask what is being optimized. There are at least four distinct issues:
1) Resolution
2) Field of view
3) Signal to noise of dim objects
4) Signal to noise of bright objects
Of course, it can be desirable to optimize several or all of these. Unfortunately, each issue has a different optimal sampling criterion, so it is useful to know and evaluate each issue of importance when choosing an optimal pixel size. Though in practice, there are often only a few reasonable pixel sizes afforded by existing equipment.
Optimizing for resolution is discussed below.
Optimizing for field of view is fairly straightforward. Simply sample at whatever scale is necessary to obtain the desired field. You can use the CCD_topics plate scale calculator to compute the necessary pixel size and focal length for any given field of view.
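For readers without the calculator handy, the standard small-angle relation gives the same kind of numbers. The function names and example values below are illustrative, not taken from the calculator:

    def plate_scale(pixel_um, focal_length_mm):
        """Angular pixel size in arcsec/pixel (small-angle approximation)."""
        return 206.265 * pixel_um / focal_length_mm

    def field_of_view_arcmin(pixel_um, focal_length_mm, n_pixels):
        """Field of view along one detector axis, in arcminutes."""
        return plate_scale(pixel_um, focal_length_mm) * n_pixels / 60.0

    # Hypothetical example: 9 um pixels on a 1500 mm focal length scope.
    print(plate_scale(9, 1500))                  # ~1.24 arcsec/pixel
    print(field_of_view_arcmin(9, 1500, 1530))   # ~31.6 arcmin across 1530 pixels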
The two signal to noise (S/N) issues are complex and will be discussed at length here in the not too distant future. Suffice it to say for now:
Optimizing S/N for dim objects benefits from using a certain number of pixels per object and this number is generally below the minimum number of pixels needed to retain intrinsic resolution. Thus limiting magnitude may benefit from “under-sampling”.
Optimizing S/N for bright objects usually benefits from a certain number of pixels per object and this number is generally above the minimum number of pixels needed to retain intrinsic resolution. Thus “over-sampling” may improve photometric precision on bright objects.
There is a long-standing controversy in amateur circles as to the minimum sample rate that preserves resolution. The Nyquist criterion of 2 is often cited and applied as critical sample = 2*FWHM. But that Nyquist criterion refers to the minimum sample rate necessary to capture and reconstruct an audio sine wave. The more general application of the Nyquist criterion is to sample at the characteristic width (standard deviation) of the function; for a Gaussian, FWHM = 2.355 sigma, so the criterion corresponds to FWHM = 2.355 pixels. But this criterion is measured along a single axis. To preserve resolution across the diagonal of square pixels it is necessary to multiply that value by sqrt(2), which yields a critical sampling of FWHM = 3.33 pixels.
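The arithmetic behind those figures can be checked directly; this short sketch simply reproduces the numbers quoted above:

    import math

    fwhm_per_sigma = 2.0 * math.sqrt(2.0 * math.log(2.0))   # ~2.355 (Gaussian FWHM / sigma)
    axis_criterion = fwhm_per_sigma                          # FWHM = 2.355 pixels along one axis
    diagonal_criterion = axis_criterion * math.sqrt(2.0)     # FWHM = 3.33 pixels across the diagonal
    print(round(axis_criterion, 3), round(diagonal_criterion, 2))   # 2.355 3.33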
Note that there are some considerations that favor a sample rate above the Gaussian critical value of FWHM = 3.33 pixels. To begin with, few scopes actually produce a pure Gaussian PSF so it may be wise to leave some headroom, especially if you plan to employ deconvolution (which needs detailed information about the PSF). Another consideration is that most CCD images are re-sampled in order to align them for combining. Such resampling usually results in blurring, which will destroy information if the sample rate is already at a minimum.
My experience has repeatedly confirmed that sampling at FWHM = 3.5 pixels (or more) is optimal for high-resolution images. Furthermore, I recently completed a rigorous experimental examination of this issue, which again confirmed this criterion (reported below).
It may seem that optimal sampling for resolution has no upper limit, as finer sampling does no harm to the intrinsic resolution. But resolution is also constrained by S/N because details dissolve in poor S/N and thus there is some reason to avoid excessive over-sampling. (Topic for future exploration).
I created a “virtual sky, telescope and camera” that produces Gaussian-convolved synthetic FITS images. This was done in software via a random number generator and the Box-Muller transformation. The images are built one virtual photon/electron at a time and as a consequence have true Poisson noise characteristics. This software allows me to generate precisely controlled images that have all of the important characteristics of real-world images. The permutations for interesting tests are truly numerous and only one typical example is presented here to illustrate the sampling issue. I investigated several permutations, but they all produced basically the same results (e.g. low S/N nebulae and stars suffer even more from under-sampling, but not dramatically so).
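The actual software is not reproduced here, but the approach described (Box-Muller deviates applied photon by photon, plus a Poisson sky and Gaussian readout noise) can be sketched roughly as follows. All parameter names and default values are illustrative, not taken from the experiment:

    import math, random
    import numpy as np

    def box_muller(rng):
        """One pair of standard-normal deviates via the Box-Muller transformation."""
        u1, u2 = 1.0 - rng.random(), rng.random()
        r = math.sqrt(-2.0 * math.log(u1))
        return r * math.cos(2.0 * math.pi * u2), r * math.sin(2.0 * math.pi * u2)

    def synthetic_star(size=64, fwhm_pix=6.0, star_photons=20000,
                       sky_per_pix=1000, read_noise=12.0, seed=1):
        """Build a star image one photon at a time, so counts carry true Poisson noise."""
        rng = random.Random(seed)
        sigma = fwhm_pix / 2.355
        img = np.zeros((size, size))

        # Star: each photon is displaced from the center by a Gaussian (seeing) blur.
        for _ in range(star_photons):
            dx, dy = box_muller(rng)
            x = int(round(size / 2 + sigma * dx))
            y = int(round(size / 2 + sigma * dy))
            if 0 <= x < size and 0 <= y < size:
                img[y, x] += 1

        # Sky background (Poisson) and Gaussian readout noise, then return the frame.
        np_rng = np.random.default_rng(seed)
        img += np_rng.poisson(sky_per_pix, (size, size))
        img += np_rng.normal(0.0, read_noise, (size, size))
        return img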
Image 1:
A synthetic image consisting of an artificial nebula and double stars. These objects were convolved via Box-Muller to simulate the Gaussian blur of atmospheric seeing and so on. The sample rate of this image is FWHM = 6 pixels. The virtual readout noise = 12 e-, the virtual gain = 1.0, and the virtual background flux = 1000 e-.
Image 2:
I created an identical image, sampled at FWHM = 3 pixels, by binning the above image 2x2.
Image 3:
I created an identical image, sampled at FWHM = 1.5 pixels, by binning the previous image 2x2.
You may download the images: resolution_test.zip
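For readers who want to reproduce the binning step on the downloaded images, a generic 2x2 summing bin looks like this (a sketch; the exact procedure used is not specified beyond "2x2 binning"):

    import numpy as np

    def bin2x2(img):
        """Sum each 2x2 block of pixels into one larger pixel (halves the sampling rate)."""
        h, w = img.shape
        h, w = h - h % 2, w - w % 2    # drop any odd edge row/column
        return img[:h, :w].reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))

    # Binning the FWHM = 6 pixel image once gives FWHM = 3 pixels (Image 2);
    # binning that result again gives FWHM = 1.5 pixels (Image 3).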
Data that has been sampled near the appropriate Nyquist criterion is in essence a "compressed" version of the inherent image. It is usually desirable to reconstruct the data from the compressed samples. These images were reconstructed via bi-cubic b-spline interpolation, which is commonly used for this purpose. Note that there are many different methods for resizing images and some of them additionally sharpen the resized image (e.g. Photoshop), which could produce misleading results when comparing to an unsharpened image.
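The specific software used for the reconstruction is not named; one commonly available cubic b-spline resize that applies no extra sharpening is scipy.ndimage.zoom, sketched here as an example of the same operation:

    from scipy.ndimage import zoom

    def bicubic_resize(img, factor):
        """Enlarge an image with cubic (order-3) spline interpolation, no extra sharpening."""
        return zoom(img, factor, order=3)

    # e.g. restore the 2x2-binned Image 2 to the original pixel scale:
    # restored = bicubic_resize(binned_image, 2)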
The images below are magnified by 3x to aid viewing (i.e. a single pixel in the source image is expanded to 3x3 pixels here).
Image 1: FWHM = 6 pixels
Image 2: FWHM = 3 pixels, bi-cubic resize 2x
Image 3: FWHM = 1.5 pixels, bi-cubic resize 4x
Conclusions:
As theory predicts, there is slight resolution degradation at FWHM = 3.0 pixels and significant degradation at FWHM = 1.5 pixels.
As discussed above, typical amateur equipment and conditions can be expected to produce images with FWHM between 2.0 and 3.5 arcsec. Thus a pixel size of 0.5 to 1.5 arcsec can be considered ideal for optimizing resolution.
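A small helper makes that final arithmetic explicit; the sampling rate of 3.5 pixels per FWHM follows the criterion discussed above, and the example FWHM values are the quoted 2.0-3.5 arcsec range:

    def ideal_pixel_size(fwhm_arcsec, pixels_per_fwhm=3.5):
        """Pixel scale (arcsec/pixel) that samples a given FWHM at the chosen rate."""
        return fwhm_arcsec / pixels_per_fwhm

    print(round(ideal_pixel_size(2.0), 2), round(ideal_pixel_size(3.5), 2))   # 0.57 1.0

Both values fall within the 0.5 to 1.5 arcsec range quoted above.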
© Stan Moore 2004