Astronomy

Algorithm to stack astronomical images


I'm looking for a simple algorithm to compare astronomical images (of the same sky region) against each other, compute their movement and rotation, to finally stack them.

At the moment I have a more or less working algorithm. First I extract all the stars from an image (including information like brightness and FWHM), and then I walk through the resulting "points" and create a triangle from each point and the two other stars that are closest to it.

This list of triangles is created for every image. After this I take one image as a reference and walk through its list of triangles to find, in the other image, a triangle whose side lengths all match (allowing some tolerance for the small relative shifts of star positions between images). For each of these matches I calculate the movement and rotation relative to the reference image. The last step is to find the matched triangles that share the same relative movement and rotation as the other matches. This is done by calculating the standard deviation, discarding triangles that are not within 1 or 2 sigma, and repeating the process until the standard deviation is very small.

The last part, finding "valid" triangles with the same movement/rotation, works fine. The problem is that sometimes I end up with only 2 or 3 "valid" triangles out of 300 initial ones. All the other triangles have side lengths different from those in the reference image.

So I assume it's the way I generate my initial triangles that causes the problem. Sorting stars by brightness and using that ordering to generate the triangles doesn't work either. So is there a better way to create the initial triangles in all the images?
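
For reference, a minimal sketch of the nearest-neighbour triangle construction and side-length matching described above (the star lists and the tolerance value are hypothetical):

```python
import math

def side_lengths(p1, p2, p3):
    """Three side lengths of the triangle, sorted ascending."""
    d = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    return sorted([d(p1, p2), d(p2, p3), d(p3, p1)])

def build_triangles(stars):
    """For each star, form a triangle with its two nearest neighbours."""
    triangles = []
    for s in stars:
        others = sorted((p for p in stars if p != s),
                        key=lambda p: math.hypot(p[0] - s[0], p[1] - s[1]))
        if len(others) >= 2:
            tri = (s, others[0], others[1])
            triangles.append((tri, side_lengths(*tri)))
    return triangles

def match_triangles(tris_ref, tris_other, tol=1.0):
    """Pair triangles whose sorted side lengths agree within tol pixels."""
    return [(tr, to) for tr, sr in tris_ref
                     for to, so in tris_other
            if all(abs(a - b) <= tol for a, b in zip(sr, so))]
```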


This page about a commercial product goes into some detail about their algorithm. It does the triangle matching you describe, with something like simulated annealing to get a more optimal solution.

The accepted answer to this closely related question recommends Hugin panorama software; it's open source, so you should be able to glean the algorithms used.


Have a look at SCAMP for astrometry and SWarp for stacking. Like the software mentioned in the other answer, both are open source, so you can check what algorithms they use.

SCAMP documentation is here, with an explanation of the algorithm in chapter 6.7 (page 25). There's also a short paper, but the manual seems more thorough.

Note that the software is written with wide-field multi-CCD mosaic detectors in mind, so what they do might be overkill for what you have in mind.


Stacking astronomy images with Python

I thought this was going to be easier but after a while I'm finally giving up on this, at least for a couple of hours.

I wanted to produce a star-trails image from a time-lapse set of pictures, inspired by this:

The original author used low-resolution video frames taken with VirtualDub and combined them with ImageJ. I imagined I could easily reproduce this process, but with a more memory-conscious approach in Python, so I could use the original high-resolution images for a better output.

My algorithm's idea is simple: merge two images at a time, then iterate by merging the resulting image with the next one. This is done some hundreds of times, with the merges weighted properly so that every image has the same contribution to the final result.
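
A minimal sketch of that running merge, accumulating in floating point so the pixel depth is only reduced once at the very end (this is what avoids the darkness/posterization problem described below; file names are hypothetical):

```python
import glob
import numpy as np
from PIL import Image

files = sorted(glob.glob("frames/*.jpg"))   # hypothetical file layout
acc = np.asarray(Image.open(files[0]), dtype=np.float64)

for i, name in enumerate(files[1:], start=2):
    frame = np.asarray(Image.open(name), dtype=np.float64)
    # Running mean: after i merges, every frame carries weight 1/i.
    acc += (frame - acc) / i

out = Image.fromarray(np.clip(acc, 0, 255).astype(np.uint8))
out.save("stacked.png")
```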

I'm fairly new to Python (and I'm no professional programmer, as will be evident), but looking around it appears to me that the Python Imaging Library is very much the standard, so I decided to use it (correct me if you think something else would be better).

This does what it's supposed to, but the resulting image is dark, and if I simply try to enhance it, it's evident that information was lost due to a lack of depth in the pixel values. (I'm not sure what the proper term is: color depth, color precision, pixel size.) Here's the final result using low resolution images:

or one I was trying with the full 4k by 2k resolution (from another set of photos):


Algorithm to stack astronomical images - Astronomy

Image stacking improves the signal-to-noise ratio, but not all stacking methods are equally effective. This article shows some of the differences, which methods are best, and which data format to use.

  • 001) ETHICS in Night Photography
  • 002) Beginning Astrophotography: Star Trails to Nightscape Photography
  • 1a) Nightscape Photography with Digital Cameras
  • 1b) Planning Nightscape Photography
  • 1c) Characteristics of Best Digital Cameras and Lenses for Nightscape and Astro Photography
  • 1d) Recommended Digital Cameras and Lenses for Nightscape and Astro Photography
  • 1e) Nightscape Photography In The Field Setup
  • 1f) A Very Portable Astrophotography, Landscape and Wildlife Photography Setup
  • 2a) The Color of the Night Sky
  • 2b) The Color of Stars
  • 2c) The Color of Nebulae and Interstellar Dust in the Night Sky
  • 2d1) Verifying Natural Color in Night Sky Images and Understanding Good Versus Bad Post Processing
  • 2d2) Color Astrophotography and Critics
  • 2e) Verifying Natural Color Astrophotography Image Processing Work Flow with Light Pollution
  • 2f) True Color of the Trapezium in M42, The Great Nebula in Orion
  • 2g) The True Color of the Pleiades Nebulosity
  • 3a1) Nightscape and Astrophotography Image Processing Basic Work Flow
  • 3a2) Night Photography Image Processing, Best Settings and Tips
  • 3a3) Astrophotography Post Processing with RawTherapee
  • 3b) Astrophotography Image Processing
  • 3c) Astrophotography Image Processing with Light Pollution
  • 3d) Image Processing: Zeros are Valid Image Data
  • 3e) Image Processing: Stacking Methods Compared (YOU ARE HERE)
  • 3f1) Advanced Image Stretching with the rnc-color-stretch Algorithm
  • 3f2) Messier 8 and 20 Image Stretching with the rnc-color-stretch Algorithm
  • 3f3) Messier 22 + Interstellar Dust Image Stretching with the rnc-color-stretch Algorithm
  • 3f4) Advanced Image Stretching with High Light Pollution and Gradients with the rnc-color-stretch Algorithm
  • 4a) Astrophotography and Focal Length
  • 4b1) Astrophotography and Exposure
  • 4b2) Exposure Time, f/ratio, Aperture Area, Sensor Size, Quantum Efficiency: What Controls Light Collection? Plus Calibrating Your Camera
  • 4c) Aurora Photography
  • 4d) Meteor Photography
  • 4e) Do You Need a Modified Camera For Astrophotography?
  • 4f) How to Photograph the Sun: Sunrise, Sunset, Eclipses
  • 5) Nightscape Photography with a Barn Door Tracking Mount
  • 6a) Lighting and Protecting Your Night Vision
  • 6b) Color Vision at Night
  • 7a) Night and Low Light Photography with Digital Cameras (Technical)
  • 7b) On-Sensor Dark Current Suppression Technology
  • 7c) Technology advancements for low light long exposure imaging
  • 8a) Software for nightscape and astrophotographers

Introduction
The Test Data
Results
Real-World Image Example
Sampling in the Camera
Conclusions
References and Further Reading
Questions and Answers


Introduction

Stacking is a term for adding/averaging multiple images together to reduce apparent noise (improve the signal-to-noise ratio) of the combined image. The signal-to-noise ratio, or S/N, increases by the square root of the number of images in the stack if there are no side effects. But in practice, there are side effects, and those side effects can result in visible problems in stacked images and limit the information that can be extracted. Which method(s) work best with the least artifacts?

There are basically 2 methods:

1) a form of add/average, including graded averages, sigma clipped average, min/max excluded average, etc.

2) median including clipped median, min/max exclusion, etc.

The effectiveness of stacking depends on the data that are fed to the stacking software. The raw output of most digital cameras is 14 bits per pixel. A raw converter may create the following:

a) linear 14-bit data from the DSLR, unscaled,
b) linear 16-bit data scaled by 4x from the 14-bit DSLR data, and
c) Tone curve 16-bit data.

To investigate the effectiveness of two different stacking methods, a standard deviation clipped average (called Sigma Clipped Average), and Median calculation, stacking was tested using different data types: linear 14-bit, linear 16-bit, and tone mapped 16-bit data.

The Test Data

To test the effectiveness of stacking, I constructed an image with embedded numbers (Figure 1). Then I scaled the numbers into an intensity sequence (Figure 2). Using scientific image processing software (Davinci from Arizona State University, http://davinci.asu.edu/), I computed images of random Gaussian noise. To the image in Figure 2, I added an offset, typical of Canon DSLRs, to keep the signal and noise from clipping at zero in the integer image files. I computed random noise at a level that made the S/N = 1.0 on the number 10 in the image. That means the number 1 has S/N = 0.1 and 25 has S/N = 2.5 in a single image file. An example single image with noise is shown in Figure 3.

One hundred images like that in Figure 3, each with different random noise, were computed for stacking tests. The first set of 100 images simulates the output of a DSLR with 14 bits per pixel. A second set of 100 was computed as linear 14-bit DSLR data scaled by 4x to 16 bits, like that from some raw converters. A third set of 100 images was computed and scaled as if through the tone curve output of a raw converter. Because the test concerns the faintest parts of an image, the tone curve scales the data by a factor of 10 but the response is still linear (see part 3b for more information on the tone curve function). The 16-bit data and tone curve data are still quantized at 14 bits, but the scaling potentially adds precision to the stack, and we will see the effect the scaling has on the final output.
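
A sketch of how such test frames could be generated (the article used Davinci; numpy is shown here, and the offset and noise values are illustrative, chosen to match the stated S/N = 1 at intensity level 10):

```python
import numpy as np

rng = np.random.default_rng()

# A ramp of intensity levels 1..25, standing in for the number/ramp
# images of Figures 1-2.
ramp = np.tile(np.linspace(1.0, 25.0, 256), (64, 1))

OFFSET = 2048.0   # bias offset so signal + noise never clips at zero
SIGMA = 10.0      # noise level giving S/N = 1.0 at intensity level 10

def make_frame(scale=1):
    """One simulated exposure, quantized to 14 bits by the 'camera' A/D;
    scale=4 mimics a raw converter stretching 14-bit data to 16 bits."""
    analog = ramp + OFFSET + rng.normal(0.0, SIGMA, ramp.shape)
    dn14 = np.clip(np.round(analog), 0, 2**14 - 1)
    return (dn14 * scale).astype(np.uint16)

frames = np.stack([make_frame() for _ in range(100)])
```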

Stacking was performed in ImagesPlus using two methods: median and sigma-clipped average. The clipping was set to 2.45 standard deviations. In a real image stack of the night sky, clipping removes most signatures from passing airplanes and satellites. In the test data here, there is essentially no difference between a simple average and a clipped average.
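
For readers who want to reproduce the comparison outside ImagesPlus, a minimal numpy sketch of the two combines (a single clipping pass is shown; real implementations often iterate):

```python
import numpy as np

def median_combine(stack):
    """stack: (n_frames, H, W) array."""
    return np.median(stack.astype(np.float64), axis=0)

def sigma_clipped_average(stack, kappa=2.45):
    """Per pixel, drop samples more than kappa sigma from the mean,
    then average the survivors."""
    data = stack.astype(np.float64)
    mean = data.mean(axis=0)
    std = data.std(axis=0)
    keep = np.abs(data - mean) <= kappa * std
    return (data * keep).sum(axis=0) / np.maximum(keep.sum(axis=0), 1)

# e.g. with the simulated frames above:
# med = median_combine(frames); avg = sigma_clipped_average(frames)
```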


Figure 1. The number sequence image.


Figure 2. The ramp sequence image.


Figure 3. Single frame with noise. The noise profile was designed to simulate the condition where sensor read noise + photon noise gives a S/N = 1 for the number 10.

Results

The results of the 14-bit linear output are shown for a median stack in Figure 4a and a Sigma Clipped Average stack in Figure 4b. The clipping was set to 2.45 standard deviations. Clearly, the Sigma Clipped Average produces a better result. The median is posterized to 14 bits, and it would be impossible to pull out faint signals. But even the 14-bit linear data are posterized, and only limited information could be extracted (the numbers below 10).


Figure 4a. 100 image median combine on 14-bit data.


Figure 4b. 100 image Sigma-Clipped average combine on 14-bit data.

The results of the 16-bit linear output are shown for a median stack in Figure 5a and a Sigma Clipped Average stack in Figure 5b. The clipping was set to 2.45 standard deviations. Again, the Sigma Clipped Average produces a better result. The median is quantized to 14 bits. The median stack shows the numbers less than 10, but each number has a constant intensity level and the background has another constant intensity level. The Sigma Clipped Average produces a smoother result with decreasing intensities on the numbers, as expected, and a lower noise background. The Sigma Clipped Average also shows better separation of the numbers from the background. This means that fainter stars, nebulae and galaxies can be detected in an astrophoto.


Figure 5a. 100 image median combine on 16-bit data.


Figure 5b. 100 image Sigma-Clipped average combine on 16-bit data.

The results of the tone curve output are shown for a median stack in Figure 6a and a Sigma Clipped Average stack in Figure 6b. The clipping was set to 2.45 standard deviations. Again, the Sigma Clipped Average produces a better result. The median is quantized to 14 bits and appears similar to that in Figure 5a, but slightly less noisy. The median stack shows the numbers less than 10, but each number has a constant intensity level and the background has a slightly different constant intensity level. The Sigma Clipped Average produces a smoother result with decreasing intensities on the numbers and a lower noise background. The noise in the Sigma Clipped Average result (Figure 6b) is lower than that in the 16-bit linear stack (Figure 5b).


Figure 6a. 100 image median combine on tone-mapped data.


Figure 6b. 100 image Sigma-Clipped average combine on tone-mapped data.

In Figure 7, I compare the three data formats for the median stack. Of these three, the tone curve results are the best, but all the median results are inferior to the Sigma Clipped Average results.




Figure 7.
Top: 100 image median combine on 14-bit data.
Middle: 100 image median combine on 16-bit data.
Bottom: 100 image median combine on tone-mapped data.

In Figure 8, I compare the three data formats for the Sigma Clipped Average stack, along with a full 32-bit floating point stack. Of the first three, the tone curve results are the best and are visually indistinguishable from the 32-bit floating point results.





Figure 8.
Top: 100 image Sigma-Clipped average combine on 14-bit linear (input and output) data.
Upper Middle: 100 image Sigma-Clipped average combine on 16-bit linear data.
Lower Middle: 100 image Sigma-Clipped average combine on tone-mapped data.
Bottom: 100 image Sigma-Clipped average combine on 32-bit floating point output data (14-bit linear input).

Real-World Image Example


Figure 9.
a) Raw conversion to linear 14-bit unscaled tif (same levels as the raw file). Calibration using flatfields, dark and bias frames in ImagesPlus. Alignment of frames and median combine in ImagesPlus, then a 32-bit floating point FITS file written. Significant stretching on the 32-bit floating point result in ImagesPlus. Refinement in Photoshop on 16-bit data. (The 16-bit tiff data are here, 1.4 MByte crop; try reducing the noise to the level of panel c.)
b) Raw conversion to scaled 16-bit linear tif in ImagesPlus. Calibration using flatfields, dark and bias frames in ImagesPlus. Alignment of frames and sigma-clipped average combine in ImagesPlus, then a 32-bit floating point FITS file written. The sigma clipping was set to 2.45 standard deviations. Significant stretching on the 32-bit floating point result in ImagesPlus. Refinement in photoshop on 16-bit data.
c) Raw conversion in Photoshop ACR using lens profiles as described here. Alignment of frames and sigma-clipped average combine in ImagesPlus, then a 16-bit tif file written. Stretching with curves in Photoshop.
d) Same as in (c) with some sharpening applied. In this case, mild Richardson-Lucy deconvolution in ImagesPlus, 7x7 Gaussian, 5 iterations and applied to the brighter regions. The lower noise of method (c) enables more sharpening and extraction of fainter and more subtle detail.
Note, the difference in color balance represents small differences in processing and should be ignored. What is important is the apparent noise, both luminance and color noise, apparent detail and subtle tonal gradations.

Sampling in the Camera

To detect the smallest signals in a stacked exposure set, sampling in the camera should be high enough to digitize those tiny signals. Figure 10 shows sampling effects by the camera A/D in a low noise situation (low camera noise).

It shows that to detect photons arriving at sub-photon-per-exposure rates, there is an advantage in going higher than unity gain, that is, higher ISO (higher gain means a smaller e/DN; DN = data number in the image file). In night sky photography, as noise from sky glow increases, the benefits are reduced. One also sees from the diagram that there is not much gain in faint object detection in going from 0.3 to 0.2 e/DN (that would be a 50% increase in ISO), but a fair jump in the faintest details digitized in going from unity gain to 1/3 e/DN. The image data are normalized so that the number 25 appears the same brightness.

In the 100 image stack, by digitizing at 0.3 e/DN, 20 photons are just detectable, a rate of 1 photon per 5 images on average. At unity gain it is about 4 times worse. In real-world cameras with some banding present, the differences between gains will be larger. This implies more than a stellar magnitude of difference in detection limits in astrophotography.


Figure 10. In camera sampling in a low noise situation shows that to detect low signals, about less than one photon per exposure, sampling by the A/D converter in the camera should be smaller than 1 electron per analog-to-digital converter unit (Data Number, DN). For cameras with around 5 to 6 micron pixels, that gain is typically around ISO 1600.

The model used to make the data in Figure 10 used a Poisson distribution for the photon noise, Gaussian distribution for the read + dark current noise, and +/- 1 DN error in the A/D conversion, then scaled the output images to the same level for comparison. The A/D error is important. For example, say you had 1 photon in a pixel in each exposure, the A/D will have some with no photons, and some with 2, thus increasing the error. Of course the other noise sources modulate that, but including the A/D conversion noise is important in showing the trend. As other noise sources increase, the A/D effect becomes less and it is less important to work at higher ISOs, unless one also needs to overcome banding problems (which is often the case).
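
A sketch of that model for a single pixel (the gain, noise, and signal values are illustrative, and the ±1 DN term is a crude stand-in for the A/D conversion error described above):

```python
import numpy as np

rng = np.random.default_rng()

def stacked_pixel(photons_per_exp, read_noise_e, gain_e_per_dn, n_frames):
    """Average DN over a stack, converted back to electrons for comparison."""
    photons = rng.poisson(photons_per_exp, n_frames)               # Poisson photon noise
    electrons = photons + rng.normal(0.0, read_noise_e, n_frames)  # read + dark noise
    dn = np.round(electrons / gain_e_per_dn)                       # A/D quantization
    dn += rng.integers(-1, 2, n_frames)                            # +/- 1 DN A/D error
    return dn.mean() * gain_e_per_dn

# Sub-photon-per-exposure signal, 100-frame stack, unity gain vs about 1/3 e/DN:
for gain in (1.0, 0.33):
    est = stacked_pixel(0.2, read_noise_e=2.5, gain_e_per_dn=gain, n_frames=100)
    print(f"{gain:.2f} e/DN -> recovered level ~ {est:.2f} e")
```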

The keys to detecting the faintest details in astrophotography are:

1) Darkest skies you can get to.

2) Largest aperture quality lens/telescope you can afford, with fastest f/ratio. Fast f/ratio and large aperture are key because you gather the most light in the shortest time.

3) Camera operation: a) an ISO where banding is small enough not to be a factor, and b) a gain (ISO) of about 1/3 e/DN. If banding is still an issue at 1/3 e/DN, then go to a higher ISO.

In choosing a digital camera for astrophotography:

1) Recent models from the last couple of years (from all manufacturers) have better dark current suppression and lower banding, as well as good quantum efficiency.

2) Models with no lossy raw compression, no star-eating raw filtering, and minimal raw filtering in general (at least 2, maybe more, manufacturers are great in this area).

If read noise is on the order of about 2 or 3 electrons (or less) at an ISO gain of 1/3 e/DN, with low banding at that gain, having lower read noise will not show any difference in long exposure astrophotography where you can record some sky glow in each exposure. Only if you are trying to do astrophotography with exposures of a couple of seconds, with narrow-band filters, or with very slow lenses/telescopes would read noise become an issue.

Where a camera becomes "ISOless" is largely irrelevant in astrophotography because ISOless is about low read noise, and read noise gets swamped by other noise sources. Again, it is good to have reasonably low read noise (meaning 2 to 3 electrons or so) at an ISO gain of about 1/3 e/DN. Sure, it is fine to have lower, but it makes little difference when you expose the sky to 1/4 to 1/3 of the histogram and then add in noise from dark current. These noise sources are what limit dynamic range and the faintest subject detections, NOT read noise and whether or not you are at an ISOless level or unity gain.

Conclusions

Bit precision and method are important in stacking. The averaging methods are superior to median combine. As digital cameras get better, with lower system noise (including read noise and noise from dark current), and as more images are stacked, the output precision of the stack must be able to handle the increase in the signal-to-noise ratio. Simple 14-bit precision is inadequate, as is 14 bits scaled to 16 bits (only a 4x improvement in precision) when stacking more than about 10 images. When stacking large numbers of images, the increase in precision can only be met by averaging into 32-bit integer or 32-bit floating point output, or by tone mapping if 16-bit image files are used.

If you are working with large numbers of images in a stack and want to work in a linear intensity response, then it is best to use an image editor that can work in 32-bit floating point, including storing the stacked image in a 32-bit integer or floating point format.

Tone mapping gives an approximately 40x improvement in intensity precision for the faintest parts of an image when working with 16-bit files compared to linear 14-bit digital camera raw output. Thus, tone mapping allows one to extract fainter details when working with 16-bits/channel image editors and 16-bits/channel format image files. If we assume a 1-bit pad in precision, the 40x scaling of the faintest tone mapped data would be good for 20x improvement in S/N, which means stacking up to 20 squared or 400 images should still work with adequate precision.

Of course, encountering these effects also depends on the noise levels in your imaging situation, including noise from the camera system and noise from airglow and light pollution. As noise increases, these side effects become hidden (literally lost in the noise).

If stacked images are done in linear 14 or 16-bit output, you may run into posterization, which shows as "splotchiness" and what I call a pasty look in images when stretched. I see a lot of these artifacts in online posted astrophotos, which indicates to me that posterization in the stacking is likely occurring.

The point at which a float becomes necessary depends on the noise level (including read noise plus noise from dark current and noise from airglow and light pollution). One way to check this is to do some statistics on a single exposure in a dark area with no stars or nebulae. Say you do that and find a standard deviation of 6.5. If you then stack more than 6.5*6.5 = 42 frames, the result will be integer quantized, and extracting the faintest detail will be limited by integer math. In that case it is better to save the result of the stack in a 32-bit float format and stretch the results from the 32-bit float data. In ImagesPlus, you can keep the default format at 16-bit and then, at the end of the stack, save a copy of the image, select FITS, then select the 32-bit float format.
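
That rule of thumb is easy to automate; a sketch, assuming the file name and the patch coordinates (both hypothetical) point at a dark, star-free region of one calibrated exposure:

```python
import numpy as np
from PIL import Image

frame = np.asarray(Image.open("calibrated_frame.tif"), dtype=np.float64)  # hypothetical file
patch = frame[100:150, 200:250]      # dark region with no stars or nebulae
sigma = float(patch.std())
limit = int(sigma ** 2)              # e.g. sigma = 6.5 -> about 42 frames
print(f"std = {sigma:.1f}; beyond ~{limit} frames, save the stack as 32-bit float")
```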

Questions and Answers

Why is the median stack image in Figure 4a flat with a single value for the background?

Answer. When the input data are quantized, the median combine is also quantized. Even if the input data are scaled to floating point values, they are still quantized (e.g. multiplied by 1.23 and carried as floating point, the values are still discrete, just now separated by 1.23). The median combine chooses one of those quantized values as the median. By stacking many images, when the noise in the stack is reduced to a fraction of the quantization step, the median collapses to a single value. Technically, this occurs when the standard deviation of the median is less than the quantization interval, and it becomes significantly posterized when the standard deviation of the median is about 1/4 the quantization interval. How many images need to be combined for this to happen with real data depends on the quantization interval and the system noise. But once posterization starts, adding more frames accelerates the collapse (Figure 11), and one cannot extract a fainter signal from the data. Again, it illustrates that a median combine should be avoided when stacking image data.
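
The collapse is easy to see in a small simulation; a sketch (noise sigma is set near one quantization step so the effect appears at modest frame counts, and odd frame counts keep the median on the quantized grid):

```python
import numpy as np

rng = np.random.default_rng()
for n in (5, 11, 21, 51):
    # 10000 independent pixels, n frames each, Gaussian noise of
    # sigma = 1 DN, rounded to integers as in a raw file.
    frames = np.round(rng.normal(100.0, 1.0, (n, 10000)))
    med = np.median(frames, axis=0)
    print(f"{n:3d} frames: std of median = {med.std():.3f}")
```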


Figure 11. Example quantization effects as more frames are median combined. The standard deviation of the median is 0.21 after 10 frames showing significant posterization. At 20 frames, quantization has reduced the standard deviation of the median to 0.02 resulting in a mostly uniform background. The sequence proceeds to collapse all values with a signal-to-noise ratio less than 1 in one frame to a single output value in the median stack.

The next question is: is the generated noise really Gaussian? Figure 12 shows linear and log plots of the noise profile (crosses) and a Gaussian profile (line). The data are Gaussian within counting statistics (+/- 1). Online posts have charged that the data are not Gaussian, but this confuses the Gaussian profile with the sampling interval.

Figure 12. Statistics of the noise used in the models (crosses), compared to mathematical Gaussian profile (lines). Top: linear plot. Bottom: log plot. The one high point at x=1006 is round-off of finite counting statistics: the high value is 2 +/- 1.

On why the median combine in Figure 9a is noisier:
"Also, specifically for light, photons tend to arrive in bundles or waves, sometimes causing a disruption in the expected Poisson distribution. And that's something that should not be ignored. The result of these clumps of extra photons or periods of no photons is a possible disruption the median value, moving it higher or lower than it should have been. I think this not only leads to nosier result than an average, but also less quantization exactly *because* there is *more* noise. Again, your Figure 9 median combine with real data shows this. There is a nosier median than the average, as expected from statistical mathematics."

Answer. First, the input data (raw files) are exactly the same for all methods in Figure 9. If noise from photon clumping were a factor, then the same pixels in the images made with the other methods would also appear noisier. They do not. Second, see The Present Status of the Quantum Theory of Light, edited by Jeffers, Roy, Vigier, and Hunter, Springer Science, 1997, page 25, where the clumping is described as being observed from astronomical objects on a few-meter scale. At the speed of light, that corresponds to a few-nanosecond scale, compared to the 4260-second exposure time of the Horsehead image in question. The photon clumping idea is off by some 12 orders of magnitude. The noise in Figure 9a is purely a consequence of median quantization.

On why the noise is different between the images: "Noise is worse than 25% because you did multiple operations, each adding its own error. Applying darks and flats also adds noise. The 80% S/N is just for 1 simple median. Honestly, if you didn't realize that there should be extra error, then I don't know what to say."

Answer. All image combine stacks were performed in ImagesPlus which used 32-bit floating point. The input data from a DSLR of course are quantized and remain so after conversion to 32-bit floating point. The 32-bit floating point calculations ensure that there is no significant error added by the stacking operations. Also, the exact same master dark frame, flat field and bias frames were used for both the median and the average, so if the errors were due to these data, it would show in both the median and average stacks. Obviously it does not.


References and Further Reading

The open source community is pretty active in the lens profile area. See:

Lensfun lens profiles: http://lensfun.sourceforge.net/ All users can supply data.

Tech Note: Combining Images Using Integer and Real Type Pixels. This site also shows median combine not producing results as good as mean methods: http://www.mirametrics.com/tech_note_intreal_combining.htm.

DN is "Data Number." That is the number in the file for each pixel. It is a number from 0 to 255 in an 8-bit image file, 0 to 65535 in a 16-bit unsigned integer tif file.

16-bit signed integer: -32768 to +32767

16-bit unsigned integer: 0 to 65535

Photoshop uses signed integers, but the 16-bit tiff is unsigned integer (correctly read by ImagesPlus).


Astrophotography Image Stacking – Astro Stacking

Hopefully you’ve been out shooting and applying what you’ve learned about astrophotography. For most there’s a fairly big learning curve with astrophotography. I was always pretty good with the computer, electronics, and the mechanical hardware, but learning to process the images was a huge challenge. Hopefully I can share what I’ve learned to help speed up your learning process.

There’s a lot to learn when it comes to taking the images from the camera to making a final image for display. You’ll find that 99% of the deep sky images that you shoot will require some form of post-processing. But before we even discuss doing any processing, let’s discuss how to best shoot the scene.

In the previous blogs, I’ve hinted about a technique that will let you get the most out of your astro images. Shooting very faint moving targets can be pretty challenging. It takes fairly decent equipment to get the really faint stuff, but beyond this, it’s important to properly photograph the subjects. There is one valuable technique that will help tremendously with processing and make the most of your data. This technique is stacking.

Let’s take a look at stacking in very basic terms. Shooting faint targets makes for generally noisy images. This is true for astrophotography as well as regular photography. This means the photos look grainy and lack silky smooth transitions. In astrophotos, noise disturbs the transition from the target object to the dark regions. But if you shoot many photos of the same subject and stack them together, the result is far better than that of a single frame. The noise and graininess are filled in, and the image appears much smoother and more complete. When I was going for the best quality images, I would generally shoot for between 10 and 20 hours of open shutter time. But again, these were for my very best deep sky images on professional level equipment. For me, that meant shooting over many nights and stacking all the data in the final image. I was shooting exposures that were ½ hour long, so I needed fewer frames. But the end result was a lot of data that, when assembled, resulted in very good data sets.

If you’re just starting out it’s not necessary for you to shoot this much. But generally the more you shoot the better. There’s a big difference that can be seen immediately in the final image. There is a point of diminishing returns, but most astrophotographers will never come close to this limit. So if you can start with shooting a couple hours you’ll end up with fairly decent data. But even shooting and stacking 10 images will be better than one single frame. The better the data, the easier it is to process into the final image.

How do we begin…? Once you have your mount aligned (see my previous blogs) the target framed and the lens or telescope focused, you can start shooting your images. Shoot the same subject, over and over. I generally use a computer or an intervalometer to take the work out of this. This allows me the ability to walk away and let the camera shoot until it’s done. Just be aware that you may need several batteries or an AC adapter for your camera. This is especially true in the cold. For your first outing, try to shoot for at least an hour of open shutter time. That means if you’re shooting 5 minute shots you’re going to want 12 of these to make an hour. It’s generally best to shoot with an exposure as long as possible, but not so long that the image becomes saturated with light fog or you begin to get star trails. I generally tried to shoot until I reached about 25-75% on the camera’s histogram. But this depends on the target and from where I’m shooting (and how much light pollution is present). Just keep in mind that 1 hour is not a magical number. Shoot more, if you have time and patience. This will make the post-processing after the stack easier and the final image even smoother.

Once you have the stack, what’s next? You need to process all these images into a single image. This is possible in Photoshop and there are some really great videos and information on the topic. So I’ll leave this learning process to those interested in doing the stacking in this manner.

The real benefit is doing the stacking in a program that is meant for processing astrophotos. There are many programs that are available to do this, some are even available for free. I used a program called MaximDL which is a high-end piece of professional astrophotography processing software. In addition to doing some processing, it also handles camera control, filter wheel control, focusing, guiding and many other aspects of shooting deep sky images. In a complex setup, it’s very beneficial to have control of everything in a single piece of software. However for those just starting out, look at getting Deep Sky Stacker (DSS). It is an excellent stacking program and is available at no cost. This allows you to practice shooting and processing images without investing a lot of additional money in software.

Be sure to take a look at the excellent instructions on the DSS website and online. It is fairly powerful and capable of producing nice images. It will also allow the addition of calibration frames (discussed below), which is another very powerful feature for noise control. I generally found that I liked doing the stacking in DSS and then doing the remainder of the processing in Photoshop or a similar image processing program. But that’s totally my preference. Each photographer should investigate the best workflow and combination of programs to use to produce the final image.

One really great feature of DSS is the comet stacking routine. Processing comets is even more complicated, as the comet is typically in a different location in each frame. Some move slowly enough that you don’t have to worry about it, but others can move a significant amount between frames. This typically takes some crafty processing to get a decent image. DSS takes a lot of the work out of it. This image was processed in DSS and Photoshop.

Coat hanger Asterism (CR399) and Comet Garradd

When beginning the stacking process, the images need to be quality sorted and then aligned (or registered). The quality sorting can be done automatically in DSS, but I generally liked poking through the images and picking out the ones that were blurred from movement or had clouds or planes in them. The registration or alignment adjusts each image in translation and rotation to bring all the frames into perfect alignment, and then stacks them together using one of several stacking methods. I generally prefer one of the median stacking methods.

Many of my astrophotos, including the comet photo above, were shot with professional level equipment. This equipment cost about half what my first house cost. To be fair, I wanted to show what can be done with a DSLR and lens (or small telescope), so I re-processed some of my earliest images in DSS, knowing what I know now. These were shot with an astro-modified, 6.3MP Canon 300D (Digital Rebel). This is one of the earliest DSLRs. It was noisy and did not generally produce very clean astro images. But even with this old camera, the data was very usable and produced some fairly decent images. We’ll take a look at a few of these below:

My first modified DSLR for Astrophotography

Stacking Examples

Here are some examples of images right out of the camera alongside processed images. The first is a single frame that shows the Heart & Soul nebulae (IC1805, IC1871, NGC 869 and NGC 884) as well as the Double Cluster. The top is out of the camera; the next is after stacking and processing.

Unprocessed, right out of the camera

Stacked and post processed

The difference in these is drastic. In fairness, the single frame image was fogged by heavy light pollution. But this is a problem that will plague the majority of astrophotographers. The only way to combat this is to shoot from dark sites away from the city lights.

This next example is not as drastic. The top is out of the camera, the bottom is stacked and processed. Also included are crops of a single frame and stacked and processed images.

Single Frame and crop of the Rosette Nebula (NGC 2237)

Notice the missing details in the crop of this image.

Stack/post processed image and crop of the Rosette Nebula

The stacked image is much cleaner, and much of the missing data has been filled in. Also note the better detail that is visible in the crop of the Rosette. This is the real benefit of the stacking method. One thing that you need to keep in mind with processing astrophotos is that it’s an incremental process. No single step is going to make a magical image from junk. Each step adds a tiny improvement, and with enough tiny steps you’ll end up with a very pleasing image. If you’re stacking many photos, most stacking software will take quite a while if your computer isn’t up to the task (like mine). So be patient and just let it run until it’s completed the registration and stacking processes.

Here’s another example of a single frame vs a stack. This one is of the Horsehead Nebula (B33) in Orion.

Single Frame and crop

Stack/post processed image and crop

It’s fairly easy to see the benefit of stacking when shooting astrophotos. One more advanced technique that will help reduce the noise in your stacks is called dithering. Basically this is moving the camera a couple pixels in a random direction after every frame. When using a median stacking method, objects in a different location on each frame will be eliminated. So using the stars as the alignment reference, the galaxies, nebulae or other subjects will remain in the same place. But hot pixels, satellites, planes, noise and other random effects will be in a different location, with respect to the stars, so these are eliminated when stacked. There are many guiding or tracking programs that will do dithering automatically. But even with a manual shutter release, it can help tremendously if you manually move the mount between exposures. It seems like a hassle, but dithering will add a fairly significant level of improvement. None of the images above (except the comet image) used dithering.
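
A toy illustration of why dithering works with a median stack (all values here are made up): the hot pixel stays fixed on the sensor, but after the frames are aligned on the stars it lands at a different image position in each frame, so the median rejects it.

```python
import numpy as np

rng = np.random.default_rng()
frames = rng.normal(100.0, 5.0, (8, 64, 64))   # 8 star-aligned frames of sky
for i, frame in enumerate(frames):
    # The dithered hot pixel falls at a shifting position after alignment.
    frame[10, 10 + i] = 4000.0

stacked = np.median(frames, axis=0)
print(stacked[10, 10:18].round(1))   # hot pixel gone; values near 100
```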

Another helpful addition is calibration frames. These serve to remove additional noise and other artifacts from the images. Dark frames help remove hot pixels, bias frames reduce read noise, and flat frames help clear up any dust spots or other specks caused by looking through the lens or telescope. There is a superb description of this in the FAQ section here. Newer, more modern cameras tend to provide better noise and hot pixel control, so calibration might not be needed. But at the very least, flat frames should be used to ensure the removal of artifacts caused by a dirty lens or sensor. They will also help reduce any vignetting that occurs in the images. Remember: incremental improvements.

In the final installment of this Astrophotography series, we’ll discuss some of the details of going from a rough stacked image to the final image. This is where a lot of the magic happens so I hope you’ll stay tuned. In the meantime get out and shoot. See you soon.


Image Processing with CCDStack 2

By: The Editors of Sky & Telescope, August 31, 2015


Improve your deep-sky images with this innovative program.
By Bob Fera in the June 2013 issue of Sky & Telescope

Of the myriad programs available today for processing CCD images, CCDStack 2 is the choice of many seasoned imagers due to its intuitive user interface and various innovative “live” features. Veteran astrophotographer Bob Fera demonstrates his routine workflow using the software to calibrate, align, stack, and stretch his images to produce colorful portraits of celestial targets such as the deep photo above of the Cone Nebula.

Ask any experienced astrophotographer, and he or she will tell you that transforming a bunch of noisy sub-exposures into a colorful piece of art is no small feat. The process involves many steps using a variety of software packages, each with its own learning curve. For many imagers, the “art” happens in Adobe Photoshop. But before you can use a tool such as Photoshop to apply your personal touch to an image, your data must first go through a series of decidedly less sexy steps — calibration, alignment, and combination. And while these steps involve limited creative input, they are nonetheless critical to the final look of your picture.

Among the numerous programs for processing CCD data, I prefer CCDWare’s CCDStack 2 (www.ccdware.com) for PCs to calibrate, align, stack, and stretch my images into 16-bit TIFF files that are ready for the final tweaks in Photoshop. The program’s strength lies in its intuitive user interface, as well as some “live” stretching features. CCDStack 2 has worked well for me over the years and should provide you with a solid foundation for developing your own methods.

Image Calibration

Let’s begin by preparing our calibration files. I always record several dark, bias, and flat-field images and combine these into “master” calibration frames to ensure that my final result is as clean as possible. This reduces any spurious artifacts in my calibration frames due to cosmic ray hits or other unwanted signals.

Start by opening the program and selecting Process/Create Calibration Master/make master Bias. The program will immediately open the last folder you used in CCDStack 2, so you may need to navigate to your calibration files folder. Once there, select all the bias frames that match the temperature at which you shot your light frames. The Combine Settings window then opens and offers a few different ways to combine your biases into a “master” bias frame. I prefer to use the sigma reject mean method, with the sigma multiplier changed to 2 and an iterations limit of 2.

In a few moments, your master bias frame is displayed. Simply save the result as a 16-bit FITS file, and repeat the same process to combine your dark frames by selecting the Process/Create Calibration Master/make master Dark.

CCDStack 2 is the only image processing program for amateurs that displays exactly which pixels in each image it determines will be ignored when combining sub-exposures using sigma-based data-rejection algorithms. The red spots on the image above are flagged to be rejected in a stack of 10 images.
Bob Fera

Generating your master flat-field image is similar, though the program will first ask you if you wish to dark/bias subtract each flat frame. If so, choose the master bias frame you’ve just created, and also the master dark frame that matches your flat-field image. When you reach the Combine Method dialog, again choose sigma reject mean with a multiplier of 2 and an iteration value of 2. Make sure to repeat this routine for the flats taken through each of the filters you shot through. Now that we have our calibration frames ready, let’s tackle our raw data.

Open all of your individual exposures taken through one of your filters (if you use a monochrome camera with color filters). Next, select the pull-down menu Process/Calibrate. The Calibration Manager window opens, which will automatically find your master dark, bias, and flat frames if they were saved to the same folder you were working in previously. If not, click the “Dark Manager” button and navigate to your master frames. Once all of your master frames are selected, simply click the “Apply to all” button at the bottom left, and in a minute or so all of your images in this group will be calibrated. Save each of these calibrated images by selecting File/Save data/Included in the pull-down menu. A new window will open that allows you to add a suffix to your file title, to avoid overwriting your raw data. Select the 32-bit FIT float file option. Now you can repeat the same steps for each of your other filtered-image groups.

Now that all our images are calibrated, let’s align each frame. If you have plenty of RAM on your computer and a fast processor, you can open all your calibrated exposures and align them all at once. If you have limited memory, you can perform your alignment in groups, but remember to select one image to be the “base” image that all the others will be aligned to. Make sure your alignment frame is the visible image, then select the Stack/Register pull-down menu, and the Registration window opens. CCDStack 2 automatically detects multiple stars in your images, or allows you to select your own registration points if you so choose. Once you’ve selected the alignment points, click the “align all” button at the bottom left, and in a few moments each of your sub-exposures should be aligned properly. Before applying the alignment permanently, pan through each of your images to make sure each one worked properly. If so, click the Apply tab at the top right. The program offers a few resampling options to compensate for the sub-pixel shifting of each frame. I prefer Bicubic B-spline, but you can experiment to see what works best for your images. After the alignment is applied, save the results with a new suffix.

After calibration, aligning your sub-exposures is easy in CCDStack 2. Simply open all the images to align, select Stack/Register, and the program automatically selects multiple stars to use as registration points. It also displays a type of “difference” between the base photo and each subsequent image that makes it easy to see when two images are out of register (left) and in alignment (right).
Bob Fera

Data Rejection

At this point we have all our images calibrated, aligned, and ready to stack. Combining your sub-frames properly will dramatically increase the signal-to-noise ratio of your final image, while eliminating unwanted airplane and satellite trails and other random artifacts. In CCDStack 2 this involves three steps: Normalize, Data Reject, and Combine.

Normalizing your data mathematically compensates for variations in sky background and transparency, scaling all of your open sub-exposures to similar brightness values for corresponding pixels. This step is necessary to produce the best stacked result.

First open all the images taken with a single filter and select Stack/Normalize/Control/Both. A small window opens that asks you to identify the background sky area. Simply click your mouse and pull a tiny rectangular selection around an area that will be a “neutral” background sky with no bright nebulosity, galaxies, or stars in your selection. For images where nebulosity permeates the entire frame, try to find a region with the faintest nebulosity, or a dark nebula, as your background selection. After you’ve made your selection and clicked OK, the program will then ask you to select a highlight area. This will most likely be your main subject, whether it’s a galaxy, nebula, star cluster, or comet. Make a selection around the brightest area and click OK. The Information window pops up and displays the calculated offset for each of your open images.
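
CCDStack's internal math isn't spelled out here, but the idea can be sketched as a linear rescale of each frame so its background and highlight regions match a reference frame (the function name and region slices are hypothetical):

```python
import numpy as np

def normalize(frame, ref, bg, hi):
    """bg and hi are (slice, slice) tuples marking the chosen background
    and highlight regions, the same in both images."""
    b_f, b_r = frame[bg].mean(), ref[bg].mean()
    h_f, h_r = frame[hi].mean(), ref[hi].mean()
    scale = (h_r - b_r) / (h_f - b_f)   # match contrast between the regions
    offset = b_r - scale * b_f          # then match the background level
    return scale * frame + offset

# e.g. bg = (slice(0, 50), slice(0, 50)); hi = (slice(200, 240), slice(300, 360))
# frames_norm = [normalize(f, frames[0], bg, hi) for f in frames]
```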

Next, we need to choose which method of data rejection to use. Data rejection identifies and removes undesirable artifacts in each of your individual images, replacing the offending areas in your final stacked result with the corresponding region from multiple unaffected sub-frames.

Choose Stack/Data Reject/Procedures and another new command window opens. Here we’ll select the data rejection algorithm from the pull-down list. I prefer to use the STD sigma reject, but you can experiment again to find what works best for your images. Check the “top image %” box, and set the value to 2, then click the “Apply to All” button. This can take a few moments, but when complete, the program will display all the rejected pixels in each of your sub-exposures as bright red. Now simply close the window and move on to the next step.

Now we’re ready to combine our images into the final stacks. Once again, the program offers a number of ways to do this. Refer to the internal help file to determine which suits your images best. I prefer mean combine, so I’ll select Stack/Combine/Mean from the top pull-down menu. The software will then compute the mean value for each pixel in the stack of sub-exposures, while excluding the rejected pixels. This will give you the maximum signal-to-noise ratio in your final image. When completed, save the resulting image (File/Save Data/This), and again choose 32-bit FITS integer files. Close all files (File/Remove all images), and repeat the same steps for all like-filtered files.

Bob Fera

Now we have master FITS files ready to combine into a color image. I prefer to process luminance images separately and then add them to the color result in Photoshop. Before combining any of the stacks, check them over carefully and address any gradients that may be affecting the individual stacks. CCDStack 2 has a gradient removal algorithm that can be found in the pull-down menu Process/Flatten Background, which requires you to click areas in your image until they appear evenly illuminated.

Stretching and Deconvolution

Now let’s stretch our luminance file using the Digital Development Process (DDP) feature. One of the software’s most important features is its ability to do a “live” DDP on the displayed version of your file. First open your master luminance image, and select Window/Adjust Display, opening a window that displays sliders to adjust the Background, Maximum, Gamma, and DDP levels of the displayed image. You can now simply adjust each of the sliders until you’re happy with the displayed result. The lower the DDP value is (when moving the slider to the left), the brighter the image becomes. I suggest keeping the image appearing slightly darker than how you’d like it to eventually look. This performs the bulk of the required stretching, but still leaves room for final tweaks in Photoshop. Once you get the image looking the way you want it, lower the Background value by around 50 points to avoid clipping the black level in your final image. Apply the display settings to your image with the pull-down option File/Save scaled data/This, and select TIFF 16 bit.
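
CCDStack's exact DDP implementation isn't public in this article, but a common hyperbolic formulation of a DDP-style stretch looks like the sketch below; lowering the `ddp` parameter brightens the image, matching the slider behavior described above (the parameter values are illustrative):

```python
import numpy as np

def ddp_stretch(img, background, ddp):
    """img: float array. Subtract the background pedestal, then apply a
    hyperbolic stretch that lifts faint signal and compresses highlights."""
    x = np.clip(img.astype(np.float64) - background, 0.0, None)
    y = x / (x + ddp)
    return y / y.max()        # normalize to 0..1 for display

# Save as a 16-bit-TIFF-ready array (values are hypothetical):
# out16 = (ddp_stretch(lum, background=300.0, ddp=1500.0) * 65535).astype("uint16")
```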

Among the program’s most innovative features is its ability to display “live” feedback when stretching an image to display both the faintest areas and brightest regions simultaneously. In the Adjust Display control window, simply move the DDP slider to the left and right to adjust your image, or change the background and maximum level display. None of these actions is applied permanently until your image is saved.
Bob Fera

You can also sharpen your image using deconvolution to tighten up the stars and sharpen small-scale features. CCDStack 2 has an excellent deconvolution routine called Positive Constraint that, when applied moderately, does a great job without introducing unwanted artifacts such as dark halos around stars. Select Process/Deconvolve. A new window opens, and a number of stars will appear with yellow + symbols over them. These are stars the program has selected to measure their point-spread function (PSF) to determine the strength of the deconvolution algorithm. You can also double-click on any stars you want the program to include in its calculations. Choose stars that are not saturated and are well defined (i.e. not embedded in nebulosity or within a visible galaxy). Next, select Positive Constraint at the bottom of the window, and set the number of iterations (I often use 30 to 50). Now click the “Deconvolve” button, and in a few minutes the process is complete; save the resulting FITS file. You can apply the same DDP settings to the deconvolved image as you did to the original by switching to the unprocessed version and clicking on “Apply to all” in the Display Manager window. Save the deconvolved version as a scaled 16-bit TIFF to be combined with the color image later in Photoshop.
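
Positive Constraint is proprietary to CCDStack, but Richardson-Lucy deconvolution (the same family of technique mentioned with ImagesPlus earlier on this page) is freely available and makes a reasonable stand-in for experimentation. A sketch with scikit-image; the file name and the Gaussian PSF width are assumptions, the latter something you would normally fit from unsaturated stars:

```python
import numpy as np
from PIL import Image
from scipy.ndimage import gaussian_filter
from skimage.restoration import richardson_lucy

def gaussian_psf(size=15, sigma=2.0):
    """Synthetic point-spread function; sigma would be measured from stars."""
    psf = np.zeros((size, size))
    psf[size // 2, size // 2] = 1.0
    psf = gaussian_filter(psf, sigma)
    return psf / psf.sum()

# Master luminance scaled to 0..1 floats (hypothetical file name).
lum = np.asarray(Image.open("master_luminance.tif"), dtype=np.float64) / 65535.0

# (The keyword is `iterations` in scikit-image releases before 0.19.)
sharp = richardson_lucy(lum, gaussian_psf(), num_iter=30)
```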

Color Combine

Finally, let’s combine our red, green, and blue files into an RGB image. To accomplish this best, you first need to know the correct RGB ratios for your particular CCD camera, filters, and the sky conditions when the images were recorded. Although there are several ways to measure these values once for your system, each data set also requires adjustments for the atmospheric extinction caused by the target’s altitude when each series of color sub-exposures was taken. I prefer the free software eXcalibrator (http://bf-astro.com/excalibrator/excalibrator.htm) for determining an accurate color balance (see www.skypub.com/excalibrator). However, a simple method to get you started with approximate color balance in CCDStack 2 is to normalize your red, green, and blue files to one another, and then combine the images at a 1:1:1 ratio. As described earlier, select a neutral background area, then the highlights. After normalization, select Color/Create from the pull-down menu. The Create Color Image window opens, where you can assign your filtered images to their respective channels. You can also incorporate your master luminance image here if desired, though make sure not to include the stretched luminance image. Click the “Create” button, and in a moment your combined color image will appear.

Immediately a small window called Set Background appears with your color file. If your image requires additional color adjustment, simply drag a box around a neutral background area and click “OK.” You can perform additional background and highlight corrections using the Color/Adjust command in the pull-down menu.

Subtle color variations and wispy details in targets such as the reflection/emission nebulae NGC 1973, 1976, and 1977 are easy to preserve and enhance using the tools found in CCDStack 2.
Bob Fera

When you’re happy with the overall color image, you can stretch the result using the DDP slider and save it for further adjustments in Photoshop, where the stretched luminance image can be incorporated.

Performing these steps correctly provides a solid foundation upon which you can build and modify once you become familiar with all the tools available in CCDStack 2. Using the software’s sigma-based data-rejection algorithms, live DDP, and a mild application of Positive Constraint deconvolution will give you a head start on your way to producing images that may one day appear in Sky & Telescope.

Bob Fera shoots the night sky from his backyard observatory under the dark skies of Northern California.


How Image Stacking Works

Image stacking is a popular method of image processing amongst astrophotographers, although the exact same technique can be applied to any situation where identical images can be captured over a period of time; in other words, situations where the scene isn't changing due to motion or varying light and shadow. Astrophotography happens to be perfectly suited to this, in that astronomical objects are effectively static for reasonable durations of time. In the case of deep sky objects, the objects are virtually permanent. In the case of planetary images, they change slowly enough that a series of images spanning at least a few minutes can be acquired without observable motion.

The first time I witnessed the effects of image stacking, I was completely blown away by the result. It seems almost magical that so much real information can be gleaned from such horrible original images. But of course the real explanation is quite simple to understand.

Image stacking does two very different things at once. It increases the signal-to-noise ratio and increases the dynamic range. I will discuss each of these separately.

One point of confusion that should be resolved early on is whether there is a difference between averaging and summing. Since this remains an issue of contention, I can only claim that my explanation makes sense; if one doesn't follow my explanation, then one might disagree with me. The short answer is that they are identical. It doesn't make any difference whether you stack into a sum or an average. This claim assumes that an average is represented using floating point values, however. If you average into integer values then you have thrown away a lot of detailed information. More precisely, I maintain that there is a continuous range of representations of a stack varying between a sum and an average, which simply consists of dividing the sum by any number between one and the number of images stacked. In this manner, it is obvious that summing and averaging are identical and contain the same fundamental information.

Now, in order to actually view a stack, the values must somehow be transformed into integer components of an image's brightness at each pixel. This is no easier or harder with a sum than with an average, as neither properly fits the requirements of standard image representations: the sum contains values far above the maximum that can be represented, and the average contains floating-point values which cannot be interpreted as image pixels without conversion to integers first. The solution in both cases is the exact same mathematical operation. Simply find the divisor needed to represent the brightest pixel in the stack without saturating, then divide all pixels in the image by that divisor and convert the results to integers. Again, since the transformation is identical in both cases, clearly both forms contain the same information.
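A quick numpy sketch makes the equivalence tangible. The frame values here are synthetic, and the 16-bit output range is an arbitrary choice for the example:

    import numpy as np

    rng = np.random.default_rng(0)
    frames = rng.normal(loc=1000.0, scale=50.0, size=(32, 256, 256))  # 32 fake frames

    total = frames.sum(axis=0)       # the "sum" stack
    average = frames.mean(axis=0)    # the "average" stack (floating point)

    # The two differ only by a constant divisor, so they carry the same information:
    assert np.allclose(total / len(frames), average)

    # To display either one, find the divisor that maps the brightest pixel to the
    # top of the output range, then convert to integers -- the identical operation
    # for both representations.
    def to_uint16(stack):
        divisor = stack.max() / 65535.0
        return (stack / divisor).astype(np.uint16)

    assert np.array_equal(to_uint16(total), to_uint16(average))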

The only reason I harp on this so much is that it must be properly understood before one can really comprehend what stacking is doing, which is actually extremely simple once you get down to it.

The classic application of image stacking is to increase the signal-to-noise ratio (SNR). This sounds technical at first, but it is really simple to understand. Let's look at it in parts and then see how the whole thing works.

The first thing you must realize is that this is a pixel-by-pixel operation: each pixel is operated on completely independently of all other pixels. For this reason, the simplest way to understand what is going on is to imagine that your image is only a single pixel wide and tall. I realize this is strange, but bear with me. So your image is a single pixel. What is that pixel in each of your raw frames? It is the "signal", real photons that entered the telescope and accumulated in the CCD sensor of the camera, plus the thermal noise of the CCD and the bias, along with any flatfield effects, plus some random noise thrown in for good measure. It is this last element of noise that we are concerned with. The other factors are best handled through operations such as darkframe subtraction and flatfield division. However, even after performing such operations on a raw frame, we still don't have a beautiful image, at least compared to what can be produced by stacking. Why is this?

The problem is that last element of random noise. Imagine the following experiment: pick random numbers (positive and negative) from a Gaussian distribution centered at zero. Because the distribution is Gaussian, the most likely value is exactly zero, but on each trial (one number picked) you will virtually never get an actual zero. However, what happens if you take a whole lot of random numbers and average them? Clearly, the average of your numbers approaches zero more and more closely the more numbers you pick. This occurs for two reasons. First, since the Gaussian is symmetrical and centered at zero, you have a one-in-two chance of picking a positive or negative number on each trial. On top of that, you have a greater chance of picking numbers with a low absolute value, due to the shape of the Gaussian. Together, these two effects mean that the average of a series of randomly chosen numbers (from this distribution) converges asymptotically toward zero (without ever truly reaching zero, of course).

Now imagine that this Gaussian distribution of random numbers represents noise in your pixel sample. If you are also gathering real light at the same time as the noise, then the center of the Gaussian won't be zero. It will be the true value of the object you are imaging. In other words, the value you record with the CCD in a single image equals the true desired value plus some random Gaussian-chosen value, which might make the recorded value less than the true value or might make it greater than this value.

But we just established that repeated samples of the noise average toward zero. So what stacking really does is repeatedly sample the value in question. The true value never actually changes, in that the number of photons arriving from the object is relatively constant from one image to the next. Meanwhile, the noise component converges on zero, which allows the stacked value to approach the true value over a series of stacked samples.
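A simulation makes the convergence easy to see. This hypothetical numpy sketch (the signal level and noise width are invented) averages ever more synthetic frames of a constant signal plus Gaussian noise; the residual noise shrinks roughly as one over the square root of the number of frames:

    import numpy as np

    rng = np.random.default_rng(1)
    true_value = 100.0      # photons per exposure from the object
    sigma = 20.0            # per-frame random noise

    for n in (1, 4, 16, 64, 256):
        frames = true_value + rng.normal(0.0, sigma, size=(n, 10000))
        stacked = frames.mean(axis=0)   # average n frames, pixel by pixel
        print(f"{n:4d} frames: residual noise = {stacked.std():6.2f} "
              f"(expected {sigma / np.sqrt(n):6.2f})")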

That's it as far as the snr issue is concerned. It's pretty simple isn't it.

Another task that stacking accomplishes, which is not touted much in the literature but which is of great importance to deep-sky astrophotographers, is increasing the dynamic range of the image. Of course, this can only be understood if you already understand what dynamic range is in the first place. Simply put, dynamic range represents the difference between the brightest possible recordable value and the dimmest possible recordable value. Values greater than the brightest possible value saturate (and are therefore clipped to the brightest recordable value instead of their actual value), while values dimmer than the dimmest possible value simply drop off the bottom and are recorded as 0.

First, understand how this works in a single raw frame captured with a CCD sensor. CCDs have an inherent sensitivity: light that is too dim for that sensitivity simply isn't recorded at all. This is the lower bound, the dimmest possible value that can be recorded. The simplest solution to this problem is to expose for a longer period of time, to raise the light value above the dimmest recordable value so it will in fact be recorded.

However, as the exposure time is increased, the value of the brightest parts of an image increases along with the value of the dimmest parts of the image. At the point where parts of the image saturate, and are recorded as the brightest possible value instead of their true (brighter) value, the recording is overloaded and crucial information is lost.

Now you can understand what dynamic range means in a CCD sensor and a single image. Certain objects will have a range of brightness that exceeds the range of brightness that can be recorded by the CCD. The range of brightness of the object is its actual dynamic range, while the range of recordable brightness in the CCD is the CCD's recordable dynamic range.

Notice that there is no one perfect exposure time for an object: it depends on whether you are willing to lose the dim parts to prevent saturation of the bright parts, or willing to saturate the bright parts to capture the dim parts. Stacking only aids this problem to a limited degree, as described below. Once the limits of stacking have been reached, more complicated approaches must be used, such as exposure blending, in which a short-exposure stack is blended with a long-exposure stack so that each stack contributes only the areas of the image in which it has useful information.

CCDs are analog devices (or digital only at the scale of photons in the CCD wells and electrons in the wires carrying signals from the CCD to the computer). Analog devices send their signals through analog-to-digital converters (A/D converters) before passing the digital information to the computer. This is convenient for computers, but it introduces an arbitrary point of dynamic-range constraint into the imaging device that theoretically doesn't need to be there. An analog device would theoretically have great dynamic range, but suffers from serious noise problems (this is why digital long-distance and cellular phones sound better than analog ones). The question is, how does the A/D converter affect the dynamic range? In other words, since all we care about is the end product, what exactly is the dynamic range of the image coming out of the A/D converter? The answer is that different cameras produce different numbers of digital bits. Webcams usually produce 8 bits, while professional cameras usually produce twelve to sixteen bits.

This means that professional cameras have sixteen to 256 times more digitized values with which to represent brightnesses compared to a webcam. As you crank up the exposure time to get the dim parts of an object within the recordable range, you therefore have more room left at the top of your range to accommodate the brightest parts of the object before they saturate.

So what does stacking do? The short answer is that it increases the number of possible digitized values linearly with the number of images stacked. So you take a bunch of images that are carefully exposed so as not to saturate the brightest parts. This means you honestly risk losing the dimmest parts. However, when you perform the stack, the dimmest parts accumulate into higher values that escape the floor of the dynamic range, while the dynamic range itself increases as the brightest parts get brighter and brighter with each image added to the stack. It is as if the maximum possible brightness value keeps increasing just enough to stay ahead of the increasing brightness of the stacked values of the brightest pixels, if that makes sense.

In this way, the stacked image contains both dim and bright parts of an image without losing the dim parts off the bottom or the bright parts off the top.
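The arithmetic is worth seeing once. In this worked example (the frame count and bit depth are arbitrary choices), summing 64 eight-bit frames raises the ceiling from 255 to 16320, which takes 14 bits to represent, a gain of 6 bits of headroom over a single frame:

    import numpy as np

    frames = 64                     # number of 8-bit frames in the stack
    per_frame_max = 255             # brightest representable value in one frame

    # Summing 64 8-bit frames yields values up to 64 * 255 = 16320, which needs
    # 8 + log2(64) = 14 bits -- the "ceiling" rises with every frame added.
    stack_max = frames * per_frame_max
    bits_needed = int(np.ceil(np.log2(stack_max + 1)))
    print(stack_max, bits_needed)   # 16320 14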

Now, it should be immediately obvious that something is slightly wrong here. If the raw frames were exposed with a short enough time period that the dim parts weren't gathered at all, because they were floored to zero, then how were they accumulated in the stack? In truth, if the value in a particular raw frame falls to zero, it contributes nothing to the stack. However, imagine that the true value of a dim pixel is somewhere between zero and one. The digitization of the A/D converter will turn that value into a zero, right? Not necessarily. Remember, there is noise to contend with. The noise is helpful here, in that the recorded value of such a pixel will sometimes be zero and sometimes be one, and occasionally even two or three. This is true of a truly black pixel with no real light, of course, but in the case of a dim pixel the center of the Gaussian will be between zero and one, not actually zero. When you stack a series of samples of this pixel, some signal will actually accumulate, and the value will bump up above the floor value of the stacked image (which is simply one, of course).

Interestingly, it is easy to tell which parts of an image have a true value lower than one in each raw frame. If the summed value of a pixel is less than the number of images in the stack (or, equivalently, if the averaged value is a floating-point value below one), then the true value must be below one in the raw frames, because some of the raw frames must have contributed a zero to the stack. (This ignores the noise at play here as well, which means a pixel with a true value of 1.5 might still get a zero from some raw frames; but its stacked value should, in theory, end up greater than one in the averaged stack.)
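Here is a hypothetical simulation of that dithering effect. The pixel values, noise level, and frame count are invented; note that the zero floor clips away negative noise excursions and biases both measurements upward, but the sub-unit signal still shows up as a clear difference:

    import numpy as np

    rng = np.random.default_rng(3)
    sigma, n_frames = 1.0, 100_000

    def stacked_mean(true_value):
        """Digitize (signal + noise) per frame like an A/D converter with a
        floor at zero, then average the stack."""
        raw = np.clip(np.round(true_value + rng.normal(0.0, sigma, n_frames)), 0, None)
        return raw.mean()

    # A truly black pixel and a dim sub-unit pixel both stack above zero (the
    # zero floor removes negative noise, biasing both upward), but the dim
    # pixel stacks measurably higher: its real signal survives digitization.
    print("black pixel (true 0.0):", stacked_mean(0.0))   # ~0.38 with this noise
    print("dim pixel   (true 0.4):", stacked_mean(0.4))   # ~0.61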

There is another factor at play here too. The Gaussian distribution has about the same shape (variance, or standard deviation) regardless of the brightness of the actual pixel, which means the noise component is relatively much more severe for dim pixels than for bright pixels. Therefore, stacking allows you to bring the value of dim pixels up into a range where they won't be drowned out by the noise, while at the same time decreasing the noise anyway, as per the description in the first half of this article. This is another crucial aspect of how stacking allows dim parts of an image to become apparent. It is for this same reason that, in each raw frame, the bright parts, although noisy, are quite discernible in their basic structure, while the dim parts can appear virtually incomprehensible.


Deep Sky Stacking Programs for Digital SLR Cameras

A common approach to astrophotography has become the use of digital SLR cameras (DSLRs). These are relatively cheap, can be used for both astronomy and ordinary terrestrial photography, and produce surprisingly good astronomy images, so they have become quite popular.

There are a few basic steps required for getting started in DSLR astrophotography. I would summarise them as:
1. Buy a camera
2. Buy a tripod, telescope or other tracking platform
3. Acquire a piece of software to help take long exposure photographs
4. Acquire a piece of software to process (including stack) the photographs you take.

From the above, the question often arises of which piece of software to use for stacking and processing the images you take with your camera. Often, too, people don’t realise that there is software available to make this easy. So here I am going to list a few options, hopefully making it easier for anyone who finds this page.

If you know of programs which are suitable for DSLR astrophotography image processing that are not on this list please let me know, also let me know if information here needs updating. Thank you.

Software suitable for stacking and/or processing astrophotography DSLR images:

Deep Sky Stacker

This is a free and very capable piece of software for aligning, combining and performing post-processing of astrophotographs from digital SLR cameras. It is amazingly capable for something that costs nothing.

This software will read a wide variety of file formats, including Canon RAW, and process them. I have had some issues getting good colour balance post-stacking when processing Canon RAW files, so I often choose to first convert the RAW files to TIF before processing. This may simply be a lack of experience on my part, as I do not use this software often.

The registering capabilities of Deep Sky Stacker are very good but do not match those of RegiStar or PixInsight when it comes to getting a good alignment of frames. I often find cases where DSS will not correctly align frames whereas RegiStar and PixInsight will.

I don’t much like the post-processing capabilities of Deep Sky Stacker, so I tend to finish my use of DSS at the point where it has stacked the “Autosave.tif”, and take that file into Photoshop from there to perform post-processing.

Deep Sky Stacker’s biggest advantages are probably its ease of use (a very intuitive interface) and its flexibility, supporting all major file formats and handling various scenarios covering most astrophotography needs.

Starry Landscape Stacker

This is an Apple/Mac program and a great option for those who do not use Windows. It is effectively a good alternative to Deep Sky Stacker for those who use Macs.

PixInsight

PixInsight is an advanced astrophotography image processing package. I now have some experience using PixInsight for processing CCD images from an SBIG ST8-XME camera and RAW CR2 files from a Canon 6D DSLR, and can certainly see the potential of the software.

If you want a one-stop shop for astrophotography image processing and you are happy to spend the $250 on PixInsight, there’s a very good chance you will need none of the other pieces of software listed on this page. Having said that, you will be in for a steep learning curve.

PixInsight operates in a very different way to other software; they even seem to put buttons on dialogue boxes the opposite way around to what is most common, just to confuse the user. The different processing model and user interface make the learning curve very steep and troubling at first. There are video tutorials online which are almost essential for getting an understanding of how to use the software before you lose your hair trying, but once conquered it proves to be very powerful. It took me a few attempts, coming back to PixInsight over a few months, before I became familiar enough with it, and stopped hitting brick walls, to be able to process FIT and DSLR images with some confidence.

Functions such as applying a LinearFit across LRGB frames, and the Dynamic Background Extraction function on any image to flatten image backgrounds are particularly useful and relatively easy to use once you understand the basics of the PixInsight user interface.

Where other processing software (such as DSS, RegiStar and Photoshop) has failed to produce a good result from DSLR images, PixInsight has excelled and brought out more detail than I realised existed in the raw data.

To my knowledge, there is no doubt that PixInsight is the most advanced piece of software for stacking deep-sky astrophotography images. Its set of processes and plugins is both extensive and powerful. The catch is only in its usability, and how patient you must be to work through its steep learning curve to achieve good results.

I would suggest that if you are going to use PixInsight, you start with DSS and learn the basics of astrophotography image processing before you begin the daunting process of understanding PixInsight. Also, if you have good-quality, easy-to-align images, you will likely get a very good result from DSS in a much shorter time frame than with PixInsight, which requires you to perform more steps.

If you want to process DSLR images with PixInsight you will need a beefy machine to run it on. It will easily consume all 16 gigabytes of RAM on my Core i7 64-bit Windows machine when processing a stack of 20 DSLR images. Programs such as RegiStar work in a significantly smaller footprint.

PixInsight is available as a 45-day free trial.

StarStaX

StarStaX is a multi-platform image stacking software package. From the website (https://www.markus-enzweiler.de/StarStaX/StarStaX.html):

StarStaX is a fast multi-platform image stacking and blending software package which allows you to merge a series of photos into a single image using different blending modes. It is developed primarily for star trail photography, where the relative motion of the stars in consecutive images creates structures looking like star trails. Besides star trails, it can be of great use in more general image blending tasks, such as noise reduction or synthetic exposure enlargement.

StarStaX has advanced features such as interactive gap-filling and can create an image sequence of the blending process which can easily be converted into great looking time-lapse videos.

StarStaX is currently under development. The current version 0.70 was released on December 16, 2014. StarStaX is available as a free download for Mac OS X, Windows and Linux.


CCDStack

CCDStack is one of a suite of products made by CCDWare aimed at advanced telescope use.

I have used CCDStack a reasonable amount now for processing images from my ST8-XME astronomy camera, and find it very usable and relatively powerful. I like features such as being able to see, very quickly and easily, what data is being rejected by a sigma function on light frames, compared to PixInsight, which shows you no preview before processing the full stack. This makes it very easy to tweak stacking parameters for a good result and to apply different filtering to individual frames (such as applying harsher exclusion to a frame when a satellite passes through it).

CCDStack will, in only a handful of steps, register your frames, normalise them (apply weighting), apply data rejection, and combine the frames into a stack using weighting determined by the normalisation.
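For readers who want to see what that kind of sigma-based rejection amounts to, here is a minimal numpy sketch of the general kappa-sigma idea. It is not CCDStack’s actual algorithm, and the function and parameter names are invented:

    import numpy as np

    def sigma_clip_combine(frames, kappa=2.5, iterations=3):
        """Average a stack while rejecting outliers (satellites, cosmic rays).

        frames: array of shape (n_frames, height, width).
        Pixels further than `kappa` standard deviations from the per-pixel
        mean are masked out and the mean is recomputed.
        """
        data = np.ma.masked_invalid(np.asarray(frames, dtype=float))
        for _ in range(iterations):
            mean = data.mean(axis=0)
            std = data.std(axis=0)
            outliers = np.ma.filled(np.abs(data - mean) > kappa * std, False)
            data = np.ma.masked_where(outliers, data)   # keeps earlier masks too
        return data.mean(axis=0).filled(np.nan)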

I found CCDStack to be a good and logical step up from CCDSoft. It is usable and has intuitive, useful functionality. The program also seems relatively lightweight, working efficiently with a large number of files.

I have not tried CCDStack with DSLR images. It apparently opens CR2 RAW files (amongst other formats); however, in my quick attempt it did not open CR2 files from my Canon 6D (I’m unsure why).

Astro Pixel Processor

Astro Pixel Processor is a complete image processing software package: https://www.astropixelprocessor.com/

TBA on details – I’m still testing this one!

MaximDL

I primarily use MaximDL for image reduction, as its image reduction process is very painless. Provide it with a directory of all your reduction .FIT files and it will nicely sort them into a database of reduction groups to be applied to any image you open. Open the .FIT needing to be calibrated/reduced and it will apply the appropriate reduction frames without you having to choose reduction files of the correct temperature, binning, etc. This is significantly easier than any of the other packages, which all require more manual work with reduction frames. The benefits of MaximDL’s reduction-frame handling for .FIT files may or may not carry over to DSLR raw files; I have not tried reduction of DSLR images in Maxim.

MaximDL’s stacking seems fair; however, I haven’t had need to use it for alignment and stacking. I also haven’t tried MaximDL with large images such as DSLR frames, the largest I typically use in Maxim being those from my SBIG ST8-XME.

RegiStar

This is a fantastic piece of software for aligning and combining individual astrophotographs from digital SLR cameras. It works very efficiently with large files, is amazingly capable at aligning photographs, and has quite good stacking algorithms built in as a bonus.

This software is primarily intended simply for the registering (aligning) of frames so that they can be combined. It is so good that you can combine old film images with new digital images, or digital images from different cameras with different focal lengths, and all sorts. It also easily handles field rotation (fixed tripod shots are OK) and pretty much any other distortion.

The problems I have with this software are that it does not read Canon RAW files, so conversion to some other format such as TIF is required first; that it does not handle reduction of the images, which leaves you needing another piece of software (like Photoshop) to do that manually first; and that when combining frames into a stack it does not provide any weighting of frames or sigma exclusion of noise. This leaves RegiStar primarily useful for registering frames and saving the registered frames, not stacking them.

RegiStar’s excellence at registering frames comes at a price, and in this case that’s about US$159.

The version of RegiStar that I am familiar with is 1.0, and it hasn’t been updated for some time (2004). This means it’s not up to date with current file types (RAW), but that doesn’t detract from its excellent ability to align TIF images. Increasingly, as time ticks on and no further updates are published, you would be wise to consider an alternative piece of software which is updated more regularly, such as PixInsight.

ImagePlus

I cannot say much about ImagePlus as I have not used it for DSLR image processing. However, many people do, and it comes highly recommended. You can find plenty of information about it around the web.


What is stacking in astrophotography?

Unwanted noise in a typical image tends to be random across different exposures whereas the desired signal is consistent.

When a set of images is stacked, the individual image values are averaged, which means that the random noise overall diminishes but the signal remains constant.

This means that the ratio of the signal to the noise increases, resulting in a much cleaner, more detailed image with a smoother background.

What is dynamic range?

As well as striving to increase the SNR of their images, astro imagers also aim for a wide dynamic range.

Dynamic range is the spread of brightness levels from the dimmest recorded light value that can be captured to just before pixels become saturated.

Objects with a wide dynamic range include the Andromeda Galaxy and the Orion Nebula, with their intensely bright cores and much fainter outer regions.

A single image of these could easily reach saturation on the brightest areas before the dimmer details have registered at all.

But when you stack several unsaturated images together, the dimmer values accumulate into higher values, bringing fainter objects over the bottom limit of the dynamic range (in other words, you can start to see them), while at the same time the brighter values increase as well.

Stacked images, therefore, display a wider dynamic range.

To take advantage of this seemingly win-win process, a few additional steps need to be carried out.

Noise isn’t limited to the quality of the signal received by the sensor.

There are unwanted signals generated by the camera’s sensor itself: thermal noise as the sensor warms up during long exposures, variations in pixel-to-pixel sensitivity, shadows caused by dust particles, and vignetting of the light cone.

This additional degradation of the image is tackled by a process called calibration, which involves capturing extra one-off frames that are included in the stacking process to ‘subtract’ noise.

A useful piece of jargon to know at this point is that all the individual shots of your target image are called light frames when it comes to the calibration process.
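As a rough picture of what calibration software does with those frames, here is a hedged numpy sketch. It assumes one common convention (a master dark exposed like the lights, so it carries the bias signal too), and all the names are illustrative rather than any particular package’s API:

    import numpy as np

    def calibrate(light, master_dark, master_flat, master_bias):
        """Remove additive camera signal, then correct multiplicative effects."""
        dark_corrected = light.astype(float) - master_dark   # thermal signal + bias
        flat = master_flat.astype(float) - master_bias       # flat keeps its own bias
        flat_normalized = flat / flat.mean()                 # unity-gain flat field
        return dark_corrected / flat_normalized              # vignetting, dust, pixel gain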

Once the images have been calibrated, they need to be aligned with one another before the stacking.

The calibration, alignment and final stacking processes can be easily carried out using specialist astronomy-based image processing software.
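Under the hood, the alignment step amounts to estimating each frame’s offset against a reference and shifting it back. Here is a minimal translation-only sketch using scikit-image’s phase cross-correlation; handling rotation and field distortion is what the specialist packages add on top:

    import numpy as np
    from scipy.ndimage import shift as subpixel_shift
    from skimage.registration import phase_cross_correlation

    def align_and_stack(frames):
        """Align every frame to the first by pure translation, then average.

        Enough for well-tracked subs; rotated or distorted fields need the
        fuller registration that dedicated software provides.
        """
        reference = frames[0]
        aligned = [reference.astype(float)]
        for frame in frames[1:]:
            offset, _, _ = phase_cross_correlation(reference, frame,
                                                   upsample_factor=10)
            aligned.append(subpixel_shift(frame.astype(float), offset))
        return np.mean(aligned, axis=0)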

DeepSkyStacker is an excellent free program, but other commercial image processors like Astroart, Astro Pixel Processor, MaxIm DL, Nebulosity and PixInsight are worth considering.

As ever, your best bet is to start small, experimenting with a few frames on easy objects, and work up from there, because mastering stacking is a key skill when it comes to truly awesome deep-sky images.


Now that you have an overall idea of what the visualization process involves, you will learn about the technical details of each stage of this process.

Visualization Stages - Technical Notes

Stage 1: The stretch:

Software and comments:

  • IDL: costs money, and you have to do your own coding.
  • RGBSUN in IRAF: requires trial and error for the thresholds, and you can only combine 3 filters.
  • kvis: free in the Karma suite (http://www.atnf.csiro.au/computing/software/karma/). It lets you select thresholds in real time via histograms. As well as linear and log scaling, it does square root, which is good for nebulae. It has additional algorithms in its pseudo-colour option (the best is greyscale3). It also exports your scaled image to Portable Pixel Map format, which is accepted by many packages.

In Karma, using kvis, you can reduce an image by loading a FITS file with the filter option turned on. Select the number of pixels to "skip" (which actually means "add together"). Adjust your image and then export it to a new PPM image. If you are using another package for stretching the intensities, a good format to save the file in is TIFF, if it is available.
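That "skip" reduction is just block summing. A small numpy sketch of the same operation (the function name is mine):

    import numpy as np

    def bin_pixels(image, factor):
        """Reduce an image by summing factor x factor blocks of pixels,
        like kvis's "skip" option (which adds the skipped pixels together)."""
        h, w = image.shape
        h, w = h - h % factor, w - w % factor   # trim edges that don't divide evenly
        blocks = image[:h, :w].reshape(h // factor, factor, w // factor, factor)
        return blocks.sum(axis=(1, 3))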

[Figure: the original image alongside the stretched image]

Stage 2: Layering and Assigning Colours:

  • Layers allow one to assign any colour, not just a primary colour, to an image. (See an example at http://www.ras.ucalgary.ca/CGPS/press/shell/)
  • In order to see all the images in your stack of layers, "screen" is a good mode (the formula behind it is sketched after these steps). This mode is set, using a menu on the Layer dialog box, for each individual layer. A layer with this mode is like a transparency or positive slide, allowing light from the image below it to shine through.
  • Each layer is individually adjusted, so a noisy image, say from one filter's dataset, can be suppressed without affecting the other images. To suppress noise you can apply a Gaussian filter to the pixels, smoothing them out.
  • Click your right mouse button in the image window in order to access the task menus.
  First, build the layered image:
  1. In Gimp, open an image.
  2. Then open up a new image with the background set to black by selecting the foreground or background as appropriate. Ensure the Layers, Channels, Paths dialog box is open.
  3. On your original image, right clicking to get the menu options, find Edit --> Copy visible. On the new image, Edit --> Paste. This will put the B&W image into a layer.
  4. In the Layers dialog box, click on the words "floating section" and give the layer a name. This will also change the layer into something you can edit.
  5. Set the mode algorithm in the Layers, Channels, Paths dialog box. IMPORTANT: in order to see each of the layers, not just the top one, you need to select a relevant mode. "Screen" is a good mode for combining images; it must be set on each of the layers (or you won't see the one underneath).
  6. Repeat for other images.
  7. Save this layered image as an ".xcf" format file.
  Then assign a colour to each layer:
  1. Click on the name of a layer in the Layers, Channels, Paths dialog box so that it turns blue, which means it is active.
  2. Go get the levels tool. [Image --> Colors --> Levels tool]
  3. Change the top menu in the Levels tool from value to a colour and adjust the OUTPUT levels to get your requested colour. (For example, if you want your layer to be green, change the menu to Red and drag the right slider in the output levels to zero; repeat this for Blue. Your layer should now be green.)
  4. Repeat for each layer.
  5. Save this as a .xcf format file.
  6. Adjust values and colours until you are happy with the results. For example, if one filter image is particularly noisy (textured), this can be reduced by applying to that layer a gaussian filter with the width of the noise (e.g. a few pixels).
  7. Save changes as a .xcf format file.

Some advice: make copies of your B&W layers and work on those, so you don't have to insert the images again. Sometimes turn off the other layers (by clicking on the eye icon) to check your colours.

This is very iterative. Enjoy!
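For the curious, "screen" mode has a simple formula. This sketch assumes layer values scaled to the range 0 to 1:

    import numpy as np

    def screen(lower, upper):
        """Gimp's "screen" mode for values in [0, 1]: like projecting two
        positive slides onto the same screen, so light from the lower layer
        shows through the upper one."""
        return 1.0 - (1.0 - lower) * (1.0 - upper)

    # Combining three coloured layers, e.g. (h, w, 3) arrays:
    # composite = screen(screen(red_layer, green_layer), blue_layer)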

Other options: some imagemakers work on one RGB image (i.e. 3 filters) and then layer in other filters at this stage. For an example, http://heritage.stsci.edu/public/apr1/h301filt.html shows an RGB image from 3 filters with other filters layered on top.

[Table: for each filter (Ultraviolet, Blue, Visual, Infrared), the black-and-white stretched image and the colour assigned to it]

Stage 3: Combine the Layers

After you are satisfied in general with your colour selection, and have saved it as an .xcf file, flatten the image, using the Layers dialog box options, into a single file saved under a different name in TIFF format.

Even better, open a new image (with a black background), Edit --> Copy Visible the display of your .xcf file, and then Edit --> Paste into the new image. Set the mode to screen, flatten the new image, and save it as a single TIFF file. To flatten, use the submenus under Layers: Layers --> Merge Visible and then Layers --> Flatten.

Stage 4: Remove the Cosmetic Defects.

Use the image manipulation tool options (like Levels) for final colour and contrast adjustments. Use the clone tool to remove chip seams and cosmic rays. Choose your orientation.

Save this file as a TIFF (no compression) or, in a pinch, a 100% quality JPEG.


Case study 2 - M63 image processing in MIPAV

M63, also known as the Sunflower Galaxy, was discovered by Pierre Méchain on June 14, 1779. The galaxy (at that time called a nebula) was then listed by Charles Messier as object 63 in his catalog. Later in the 19th century, William Parsons, 3rd Earl of Rosse, identified spiral structures within the galaxy, making the Sunflower Galaxy one of the first galaxies in which spiral structure was identified. In 1971, a supernova with a magnitude of 11.8 appeared in one of the arms of M63.

Image processing log for M63

For the M63 data reduction I used the same set of 11 steps that I used for M51.

Noise reduction

For this dataset I applied the following noise reduction algorithms: 1. First, I applied an unsharp mask to all M63 science images (R, V, and B) with the following parameters: scale of Gaussian 0.5 in both the x and y dimensions, weight of blurred image = 0.75 (a generic unsharp-mask sketch follows the parameter list below). 2. Then I applied a nonlinear noise reduction filter to all science images coming from the previous step, using the following parameters:

  • For R images - brightness_threshold less than 0.172237074375152 (image min) and Gaussian_std_dev float 0.5
  • For V images - brightness_threshold double less than 0.4902107357978821 (image min) and Gaussian_std_dev float 0.5
  • For B images - brightness_threshold double less than 0.13539797365665437 (image min) and Gaussian_std_dev float 0.5.
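For orientation, here is a generic unsharp-mask sketch using the parameters quoted above. MIPAV's exact formula may differ, so treat this as an assumption-laden illustration rather than a reproduction of its algorithm:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def unsharp_mask(image, sigma=0.5, weight=0.75):
        """Subtract a weighted blurred copy, then rescale. sigma=0.5 and
        weight=0.75 mirror the parameters quoted above; the exact MIPAV
        formula is an assumption here."""
        blurred = gaussian_filter(image.astype(float), sigma)
        return (image - weight * blurred) / (1.0 - weight)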

I used nonlinear noise reduction because this algorithm uses nonlinear filtering to reduce noise in an image while preserving the underlying structure, edges, and corners. I found it to work really well for preserving the beautiful spiral structure of M63.