Added ZWO settings dialog?

I totally agree on that and it applies to CMOS sensors too. They are both linear devices, so their output is directly proportional to light flux, which in turn depends on the filter’s bandwidth and the relative abundance of hydrogen, oxygen and sulfur in the nebula (typically Ha >> OIII, SII).

There is a universally accepted definition of dynamic range in the scientific community, which is the one @jon.rista reported above: DR = FWC/RN.
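To put purely illustrative numbers on that definition (these are made up, not from any specific camera): a sensor with a 50,000 e- full well capacity and 2.5 e- read noise would have DR = 50000 / 2.5 = 20000:1, which is log2(20000) ≈ 14.3 stops.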

That’s not likely to happen, because there are so many parameters involved in an exposure: the surface magnitude of your target (or better, the monochrome flux for each wavelength that will be part of your picture), the surface magnitude of your sky (light pollution), the bandwidth of any filter you might use, the diameter, focal length and optical efficiency of your scope, the size of the sensor and its pixels, the quantum efficiency of the sensor, and the exposure time. Those are the physical quantities required to evaluate how many electrons you will get per pixel. Most of them cannot be evaluated by the manufacturers, as they depend on your specific configuration.

On the other hand, as per my previous post, chances are that you’ll only need one or two different gains, depending on whether you’re shooting wideband and/or narrowband (you don’t need different gains for different types of objects, i.e. galaxies vs nebulae). Then you simply set the appropriate exposure time for your target and for your sky. Offset can be established once and for all (for each gain value you will use) by simply choosing the lowest value that keeps the histogram detached from the left-hand side (no pixels clipped to zero). The good news is that some manufacturers provide an automatic offset setting.
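If you’d rather verify the offset on an actual frame than eyeball the histogram, a quick check like the rough Python sketch below will do (the filename is just a placeholder, and it assumes numpy and astropy are installed):

```python
import numpy as np
from astropy.io import fits

# Load a bias or short dark taken at the offset you want to test (placeholder filename).
data = fits.getdata("bias_or_dark_sub.fits")

clipped = np.count_nonzero(data == 0)
print(f"{clipped} pixels clipped to zero; minimum ADU = {data.min()}")
# If clipped > 0, raise the offset; otherwise the current offset is already high enough.
```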

I believe you’ll have an easier time with a CMOS astrocamera if you think of it like a DSLR. On a DSLR you just choose an ISO setting and then an exposure time. With a CMOS camera you’re doing the same: choose a gain that fits your purposes and then the appropriate exposure time. Regarding the gain, you don’t need to be very precise. I mean that, with a camera offering a gain from 0 to 560, there will be no practical difference if you choose 60 instead of 59.

If you read my whole post, you will see that I said the increase in dynamic range could also be termed an increase in effective bit depth or an increase in tonal range.

Regardless, as you stack, averaging of the signal in each frame results in a reduction of noise in the final output. The official definition of dynamic range is FWC/RN. That would apply directly to the hardware DR of a camera. Dynamic range refers to the number of discrete tones that can be discerned in the data. Due to noise, a change of some specific amount, the dynamic range “step”, is necessary for one tone to be discretely discerned from another.

For an integrated stack, this concept can be extended. Read noise, along with all other forms of noise, is reduced as you stack (with an averaging model). The maximum number of discrete tones possible in an integrated image would be FWC/(RN/SQRT(SubCount)). Again, this is a number of discretely discernible steps. This would be the maximum. Since an image contains other information and thus other noise as well, plus additional offsets, a more accurate and representative formula of the discrete number of steps discernible in an integrated stack of subs would be:

(FWC - BiasOffset - DarkOffset - SkyOffset)/(SQRT(ReadNoise^2 + SkyNoise^2 + DarkNoise^2)/SQRT(SubCount))

Determining the number of discrete steps of useful information in an image need not be restricted to just hardware. It would apply to any image signal. Stacking increases SNR, yes; however, the SNR is not quite the same as the dynamic range…or, if you prefer a different term, tonal range. Tonal range would describe the number of steps of information from the noise floor (subtracting any offsets) through the maximum signal, whereas SNR would be relative to any given measured signal peak (which could be very low for a background signal area, moderate for an object signal area, or very high for a star signal area). SNR is dynamic and different for each pixel of the image, whereas the tonal range would be consistent for the entire image.

As for bit depth: if you stack your data using high precision accumulators (i.e. 32-bit or 64-bit float), then you can most assuredly increase the effective bit depth as well. If you stack with accumulators of the same bit depth as your camera, then no…you wouldn’t gain anything. In fact, stacking would be largely useless if you stacked with low precision accumulators.

These days, pretty much all stacking is done with high precision accumulators, which means that as you stack, your effective bit depth follows the formula above. You can convert the above steps to effective bit depth with:

EffectiveBits = log2((FWC - BiasOffset - DarkOffset - SkyOffset)/(SQRT(ReadNoise^2 + SkyNoise^2 + DarkNoise^2)/SQRT(SubCount)))
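For anyone who wants to play with that formula, here is a small Python sketch. Every number in it is a placeholder I made up for illustration, not a measurement of any particular camera:

```python
import math

def effective_bits(fwc, read_noise, sky_noise, dark_noise,
                   bias_offset=0, dark_offset=0, sky_offset=0, sub_count=1):
    """Effective bit depth of an integrated stack, per the formula above (all values in e-)."""
    usable_range = fwc - bias_offset - dark_offset - sky_offset
    per_sub_noise = math.sqrt(read_noise**2 + sky_noise**2 + dark_noise**2)
    stack_noise = per_sub_noise / math.sqrt(sub_count)
    return math.log2(usable_range / stack_noise)

# Illustrative values only: a single sub vs. a 100-sub stack
print(effective_bits(15000, 2.5, 6.0, 1.0, sky_offset=400, sub_count=1))    # ~11.1 bits
print(effective_bits(15000, 2.5, 6.0, 1.0, sky_offset=400, sub_count=100))  # ~14.4 bits
```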

Not sure I get all that you are saying. If you are saying you can stack and increase the bit depth beyond that of the camera, that is not physically possible. You can’t create information where none exists. If the camera is 12 bits, there is no way to create the 13th bit, because that information does not exist in the original data no matter how many frames you stack.

For each pixel the camera reports T + Ni, where T is the true value and Ni is some random noise. When you stack n images you get ((T + N1) + (T + N2) + … + (T + Nn)) / n, which equals T + (N1 + N2 + … + Nn) / n. With stacking the last term trends to zero, so you are left with the original true (noiseless) value. That is, stacking does not increase the physical bit depth beyond what the camera provides. Stacking does reduce the noise and increase the signal to noise, but you can’t get more resolution than the camera started with unless you do something like HDR. You can use high resolution accumulators, but any bits beyond those of the camera are false information (artifacts of the math). If the camera is 12 bits, those 12 bits only represent those values - they contain no information that would allow you to determine more bits no matter how many you combine. In base 10, suppose I gave you the values 2, 5, 7. OK, let’s stack them by averaging. The result is 4.6666. Your premise is that the .6666 has added to the information content of the data, but that is false - it is an artifact of the math. The math operations produced extra digits but not valid information.

https://www.researchgate.net/publication/258813969_Temporal_image_stacking_for_noise_reduction_and_dynamic_range_improvement

and more readable:

http://keithwiley.com/astroPhotography/imageStacking.shtml

etc.

You can do precisely that. Consider a die that has a very slight imbalance. There are only six states, and throwing it once you get roughly a 1/6 chance of any particular number. Throw it a thousand times and average the results, and you will reveal the slight anomaly.
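A quick numpy simulation of that idea (the amount of bias here is invented purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# A die with a slight imbalance: face 6 is marginally more likely than the others.
faces = np.arange(1, 7)
probs = np.array([0.16, 0.16, 0.16, 0.16, 0.16, 0.20])

one_throw = rng.choice(faces, p=probs)                  # a single throw tells you almost nothing
many_throws = rng.choice(faces, size=100_000, p=probs)

print(one_throw)              # just one of the six faces
print(many_throws.mean())     # drifts above the fair-die mean of 3.5, revealing the bias (~3.6)
```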
I have never tried it, but take a 32-bit FITS image stacked from 30 16-bit downloads from a CMOS camera and create a 12-bit copy (the bit depth of the ADC). Now stretch both similarly… try it and smile :slight_smile:

Noise, ironically, can improve resolution. I recall something similar happens in CD audio stages, where oversampling and a slight dither reduce quantisation noise and provide better resolution.


Chris, I only have a very basic knowledge about the drizzle integration algorithm and no clue about its implementation, but your reply made me think about it. Do you know if it’s based on the same principle? Thanks.

I can’t quite recall - doing a quick look:
“Dither is an intentionally applied form of noise used to randomize quantization error, preventing large-scale patterns such as color banding in images. Dither is routinely used in processing of both digital audio and video data, and is often one of the last stages of mastering audio to a CD.”
I guess that there is no need to add noise to our sensor signal; there is already plenty about before ADC conversion.

Of course it’s possible.

Consider a very simple case. Let’s say that you have a 2-bit camera with zero read noise, unity gain and no quantization error. Each pixel can have the value 0, 1, 2 or 3 - and nothing else.

Now, let’s say that you take 16,000 exposures, taking care not to saturate any pixels (they might be really short exposures). Now do a simple sum of each of the frames using software that stores 16 bits per pixel. The dimmest pixel could be 0, and the brightest could be 48,000. And any values in between could exist. You would have the performance of a 16-bit camera with a FWC of 48,000.

In the real world, the read noise and quantization error matter, so the math is not as simple. But the principle still applies.
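Here is a toy simulation of that thought experiment (all of the numbers are assumed, and the “true” scene is just a smooth gradient):

```python
import numpy as np

rng = np.random.default_rng(1)

true_flux = np.linspace(0.05, 1.0, 1000)   # mean photons per pixel per exposure (made up)
n_subs = 16_000

# Each sub: Poisson photon arrivals, quantized/clipped to the 2-bit range 0..3, then summed.
stack = np.zeros_like(true_flux)
for _ in range(n_subs):
    sub = np.clip(rng.poisson(true_flux), 0, 3)
    stack += sub

recovered = stack / n_subs
# 'recovered' tracks 'true_flux' far more finely than the four levels any single sub can hold.
print(np.corrcoef(true_flux, recovered)[0, 1])   # very close to 1
```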


@buzz If you have a 12 bit camera, I think you can agree that there is only 12 bits of information in each image. Right?

So now stack a bunch of them. All the images you combine have no information about the 13th bit. So, where does the information about the 13th bit come from?

Stacking does not create information that was not in the original images. This is basic information theory. Stacking does average out the noise - that information was in the original images. When you average 12 bit images, you get an average of 12 bits - no extra bits of information are created. The same is true if you stack by adding - you don’t create new information.

Don’t confuse data with information. Averaging a bunch of images can create more than 12 bits of data but that is not new information.

@wadeh237 You are confusing data with information. Each exposure has only 2 bits of information. Add as many as you want - it does not create new information - just more data. The original 2 bits of information do not predict any other bits. Combining them does not change that.

On the other hand, doing overlapping HDR exposures does create new information. But just repeating the same exposure does not predict any new bits - no new information is created.

Think of a camera without noise. If you take lots of exposures, every one will be the same value. So combining them adds no new information, because they are all the same. Now consider a camera with noise. Taking lots of exposures can average out the noise, but there are still no new bits of information being created, because the basic underlying noiseless image is the same in each exposure.

This is not true at all.

There is a statistical distribution to the photons hitting the sensor. If you are exposing to not saturate the brightest parts, most of the pixels in my hypothetical 2 bit camera will read 0 most of the time. But once in a while, they will likely read 1, or maybe even 2 on rare occasions. Over the course of the 16000 exposures, you will collect valid data.

I’m not an expert on the math, but I believe that if you have two cameras side by side, both of which contribute zero noise themselves, one of them 2-bit and one of them 16-bit, exposing them for the same total duration will yield the same image (within statistical probability) as long as none of the exposures saturates any pixels.

My example is hypothetical, but the concept is not.

Take a look at what Stan Moore (author of CCDStack) has done with an EMCCD camera. He’s effectively taking 1-bit frames: each pixel is either 0 or 1, and he’s getting excellent images at resolutions not possible with conventional cameras.

In particular, take a look at the 1 bit M57 image in the above link. Then, you can go here to see images that he’s taken with this setup. The M57 at the bottom of the second link is from a stack of 180,000x1 bit images.


Artifice of maths or not, stacking images improves the tonal resolution due to the random nature of light, noise and quantisation effects. The outcome is as if you sampled a less noisy image in a higher bit depth. I don’t care about the semantics, only the image.

Those interested in single photon counting might be interested in this:

That is by the inventor (or one of the inventors, anyway) of the CMOS imaging sensor.

See also here:

https://www.gigajot.tech/

@buzz & @wadeh237 Yes, average stacking helps remove random noise.

@wadeh237 If you do stacking by addition (rather than averaging) then you are “counting photons”; however, additive stacking also accumulates the noise rather than reducing it the way averaging does.

In today’s world, there is no significant difference between averaging and adding.

Back in the days of 16-bit integer math in the stacking software, it was necessary to do averaging to avoid overflowing the pixel values. With the 32-bit (or even 64-bit) floating point arithmetic used by modern software, you would get the same result by adding all of the subs together and then dividing by the number of subs as you would by averaging the subs at stacking time. Since 32-bit floating point math is such high precision, you might get some small rounding differences, but they would be completely insignificant.

To your point that noise accumulates in the addition case, that is true. But signal accumulates even faster. It turns out that the signal to noise ratio is exactly the same for averaging a stack of subs as it is for just adding them together (again, to within the precision of the underlying arithmetic).
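Here is a quick numpy sketch of why the two give the same SNR (the signal level, noise level and sub count are all invented):

```python
import numpy as np

rng = np.random.default_rng(2)
signal, noise_sigma, n_subs = 100.0, 10.0, 64

# 100,000 independent "pixels", each imaged in 64 subs with Gaussian noise.
subs = signal + rng.normal(0.0, noise_sigma, size=(n_subs, 100_000))

mean_stack = subs.mean(axis=0)
sum_stack = subs.sum(axis=0)

# SNR = mean level of the stacked result / its standard deviation
print(mean_stack.mean() / mean_stack.std())   # ~ (signal/noise) * sqrt(n_subs) = 80
print(sum_stack.mean() / sum_stack.std())     # same ~80; summing scales signal and noise alike
```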

Now if you want to do something like a median combine, or other statistical combine that is not simple addition or averaging, then you could have a difference.


As Wade stated above, it is really the same difference. You increase the SNR, dynamic range, tonal range, bit depth, precision, etc. of the image as you stack. All of it improves with stacking.

Wade’s earlier example of stacking 16,000 2-bit subs was great. Try it - you could prove the concept easily enough with one series of 2-bit numbers representing a single pixel sampled across many subs. You will find that the amount of information in the stack definitely increases. It doesn’t matter if you simply add (the “maximum” value will increase well beyond the maximum of 3 for a 2-bit number…an increase in information), or average (your integral input will average down to a higher and higher precision floating point value, even though it stays within the original bounds of the 2-bit range); it is really the same difference in the end.

Dynamic range and bit depth are simply ways of describing the number of useful steps of information in the signal. It doesn’t matter if you average, which reduces the noise in a literal sense, or simply add, which shifts the maximum value higher and higher. Same effective result in either case…you can represent more finely delineated information in both cases.

Bit depth is also really not the issue at all. Quantization error is what really matters. Bit depth is just the size of the digital number we use to represent the information. If the original source doesn’t require as much precision as your bit depth provides, then bit depth essentially becomes meaningless.

Consider a 12-bit camera like the ASI183. At a low gain, quantization error is high, which means your information in any given sub is going to snap to very few discrete values, and could easily fill the full output range (i.e. 12 stops). You can overcome this high quantization error by stacking with high precision floating point variables, and you will converge on a much more precise value for the true signal that is of a much higher effective bit depth than the original source information…however, you might need quite a few subs to do it.

At a high gain, however, quantization error is low, which means your information in any given sub will not “snap” much at all, and will represent accurate real-world noisy values without the error introduced by quantization. At a high gain, your maximum DR may only be 9-10, maybe 11 stops…so you don’t even need the full precision offered by a 12-bit ADC. If you don’t NEED the full precision offered by a 12-bit number, using an even higher precision ADC is meaningless.
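A rough Python sketch of that difference (the gains and noise values below are invented, loosely inspired by the 4 e-/ADU vs 0.25 e-/ADU figures mentioned above):

```python
import numpy as np

rng = np.random.default_rng(3)
true_signal_e = 13.7   # electrons; arbitrary
read_noise_e = 1.8     # electrons; arbitrary
n_subs = 400

def stack_mean(e_per_adu):
    """Simulate n_subs reads, quantize each to ADU, then average and convert back to e-."""
    reads = true_signal_e + rng.normal(0.0, read_noise_e, n_subs)
    adu = np.round(reads / e_per_adu)       # quantization step = e_per_adu electrons
    return adu.mean() * e_per_adu

print(stack_mean(4.0))    # coarse quantization: each sub snaps to 12 or 16 e-
print(stack_mean(0.25))   # fine quantization: each sub already resolves the noisy value
# Both converge toward ~13.7 e- with enough subs, but the coarse case relies on the noise
# to dither the values across quantization steps and converges a bit more slowly.
```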

As a matter of fact, I measured and plotted the noise, with its quantization error, for all the major gain settings of an ASI183MM Pro. These plots demonstrate how little the bit depth of the ADC matters once your conversion ratio becomes high (i.e. 0.25e-/ADU) rather than low (i.e. 4e-/ADU):

Note how, at the low gains, the snapping to common, broadly separated discrete values is obvious? And note how, at higher gains, there is no evidence of any snapping to discrete values at all, and the full real-world precision of the signal, with all of its noise, is accurately represented? FTR, these are all measurements of subs taken with exactly the same camera…an ASI183MM Pro, which uses the Sony IMX183 12-bit sensor.

At a high gain, bit depth is effectively meaningless, you are already resolving the information the camera is producing as accurately as could be useful, at least on a per-sub basis. Stacking will then be more effective, as you don’t have to first overcome quantization error in each sub…you simply average out the real, random noise.

Another quick example: if stacking were incapable of increasing the amount of information, then stacking would actually be entirely pointless, providing no value, and this example of how information (its range, precision, and accuracy) definitely increases with stacking would be impossible:

@jon.rista Your last image example shows exactly what stacking does. It improves the signal to noise ratio so stacking is not pointless. An improvement of signal to noise does bring the image out from the noise as your example shows. Stacking improves the signal to noise by the square root of the number of samples. There is no disagreement on that point.

What I have said all along is you can not improve the resolution of the image beyond the resolution of the camera. That is the upper limit of what is possible by stacking.

You seem to think that I can stack more and more and improve the resolution as much as I want. The improvement from stacking is an asymptotic curve (square root of the number of samples) approaching some limit of resolution. What is the resolution limit? It’s not infinite - what value is it?