
@alessio.beltrame I’m not sure how the Central Limit Theorem applies to this. I’ll do some research.

B is the mean of the signal values, which are precision limited. As I tried to demonstrate in the second part, taking the mean does not increase the precision, so it does not approach infinite precision.

You’ve got to be kidding. Your English is likely better than mine, and I am a native English speaker.

On this whole topic, it seems to me there are a lot of semantic differences playing out here.

For my 2 cents worth, I have been imaging my ASI1600MM at gain 200 for RGB, Ha, OIII, SII, primarily to take advantage of the very low read noise. I compensate for the much reduced DR by taking relatively short exposures. For the NB the reduced DR is not much of an issue because of the much lower available signal. I avoid L because of the challenges a much larger signal causes with a reduced DR.

In practice I would prefer to do RGB at gain 100, but I find the lack of a visual display of the gain setting on the Event detail lines makes it a real pain in the rear to use the per-Event gain feature. You can’t just look at your Sequencer dialog and be sure that all your gain settings are correct.

Please, SGP developers, add a GAIN column to the Event lines. You could make room by squeezing the progress column.

@alessio.beltrame I looked up the Central Limit Theorem. I stated without proof that the mean of the noise converges to zero in proportion to 1/sqrt(n). That’s where I think the Central Limit Theorem might come into play to predict that result. The formulas I proposed show the math for simple average stacking as commonly used. If that is not valid, then there is a lot of software that needs to be changed.

As I thought about this some more, I realized I was making the formulas more complicated than necessary. The average stack of n elements is Sum(S[i] + N[i]) / n, which equals Sum(S[i]) / n + Sum(N[i]) / n. The first term is the mean of the signal and the second term is the mean of the noise. Same result as before, but fewer steps to get there. Then my analysis of whether the signal term can be enhanced comes into play.
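
For anyone who wants to see that decomposition play out numerically, here is a minimal Python sketch (my own illustration with arbitrary signal and noise values, not anything produced by stacking software): it averages simulated subs, splits the stack mean into the signal mean plus the noise mean, and shows the noise mean shrinking roughly as 1/sqrt(n).

```python
import numpy as np

rng = np.random.default_rng(42)

n = 400                     # number of simulated subs (arbitrary)
true_signal = 100.0         # constant per-sub signal, arbitrary units (assumed)
noise_sigma = 10.0          # zero-mean Gaussian noise per sub (assumed)

signal = np.full(n, true_signal)
noise = rng.normal(0.0, noise_sigma, n)
subs = signal + noise                        # each value is S[i] + N[i]

stack_mean = subs.mean()                     # Sum(S[i] + N[i]) / n
decomposed = signal.mean() + noise.mean()    # Sum(S[i]) / n + Sum(N[i]) / n
print(stack_mean, decomposed)                # identical, by algebra

# The mean of the noise shrinks roughly like noise_sigma / sqrt(n).
print(noise.mean(), noise_sigma / np.sqrt(n))
```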

I appreciate your comments. I really do want people to evaluate both parts of the proposed proof. The discussions here have gotten a bit testy at times but it is the give and take that has forced me to think about this with mathematical rigor rather than thought experiments. However, one can create BS with math so having other people evaluate it is important.

@jmacon A lot of this thread has been about theory, which can provide guidance, but it helps to be reminded that at the end of the day everyone should use what works for their combination of equipment.

I could use a beer. Been one of those weeks overall. :beers:

It has been an instructive discussion; it is good to challenge each other. That is how progress is made, even if we do not all agree. It certainly has improved my understanding of working with CMOS cameras. I have yet to buy a large chip, and when I do, one of the things I will need to do is determine three settings:

One will likely be the lowest gain setting - for high dynamic range

Two will likely be a ‘sweet spot’ that was mentioned earlier

Three will be a nebulosity one - this is the tricky one. As another poster observed, if you set the gain too high, changing background conditions can cause you to prematurely clip. There probably is a higher gain setting that is in the Goldilocks zone.

I can see there is certainly an interesting experiment ahead, comparing the results from different gain/exposure settings over a fixed session time.

Thanks! I was kind of joking; however, the devil is in the details and I would like to get to know the nuances of the English language much better. Those little things that are easily misunderstood by a foreigner, particularly when you can’t see facial expressions.

If it works for you, you have my blessing. After all, this is a hobby, no need for dogma here.

I’d like that too. On the other hand, I was a software developer eons ago, so I understand that Jared and Ken might be a little bit conservative regarding new features.

Sorry, that’s not correct. Stacking simply adds up pixel values; it doesn’t have the faintest idea about S[i], N[i], B and so on. The entire photon collection process is a totally random process. Even in a perfect camera, the arrival of photons would be random, which is what we call shot noise. You can estimate the noise on statistical grounds, given a sufficiently high number of exposures, but you’ll never know its exact value. That’s why you can’t simply add random variables; you can only determine the probability that their sum falls inside a range that you specify.

Well, at last something that we agree upon 100% !!! :wink: :sweat_smile:

Let’s look at this step by step and see where you disagree.

  1. Each pixel is some combination of signal and noise S[i] + N[i]
  2. Stacking by averaging is Sum(S[i] + N[i]) / n
  3. #2 reduces to Sum(S[i]) / n + Sum(N[i]) / n
  4. The first term of #3 is the mean of the signal; the second term is the mean of the noise.
  5. If stacking increases the bit depth of the signal then the mean of the signal would contain it.
  6. The mean of the noise decreases toward zero in proportion to 1/sqrt(n) - from the literature

The only assumption has been that each pixel has some non-random signal and some random noise. There is no assumption that the signal in any pixel equals the signal in any other pixel, nor that the noise is correlated between pixels. After step 1, we are doing algebraic operations that duplicate what stacking does. Stacking does not know about each S[i] and N[i], just their sum. Further, the result shows that it is the sum of the mean of the signal and the mean of the noise. We can’t isolate each component; we just know the sum.

Your discussion about statistical noise would apply if we were attempting to determine the actual noise values. We are working with the sum of the signal and the noise, and that is known. Each pixel has to have a signal component or we couldn’t produce an image.

So in what specific step do you see this going wrong?

Unfortunately, a dinner with superfine Italian food and wine is waiting for me. But I’ll answer you tomorrow, I promise!

Quick spoiler: #1 could be wrong (depending on the actual meaning of S), and #3 is surely wrong. Also, you can’t assume that:

I’ll be back…

Great, enjoy that dinner.

So if the pixel value does not contain some meaningful signal, how is it we are able to create an image?
If the pixel values were pure noise, there would be no image. If the pixel contains some useful data, that is our signal S; everything else is N. We don’t know the value of each component S and N, but we do know the sum.

#3 is an algebraic identity. That is:

(S1 + N1 + S2 + N2 + … + Sn + Nn) / n =

(S1 + S2 + … + Sn + N1 + N2 + … + Nn) / n =

(S1 + S2 + … + Sn) / n + (N1 + N2 + … + Nn) / n

For example: (22 + 79 + 84 + 100) / 4 = 71.25

(22 + 79) / 4 = 25.25

(84 + 100) / 4 = 46

25.25 + 46 = 71.25
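
For completeness, here is a tiny Python check of just the arithmetic above; it verifies only the algebraic split of the sum, not any statistical claim.

```python
values = [22, 79, 84, 100]
print(sum(values) / 4)                             # 71.25
print(sum(values[:2]) / 4 + sum(values[2:]) / 4)   # 25.25 + 46 = 71.25
```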

Here we go. It’s rather long (8 pages) and full of maths, so I’m attaching a PDF: explanation.zip (128.6 KB)

Anyway, I’d recommend reading an introduction to statistics, which is not algebra (hence the trouble with #3 that I mentioned in the short spoiler, but there are also additional flaws in your “algorithm” - see the doc).

Really my last try. I can’t afford to spend another minute on this. Over and out.


Very nice analysis - that took a bit of time to create. What I conclude from this is that as I increase the number of exposures, I am tightening up my curve of shot noise, etc. No disagreements so far.

What I don’t think it suggests is that we can increase the accuracy of our estimate beyond one of the discrete binary values of the ADU. Your assertion of the random nature of the data feeds into my probability analysis of the mean of discrete values. What I think I demonstrated in the second part was that averaging random discrete values does not increase the accuracy beyond the limit of the smallest discrete binary value of the ADU. In the abstract we can increase the accuracy of our estimate forever, but we are limited by the discrete values we get from the ADU.

To review, when dealing with random values, if I take the mean of the discrete values 2 and 3 and get 2.5, the probability of the .5 being correct is 1 in 10, which is the same as if I guessed. I further showed that increasing the number of samples does not increase my chances of getting the right value beyond a random guess. So ultimately our accuracy is limited by ADU quantization, and adding more samples does not change that.

Your math here is wrong. You keep adding noise and signal together directly. Alessio has tried to correct that as well. Adding noise directly to signal, linearly, is incorrect. Adding noises to each other linearly is incorrect. They need to be added in quadrature.

The simple formula for signal to noise ratio is:

SNR = S/N

S = S1 + S2 + … + Sn
N = SQRT(S1 + S2 + … + Sn) // for Poisson distribution (i.e. what we have with astrophotography)

Therefore, more simply:

SNR = S/SQRT(S)

If we have additional sources of noise (i.e. read noise, which itself can be broken down into various specific noise terms with known distributions):

N = SQRT(S1 + S2 + … + Sn + N1^2 + N2^2 + … + Nn^2)

Therefore, more simply:

SNR = S/SQRT(S + N^2)

If we have additional unwanted signals (i.e. skyfog, dark current):

N = SQRT(S + G + D + N^2)

Where G is the skyfog (airglow, light pollution, etc.) signal, and D is the dark current signal.

Therefore:

SNR = S/SQRT(S + G + D + N^2)

You can see why light pollution and dark current only diminish SNR here.

If we have fixed noise terms (i.e. FPN) that grow according to known factors (i.e. DSNU, PRNU):

N = SQRT(S + G + D + N^2 + (S * DSNU)^2 + (S * PRNU)^2)

Therefore, final SNR without correcting FPN:

SNR = S/SQRT(S + G + D + N^2 + (S * DSNU)^2 + (S * PRNU)^2)
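
As a rough illustration of how these terms combine, here is a small Python helper that evaluates the final SNR expression above. It is only a sketch; the electron counts, read noise, and PRNU value in the example call are made-up assumptions, not measured camera data.

```python
import math

def snr(signal_e, skyfog_e, dark_e, read_noise_e, dsnu=0.0, prnu=0.0):
    """Evaluate SNR = S / sqrt(S + G + D + N^2 + (S*DSNU)^2 + (S*PRNU)^2).

    All inputs are in electrons; dsnu and prnu are fractional fixed-pattern terms.
    """
    variance = (signal_e + skyfog_e + dark_e + read_noise_e ** 2
                + (signal_e * dsnu) ** 2 + (signal_e * prnu) ** 2)
    return signal_e / math.sqrt(variance)

# Made-up example: 500 e- object signal, 2000 e- skyfog, 10 e- dark current,
# 1.6 e- read noise, 0.5% PRNU, DSNU ignored.
print(snr(500, 2000, 10, 1.6, prnu=0.005))
```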

You need to get the math right first here. Noises add in quadrature. You cannot add a noise directly to a signal, and you cannot add noises directly to each other. Your conclusions are wrong because your math is wrong. :man_shrugging:


:clap: Very nice!


Not really. Let’s take a very easy example: casting a single die. There are 6 possible outcomes of this experiment: 1, 2, 3, 4, 5, 6. Each of them has the same probability (⅙), so we call that a Uniform Distribution. Now, repeat the experiment one thousand times and take the average of the outcomes (just like stacking). You will get a result very close to 3.5.

In this specific example there are no underlying quantum mechanics mysteries. We can calculate exactly the mean of our distribution given by:

1·⅙ + 2·⅙ + 3·⅙ + 4·⅙ + 5·⅙ + 6·⅙ = 21/6 = 3.5 = µ

By the Central Limit Theorem, the average of our experiments (rolling the die) has a statistical distribution that can be approximated by a Normal distribution with mean = µ and a standard deviation that becomes vanishingly small as n goes to infinity.

We started with discrete (quantized) values (the numbers 1 to 6 on the faces of the die), but we found a mean that is a real number (in this specific case it is actually rational, i.e. fractional). The quantization of the outcomes of each single experiment does not limit the precision of our result, as long as the number of repetitions is sufficiently high and we are not using a loaded die.
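
A quick Monte Carlo sketch of the die example (my own illustration; the sample sizes are arbitrary) shows the mean of the quantized outcomes converging on 3.5 with a standard error far smaller than the one-unit spacing of the faces:

```python
import numpy as np

rng = np.random.default_rng(0)

# Roll a fair six-sided die n times and average, for increasing n.
for n in (10, 1_000, 100_000):
    rolls = rng.integers(1, 7, size=n)          # quantized outcomes: 1..6
    mean = rolls.mean()                         # real-valued estimate of mu = 3.5
    stderr = rolls.std(ddof=1) / np.sqrt(n)     # shrinks like 1/sqrt(n), per the CLT
    print(f"n={n:>7}  mean={mean:.4f}  std_err={stderr:.4f}")
```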

In your line of reasoning you keep thinking of a single exposure, which is of course limited by the ADC resolution. But if we stack pictures, the information is not encoded in each single image; it is encoded in the entire stack as a whole. In the previous example, casting the die one time won’t give you any meaningful information. It’s the “stack” of multiple tries that allows you to get near the “true” (improper term) result.

Really, you should take an introductory course in statistics or at least read some good tutorial. You may find a very good source of free information at https://www.khanacademy.org


Thank you so much Jon!

Something else to consider is that, with astrophotography, there is always a fainter signal. While we may converge on a highly accurate value for a bright signal, say the core of M42, and further stacking may not provide any tangible benefit to the core of that nebula itself…there are fainter details around the core. And there are even fainter details around those details. And there are even fainter details farther out around those details.

There are physical limitations with both film and digital image sensors. To capture ever fainter details, we cannot simply expose for longer. Eventually we saturate the film or the sensor entirely. Further, the longer the exposure is, the more likely it is to be tainted by some kind of frame intrusion…meteor, satellite, or airplane trails. You end up with more hot pixels. Glows could appear (even for very clean CCDs).

This is where stacking becomes a key benefit. Stacking, since it does increase information quantity, allows us to avoid trails and hot pixels, at least to a degree, and when they do happen, we can apply other statistical processes to reject outlier pixels and eliminate those unwanted artifacts.
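
One common outlier-rejection technique of that kind is sigma clipping. The sketch below is a simplified, assumed version of the idea (real stacking software is considerably more sophisticated): pixels that deviate from the per-pixel median by more than a few standard deviations are dropped before averaging.

```python
import numpy as np

def sigma_clip_stack(subs, kappa=3.0):
    """Average a stack of frames, ignoring per-pixel outliers beyond kappa sigma.

    subs: array of shape (n_frames, height, width).
    A simplified single-pass version of the rejection used by stacking tools.
    """
    median = np.median(subs, axis=0)
    sigma = np.std(subs, axis=0)
    mask = np.abs(subs - median) <= kappa * sigma   # True = keep the pixel
    clipped = np.where(mask, subs, np.nan)          # drop outliers (e.g. a satellite trail)
    return np.nanmean(clipped, axis=0)

# Example: 50 simulated 100x100 frames, one of which contains a bright "trail".
rng = np.random.default_rng(1)
frames = rng.normal(100.0, 5.0, size=(50, 100, 100))
frames[10, 50, :] += 5000.0                         # artificial trail across one row
result = sigma_clip_stack(frames)
print(result[50, :5])                               # trail is rejected; values stay near 100
```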

But more importantly, when it comes to faint details: you may only get one photon every minute…or even every few minutes, or even every 10 minutes! There is always a fainter signal. With ultra faint signals, you need to stack a lot more so that the signal becomes strong relative to the noise. This would be impossible if stacking did not increase the amount of information, or if the bit depth of the hardware were some kind of concrete wall. One photon per several to tens of minutes might mean one photon every few subs. With most cameras, that means those photons represent sub-ADU level information.

If the ADU of the camera were a hard limit on the precision of the data, then there would never be any chance of revealing those faint details. They would always be smaller than 1 ADU. That has been demonstrated, many times, not to be the case. In fact, even amateur astrophotographers routinely reveal details that require much greater precision than the hardware allows in each single sub.
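
As a toy illustration of sub-ADU recovery (an assumed model, not a measurement from any particular camera), the sketch below quantizes a 0.3 ADU signal plus read noise to whole ADUs; because the noise dithers the signal across the quantization steps, the stack mean converges on the sub-ADU value as more frames are averaged.

```python
import numpy as np

rng = np.random.default_rng(7)

true_signal_adu = 0.3   # faint signal, well below 1 ADU per sub (assumed)
read_noise_adu = 1.5    # noise acts as natural dithering across the ADC steps (assumed)

for n in (10, 1_000, 100_000):
    # Each sub is the signal plus noise, rounded to whole ADUs by the ADC.
    subs = np.round(true_signal_adu + rng.normal(0.0, read_noise_adu, n))
    print(f"n={n:>7}  stack mean = {subs.mean():.3f} ADU")  # approaches 0.3 as n grows
```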

So while stacking 100, 500, 10000, 80000 subs may not improve the quality of brighter signals by any meaningful degree, it can indeed allow you to reveal signals many times fainter than the camera can detect in a single exposure. There is always a fainter signal.

This image is over 400 subs stacked. Skies were ~18mag/sq". The faintest object is around 22mag/sq", a few between 21-22mag/sq". Dozens of 19-21mag/sq" objects. These are all many times fainter than my skyfog-limited background, and without stacking as many subs as I did, the ultra faint ones would never have appeared:

Many of these objects require finer precision than can be produced by the 12-bit ADC of my camera. Most of the 20-22 mag items would fit within a single ADU of a single sub! I could continue to integrate, as there are even fainter objects in this field. And I could continue to reveal fainter objects. It would be a monumental task, though, to get to 23mag/sq", as I would need to stack over 2500 subs! :stuck_out_tongue:
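
A quick back-of-the-envelope check of that last figure, assuming SNR grows as the square root of the sub count and that one magnitude corresponds to a flux factor of about 2.512:

```python
import math

subs_now = 400                               # subs already stacked
flux_ratio = 10 ** 0.4                       # 1 magnitude fainter ~ 2.512x less flux
subs_needed = subs_now * flux_ratio ** 2     # SNR ~ sqrt(N), so N scales with flux^-2
print(round(subs_needed))                    # ~2524, i.e. "over 2500 subs"
```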


Love that Jon! :clap:

@alessio.beltrame With degrees in engineering and computer science, I have had quite a bit of instruction in those areas, so you don’t win the argument that way. It takes convincing science.

The mean of 2 & 3 is 2.5, but what is our certainty that the .5 is correct? If our values of 2 & 3 were actually 2.000… and 3.000…, then a mean of 2.5 is quite correct. However, in this case the values are really 2.???… and 3.???…, so the mean is uncertain beyond the decimal point because of those random digits. That is, we have introduced uncertainty because of the trailing random digits. I further extended the argument to larger samples. Probability is the basis of statistics, as you likely know. So look at my basic claim about the probabilities involved. If those are correct, the statistics follow from that. When we are stacking images we are increasing our knowledge about the lower digits, but our chances of knowing below the lower digits are no better than random guessing. If the digits below the ADU digits are random, then in a large sample the combination would tend to an average of .5, which does not increase our knowledge of the upper digits.

Chris, Jon, myself, and others have provided extensive theoretical and empirical proof of our claims. Much more than is really practical for a forum topic.

It stops here for me. Really. Goodbye.