

#202

Without noise, or perhaps more specifically, without enough noise, what is described above is impossible. If you sample the same signal over and over and end up with exactly the same result every time (i.e. a noiseless camera measuring a noiseless signal, with no noise introduced by anything else), then no…you would indeed be stuck with the precision of the ADC.

If you have noise, however (and, per that article, noise with a standard deviation over about 0.5 LSB), then you can increase dynamic range, which, again, is about discrete steps of information…the same thing as bits from a conceptual standpoint, just a different name. With a standard deviation over 0.5 LSB, you can effectively increase dynamic range as much as you want.

:man_shrugging:

Anyway. My last post on the subject.


#203

Great, I think the articles I referenced made my point. That, and common sense, tell us that the result can’t be more precise than the data - noise does not increase precision.

Now back to my original point from so many posts ago. When you increase the gain, you decrease the dynamic range (bit depth), and you can’t get that back via stacking. So you can’t increase the gain, take more exposures, and expect the same dynamic range. That’s not to say that such an approach can’t be successful, because more exposures do reduce the noise. There is some crossover point for dynamic range and noise.
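To make the tradeoff concrete, here is a minimal sketch of the crossover idea. All camera numbers here are illustrative assumptions (not measured ASI1600 figures): raising gain is modeled as shrinking full well depth while lowering read noise, and stacking N subs is modeled as dividing the effective read noise by sqrt(N) while leaving the per-sub saturation point unchanged.

```python
import math

def dynamic_range_stops(full_well_e, read_noise_e, n_subs=1):
    """Dynamic range in stops: per-sub full well over the stacked read noise.

    Averaging N subs reduces the read-noise floor by sqrt(N), so the
    effective dynamic range of the stack grows by 0.5 * log2(N).
    """
    return math.log2(full_well_e / (read_noise_e / math.sqrt(n_subs)))

# Illustrative (assumed) figures for a low-gain and a high-gain setting:
low_gain = dynamic_range_stops(full_well_e=20000, read_noise_e=3.5)
high_gain = dynamic_range_stops(full_well_e=4000, read_noise_e=1.8)
print(f"low gain, 1 sub:    {low_gain:.1f} stops")
print(f"high gain, 1 sub:   {high_gain:.1f} stops")

# Roughly how many high-gain subs it takes to match the single-sub
# low-gain dynamic range:
for n in (4, 16, 64):
    print(f"high gain, {n} subs: {dynamic_range_stops(4000, 1.8, n):.1f} stops")
```

With these assumed numbers the high-gain setting starts about 1.4 stops behind, and it takes on the order of 16 subs for the stack to catch up - which is one way of seeing the "crossover point" in the quoted post.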


#204

@DesertSky:

Re: “There is some crossover point for dynamic range and noise.”

I think you should read this:

https://forums.sharpcap.co.uk/viewtopic.php?f=35&t=456

It describes the approach used in another tool, whose goal is to recommend an exposure time based on stacking and noise.


#205

@Jon Rista,

Thanks for sticking it out and answering my questions directly. ;0)


#206

You are welcome. Did my answers help at all, or do you still have questions?

=====

Regarding noise and resolution, for those interested, this is a good article:

http://www.analog.com/en/analog-dialogue/articles/adc-input-noise.html

Digital Averaging Increases Resolution and Reduces Noise

The effects of input-referred noise can be reduced by digital averaging. Consider a 16-bit ADC which has 15 noise-free bits at a sampling rate of 100 kSPS. Averaging two measurements of an unchanging signal for each output sample reduces the effective sampling rate to 50 kSPS—and increases the SNR by 3 dB and the number of noise-free bits to 15.5. Averaging four measurements per output sample reduces the sampling rate to 25 kSPS—and increases the SNR by 6 dB and the number of noise-free bits to 16.

We can go even further and average 16 measurements per output; the output sampling rate is reduced to 6.25 kSPS, the SNR increases by another 6 dB, and the number of noise-free bits increases to 17. The arithmetic precision in the averaging must be carried out to the larger number of significant bits in order to gain the extra “resolution.”

Digital averaging is what we do with stacking. For as long as I have been in this hobby, it seems to be pretty common knowledge that averaging noisy signals increases effective bit depth, and that that effective bit depth can go beyond the bit depth of the ADC. Dithering ADC units actually use this very concept to improve their resolution.


#207

“You are welcome. Did my answers help at all, or do you still have questions?”

Yeah I think I’m good for now.

The remaining question would be around predicting the median ADU at a different gain and/or exposure time, given results at one gain/exposure.

I can do some experiments, now that I understand what the “gain” numbers on the ASI1600 actually mean.

The goal is to minimize the time spent determining the proper exposure, though I understand that is somewhat the opposite of standardizing on a few good combinations. Let us assume it is a good trick for a new site with new gear.

e.g. using a high-gain, short exposure to estimate the proper exposure time at a lower gain.

Or, to put it even more simply: how not to spend 30 minutes (or even 8 minutes) figuring out that 8-minute exposures are needed.
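A rough sketch of that extrapolation, under two assumptions that would need checking against real frames: (a) ZWO-style gain settings are in units of 0.1 dB, so the amplification ratio between two settings is 10**((g2 - g1) / 200), and (b) the sky background above the offset scales linearly with exposure time. The function name, parameters, and all the example numbers below are hypothetical, and saturation is ignored.

```python
def predict_median_adu(measured_adu, test_exp_s, test_gain,
                       target_exp_s, target_gain, offset_adu=0):
    """Rough prediction of the background median ADU at a new
    gain/exposure, from a single test frame.

    Assumes gain settings in 0.1 dB units and sky signal (above the
    offset) that scales linearly with exposure. Ignores saturation
    and any change of offset between the two settings.
    """
    amp_ratio = 10 ** ((target_gain - test_gain) / 200)
    signal = (measured_adu - offset_adu) * (target_exp_s / test_exp_s)
    return signal * amp_ratio + offset_adu

# e.g. a 15 s test frame at gain 300 with a background median of
# 1200 ADU, extrapolated to an 8-minute sub at gain 75
# (all numbers illustrative):
print(predict_median_adu(1200, 15, 300, 480, 75, offset_adu=50))
```

In practice the linear-scaling assumption breaks down once stars or sky start to saturate, which is part of why the median (rather than the mean) is the more robust statistic here.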

I suspect “median” may be a little slippery (seem non-deterministic) because of saturation…hmm, or maybe that’s why median is better than average for this?


#208

@dts350z Thanks for that reference. It is very well written. However, it does not address the question of dynamic range vs noise. We need to be able to estimate the effective improvement in dynamic range with noise reduction vs the dynamic range of the camera. That is another tradeoff that needs to be considered in choosing a gain.


#209

The difference between a median and a mean (or average) is that one is a selection and the other is a computation. The median is the selection of the middle value in a set. The mean (average) is the computation of a new value that represents the middle of a set.

The former can have a greater error associated with it; however, since it is a selection, it can also ignore large outliers (i.e. hot pixels), and in some ways, despite the error, it can be the more accurate of the two. The latter computes a new middle value from all the values in the set, and while its intrinsic error may be lower, it can be influenced by large outliers, so it may not be as accurate as it could otherwise be. It depends on the data set.
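A quick stdlib illustration of the selection-vs-computation point, using a made-up per-pixel stack with one hot-pixel outlier:

```python
import statistics

# Ten samples of one pixel across a stack: nine consistent values
# near 100 ADU, plus one hot-pixel outlier at 4095 ADU.
stack = [102, 98, 101, 99, 100, 103, 97, 100, 101, 4095]

# The mean computes a new value from all samples, so the outlier
# drags it far from the true level.
print(statistics.mean(stack))

# The median selects the middle of the sorted samples, so the single
# outlier is simply ignored.
print(statistics.median(stack))
```

Here the mean lands near 500 ADU while the median stays near 100 ADU, which is exactly why outlier-prone data favors the median (or, in stacking, mean combination with outlier rejection).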

I wouldn’t over-think this, though. You don’t need to do this repeatedly. You only need to figure out what your optimal exposures are once. Then you can use those values, from that site, for years.

I’ve been using the same general exposures with the ASI1600 for years now. I experiment here and there, as I am always optimizing, but generally speaking, I have used the same settings for years. You might want to spend 30 minutes figuring out the options and picking the ones that best suit you…because you do it once, then you just know, night after night, what settings to use, and you don’t waste time using settings that might not be optimal.


#210

could not resist…:slight_smile:

“I really didn’t foresee the Internet. But then, neither did the computer industry. Not that that tells us very much of course–the computer industry didn’t even foresee that the century was going to end.” Douglas Adams


#211

@jon.rista Hi jon, always interested in your advice. Completely different subject

I have taken numerous subs of the Dumbbell Nebula, calibrated and registered them, and produced an image, called Image 1. Say in 12 months’ time I take some more - the exact same image as saved in SGPro. The aim is to add these to the existing image.

Because of the amount of file space required for these and other images, I am checking whether I need to keep all the calibrated and registered subs from Image 1 to enable the further subs to be added, or whether I can simply save the master integrated image.

Is it in order to simply calibrate and register the future subs, produce a second master integrated image, and register this with the Image 1 master image using PixInsight’s DynamicAlignment - thus not having to save all the calibrated and registered subs?

Thanks for your feedback.


#212

For best results, I would at least keep the registered subs from the previous run, as well as any previous reference frame. You may end up with a better registration reference with the new data, and being able to re-register the entire set to the new reference (or the new set to the old reference) is useful; however, that would require keeping the calibrated subs as well. Alternatively, just keep all calibrated subs, and re-register the entire set each time.

I also generally recommend keeping your original subs, as well as any matching calibration frames. It is not totally necessary to keep most of the intermediate frames (i.e. calibrated, cosmetically corrected, debayered (if necessary), local normalized, etc.) Keeping registered frames can shorten future additions to an integration, but as long as you have your original lights and calibration frames, then you can always do whatever you need to do in the future.

Personally, I always keep all of my original data (lights and calibration frames) for every image. I have them for my very first image. Storage is ultra cheap. You can build a NAS with redundant drives (RAID) for pretty cheap these days, and have massive amounts of space. I have many terabytes of space storing my astro data. Sometimes improved calibration techniques come out, or new tools to do cosmetic correction, etc. You never know how you may be able to improve your images with new tools in the future.


#213

Hi Jon

Many thanks for your comprehensive response. Much appreciated.

Best wishes

Alec


#214

Because the problem is not clear-cut or black and white, but extremely complex. I’m favouring a less-than-unity gain setting and a reasonable offset for my imaging, because I have excellent skies - generally around 21.97 SQM within a 1-hour drive of home, and a not-too-shabby 21.45 SQM from my garden. My decisions are largely based on information supplied in “The Astrophotography Manual, 2nd edition - A practical and scientific approach to deep sky imaging” by Chris Woodhouse. I’d say this should be in everyone’s library; it will certainly get you in the ballpark for producing fine images, and from there you can continue tweaking your own preferences for your own circumstances.

Everything we do to gather images with our sensors has a considerable level of uncertainty (noise), even within the target data, and I doubt we will ever see a button that we can push to produce the perfect image, CCD or CMOS. Personally, I prefer it this way.
And of course I may be completely wrong :slight_smile:


#215

One of the many interesting things with sensors is that they are capturing photons (discrete events), and the distribution is a Poisson one, not Gaussian. Therefore the standard deviation is always the square root of the mean, and the distribution can always be described with one number, the mean value. Thought that may be of interest to you :slightly_smiling_face:


#216

Glad you like it :wink: The next book will delve into CMOS a lot more. I’m weighing up the purchase of a full-frame or APS-C sensor for the purpose. One thing that Alessio and I have been pondering is the effect of pixel size on metrics. Sensor characteristics can be directly compared if they have the same pixel size, but if not, we think that for a true comparison you would have to normalize them - say, per square micron. That would apply to read noise, dark noise (remembering you cannot simply divide by area) and full well depth - implying that the dynamic range changes too.
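One possible way to sketch that normalization, under an assumption consistent with the "cannot simply divide by area" caveat: signal capacity scales linearly with pixel area, while uncorrelated noise scales with the square root of area (as when binning). The function and all numbers below are hypothetical illustrations, not figures from the book.

```python
import math

def per_micron_metrics(pixel_um, read_noise_e, full_well_e):
    """Normalize sensor metrics to a 1 square-micron reference area.

    Assumes full well scales linearly with area, while uncorrelated
    noise scales with sqrt(area) - so noise is divided by sqrt(area),
    not by area itself.
    """
    area = pixel_um ** 2
    return {
        "full_well_per_um2": full_well_e / area,
        "read_noise_per_um2": read_noise_e / math.sqrt(area),
    }

# Illustrative comparison of a small-pixel and a large-pixel sensor:
small = per_micron_metrics(pixel_um=3.8, read_noise_e=1.7, full_well_e=20000)
large = per_micron_metrics(pixel_um=9.0, read_noise_e=5.0, full_well_e=80000)
for name, m in (("small", small), ("large", large)):
    dr_db = 20 * math.log10(m["full_well_per_um2"] / m["read_noise_per_um2"])
    print(f"{name}: {m}  DR per um^2 = {dr_db:.1f} dB")
```

The point of the sketch is the last line: once both metrics are referred to the same unit area, the per-area dynamic range differs from the per-pixel figure, matching the remark that dynamic range changes under normalization.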


#217

Sounds good! Any idea when it might be available?

Regards, Hugh


#218

A few years yet. I need two good winters to do the imaging.


#219

Two good winters with our climate is a big ask!

However, the best of luck with the project and I will go ahead and order the second edition. I am pretty much an absolute beginner and books like yours are a real help.

Regards, Hugh


#220

There are no full-frame mono CMOS astro cams available yet, though? You could always mod one yourself! :slight_smile:


#221

I assume that most of the discussion in this thread relating to gain and offset settings concerns the ASI1600MM camera. Presumably the values would be different for the ASI1600MC camera? If so, what values are being used?
Thanks

