Added ZWO settings dialog?

Too bad, it’s been a spirited debate which does force one to refine one’s thinking to be more rigorous.

Unfortunately, “because we said so” does not resolve the debate. It’s more a response I would expect from one’s parents.

O_o That is not what he said. He said we provided extensive theoretical and empirical proof of our claims. In other words, we have provided the underlying theory as well as evidence that supports our claims.

That is a LOT different than saying “because we said so”, in fact, the exact opposite…

This is why the conversation has ended. :man_shrugging:

And I will tell you that I “provided extensive theoretical and empirical proof” also. No one has addressed my last post, which I think tied down the counter claims. Asserting one’s claims in bold does not make them immune to challenge. That is what scientific debate is all about.

If I take the mean of the values 2.??? and 3.??? I can only say that the mean is between 2 and 3. I can’t say the mean is 2.5. If I take the mean of 2.???, 2.???, and 3.???, the mean is between 2 and 3 and not 2.333. Adding more data points does not solve the uncertainty. The belief seems to be that if I had enough 2.??? values that would be evidence that the value was closer to 2.000. However, with enough data the ??? part of the numbers averages to 0.5 because they are random, thus providing no meaningful additional information. If you wish to debate this point, please do so by addressing some fault in the argument.

With respect, there is no scientific debate here.

There is a well-understood correct answer to the question. There are others in the conversation who have very patiently and politely explained that correct answer and provided references for additional study that you can use to educate yourself. Only you are debating it.

I, for one, am appreciative of their efforts. I already knew the correct answer at a high level, but it’s nice to have the math behind it. It’s been many years since I did any serious math, and I never studied statistics formally. I found it interesting reading and an excellent summary showing application of the relevant statistical math. I’ve come away enriched by it.

1 Like

Yes, and as the OP for this thread I think it is time to move on.

We all know DesertSky’s assertions so I am hoping to once again ask questions and get input from others without further debate on the merits of stacking as it relates to DR.

Please see my post:

And give feedback (ideally something not already covered; e.g., I’ve had some recommend exposing to just below saturation).

Thanks,
Z

@wadeh237 “There is a well-understood correct answer to the question.” - apparently not.
So you are not willing to directly address the counter arguments I have given? You seem to be saying my mind is made up - don’t confuse me with any additional input.

I have not made any argument that is counter to any statistical facts of imaging. In fact my position takes advantage of statistics in that the random signal below the ADU bits averages to .5 and thus provides no new information. If the lower bits below the ADU bits are not nearly random, then they may not converge to .5 which could invalidate the argument but I’ve not heard anyone make that claim.

Using what statistical principle do you assert that the mean of 2.??? and 3.??? is 2.5? (Adding more data points does not increase the certainty of the value.)

Reflexively telling me I’m wrong does not further our knowledge. Please reread and think about what I have been saying. If there is a legitimate fault in the argument, I’d like to know about it.

A lot of the recommended numbers originally come from me, in one form or another. Sadly, a lot of it has been taken as “absolutes”, when really my original goal was simply to provide good starting points for people. When it comes to CMOS cameras, I used to recommend 20xRN (or twenty times the read noise) for background sky. That tends to be a bit inconsistent, since it is not relative to the read noise squared. It works well for narrow band with low noise CMOS cameras, however it doesn’t scale well across the gain spectrum.

There are also many long-standing guidelines for exposure with CCD cameras, and they still generally apply to CMOS. At the core, exposing your background sky to 3xRN^2 is an effective “minimum” to get a useful SNR for a given sky background, however it can leave images noisier than optimal. For more optimal results, exposing your background sky to 10xRN^2 is recommended. I usually tell CMOS beginners to aim for background sky levels no less than 3xRN^2, so that they at least have sufficient SNR to make some nice images, even though it may not deliver optimal SNR.
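To put rough numbers on why 3x vs. 10x matters, here is a quick sketch. This is my own framing using the standard shot-noise/read-noise model, not a calculation from the posts in this thread:

```python
import math

def read_noise_penalty(swamp):
    """Fractional increase in total background noise versus an ideal
    zero-read-noise camera, when the sky background signal equals
    swamp * RN^2 electrons.

    Shot noise alone:   sqrt(swamp * RN^2)        = RN * sqrt(swamp)
    Shot + read noise:  sqrt(swamp * RN^2 + RN^2) = RN * sqrt(swamp + 1)
    """
    return math.sqrt((swamp + 1) / swamp) - 1

for s in (3, 10, 20):
    print(f"{s}xRN^2 swamp -> {read_noise_penalty(s):.1%} extra noise")
```

With these numbers, a 3x swamp leaves the background about 15% noisier than the shot-noise-limited ideal, 10x about 5%, and 20x about 2.5%, which lines up with the “useful minimum” vs. “more optimal” framing above.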

For those looking to get better SNR, I recommend exposing to between 5-10xRN^2. If you are more tolerant of clipping, you can expose well beyond 10xRN^2 if getting the most optimal background sky is your goal. (I myself am a little picky about clipping, even though I generally prefer faint details…and really, once you stretch the data, you will find that some clipped stars usually don’t matter at all, even if it’s a dozen or two stars…so long as the clipping isn’t too severe. Even with severely clipped stars, such as the Pleiades, it is often not bad unless you want to resolve fine detail right around the star, such as Barnard’s Nebula near Merope.) I know some people who prefer to get closer to 20xRN^2, and I myself, when doing L subs under high LP, have in some cases had as much as 70xRN^2 (and that was with 45-60 second subs at a low gain! :P)

So, key thing here: these are just guidelines to help you get started, and general rough bounds for background sky levels. Ideally, never expose your subs less than necessary to get 3xRN^2. For many years (longer than I’ve even been in the hobby), 3-4xRN^2 was considered ideal for many higher noise CCD cameras (and we are talking 9, 10, 15e- read noise, sometimes dark current levels as high as 0.5e-/s…so, quite noisy by today’s standards!) Swamping 10-15e- read noise by 3-4x was a lot more difficult…and required subs at least 20-30 minutes long with fast scopes, and possibly 60, 90 minutes or more. Today, with CMOS cameras, we can swamp the read noise by 3-4x pretty easily, in just a few minutes. Swamping by 8-12x is also within the realm of possibility, at higher gains, with slight clipping (I usually swamp that much with 3 minute NB subs at Gain 200 with an ASI1600).

The formula for calculating the range your background sky levels should fall within is:

Min = ((S * r^2 / g) + o) * 2^16/2^b

WHERE:

S: swamping factor (i.e. 3, 4, 10, 20)
r: read noise (e-)
g: gain (e-/ADU)
o: offset (ADU)
b: ADC bit depth (usually 12 or 14 for CMOS and DSLR cameras; DSLRs are often not scaled to 16-bit, beware!)
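In code form, the formula above is straightforward. As a sanity check (a sketch of my own, not anything official), plugging in the ASI1600 numbers that come up later in this thread reproduces the ~1511 ADU figure discussed there:

```python
def min_sky_adu(S, r, g, o, b):
    """Minimum background sky level, in 16-bit-scaled ADU.

    S: swamping factor (e.g. 3, 4, 10, 20)
    r: read noise (e-)
    g: gain (e-/ADU)
    o: offset (ADU)
    b: ADC bit depth (usually 12 or 14 for CMOS and DSLR cameras)
    """
    return (S * r**2 / g + o) * 2**16 / 2**b

# ASI1600 example values that appear later in this thread:
print(round(min_sky_adu(S=10, r=1.5, g=0.506479725, o=50, b=12)))  # -> 1511
```

Remember the 2^16/2^b term only applies if your capture software scales the data to 16 bits; drop it if your program reports native-bit-depth ADU.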

In order to use this formula, you will need to know the gain and read noise. For quick and dirty calcs, you can use tables provided by astro camera manufacturers like QHY or ZWO, as both usually provide gain and read noise tables. Easy enough for beginners or those who just need a general starting point. If you want to be more accurate and you have PixInsight, you can use BasicCCDParameters to calculate these factors.

Calculate the minimum and ideal, to get a general idea of what kind of background sky levels you should be measuring in your imaging program. Drop the scale factor (the last term: 2^16/2^b) entirely if using a DSLR with something like BackyardEOS/Nikon, as it will measure in the native bit depth of the camera. PixInsight will usually also just load 14-bit data without scaling into 16-bit space (well, technically, it is 32-bit float in PI, although you can measure in 16 bits, as well as a wide range of other bit depths).

Once you have the range of necessary and optimal background sky levels, then you can simply measure this in your subs. Note that this changes with gain. So if you calculate that you need 580 ADU for a lowish gain, then switch to a higher gain, you need to make sure you recalculate based on g for that higher gain. Higher gains convert fewer electrons to more ADU, so you will usually need to measure higher background levels at higher gains, however it is not simply a linear change, as read noise falls as gain is increased as well (so you need less signal to swamp it).
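To illustrate the gain dependence, here is a small sketch using roughly ASI1600-like values at a low and a high gain setting. These particular (r, g) pairs are illustrative assumptions, not measurements from this thread; substitute your own camera’s values:

```python
def min_sky_adu(S, r, g, o, b=12):
    # (swamp * read_noise^2 / gain + offset), scaled from b bits to 16 bits
    return (S * r**2 / g + o) * 2**16 / 2**b

# Hypothetical values: low gain (r ~3.6e-, g ~5.0 e-/ADU) vs.
# high gain (r ~1.5e-, g ~0.5 e-/ADU), both with offset 50, S=10.
low_gain  = min_sky_adu(S=10, r=3.6, g=5.0, o=50)
high_gain = min_sky_adu(S=10, r=1.5, g=0.5, o=50)
print(round(low_gain), round(high_gain))  # -> 1215 1520
```

Note that even though the gain (e-/ADU) changed by 10x between the two settings, the target ADU only rose by about 25%, because the read noise dropped at the same time. That is the non-linear behavior described above.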

And, MOST IMPORTANTLY, remember this: These are guidelines! Adjust your exposures until you figure out what works best for you. Some people cannot tolerate even a single star clipping…and they will usually be stuck at 2-3xRN^2 swamp. Other people don’t mind a fair amount of star clipping, and they are often at 10x or more. Test your integrations, stretch them the way you like and check out the stars. You may find that even clipping 20-30 stars twice as much as you thought was “bad” is imperceptible in the final integration. Processing tricks can mitigate clipping issues even more. So usually you can tolerate a lot more clipping than you think, unless you use more unique and advanced forms of stretching such as masked stretching or arcsinh stretching (which can make clipped stars stand out like sore thumbs.)

Do some experimentation at your key imaging sites. Do some integrations, and see what delivers the kind of results you want. You will eventually find the right settings as well as the right exposure lengths. Once you find them, you can then stick with them. That will minimize the amount of calibration frames you need to take, and simplify your workflow in general. But, DO experiment.

Regarding offset. I generally recommend for most people these days to keep it simple. Figure out what the default offset is for the highest analog gain on the camera (note, with ZWO and QHY cameras, the highest analog gain is usually not the highest possible gain setting. For an ASI1600 or QHY163, the highest gains are 600 and 550, respectively…but the highest analog gains are 300 and 270 (IIRC), respectively!) Then use that offset for all other gains. The only time when an adjustable offset really matters is at higher gains anyway, and usually the offset is set just slightly too low to avoid clipping any pixels at all (which is generally more optimal than setting it so high that you don’t clip anything, as DR is a precious resource at higher gains.) At lower gains, a high offset is basically meaningless in the face of much larger FWCs, so DR is barely affected.

I use an offset of 50 for both my ASI1600 and ASI183, at all gains. It keeps it simple. For the ASI183, if you need to use higher gains, then adjusting offset ABOVE 50 is actually beneficial, and the newer SGP settings that allow you to set offset for ASI cameras could be quite useful. Higher offsets for higher gains on the ASI178 have a similar benefit.

For DSLRs, offset is usually fixed. It depends on the brand, and sometimes even the model. For Canon, I think the offset is 2048 14-bit ADU (however, it might be 512 ADU…depends, I am not sure if this is a misread in PI or not). For Nikon, many of their newer cameras have an offset of 600 14-bit ADU, however many of their older cameras have an offset of 0 (black point clipping…bad for calibration.) This offset is usually used for all ISO settings.

As for CHANGING settings. At a given site, you generally shouldn’t change things. For convenience, I usually pick two or three gain settings, and only use those, at a given site. Actually, in my case, I want to fiddle with settings as little as possible, so I actually use those gain settings everywhere. I have a minimum gain setting, moderate gain setting, and high gain setting, that I use with most astro cameras. I always use the same high gain setting for NB. I use either the low or moderate gain setting for LRGB, depending on what I want to do. I often aim for higher resolution with shorter subs and heavy culling these days, so I lean towards using a moderate gain. I am often only getting 10-30 second L subs, and 30-90 second RGB. The L subs are key for resolution…get lots and lots of them, then cull softer subs aggressively to optimize resolution.

Finally, another tip from my own processing over the last year or so. Let the data speak for itself…at least once…before you impose your idea of what things should look like on it. One thing about astrophotography that I have noticed over the years, is we often force-fit images of deep space objects, be they narrow band or broadband, to conform to pre-conceived notions about what color any given object “should” be, or what color blend “is best”.

I have stopped trying to force-fit most images to any preconceived notions, and I try to do very minimal processing, so the data just speaks for itself. I find this is more enlightening…as I am often finding things out about objects that I had never known or seen before. Small or fine structures, nuances of color, etc. that I was often obliterating with processing before.

Narrow band data is often quite a bit different when you combine it with very neutral processing techniques…and you often find that the ratio of Ha intensity to SII and/or OIII intensities is much smaller than often depicted in “popular” ways of processing. Sometimes these pictures are not quite as interesting…however, with a neutral processing technique, you will usually find that you never have problems with blue stars or overbearing halos, or even purple stars. A lot of that is due to the attempt to force one channel to become bright and saturated and powerful…when in reality, the signal contained within it is simply not that powerful. There is a lot of opportunity to discover what objects really look like, what their structure and color really is, with a minimal approach to processing.

So, at least once…take a neutral approach to processing, and let the data tell you (or teach you) something. I usually process my images many times, and somewhere along the line I’ll do more “pretty picture” processing to make an image look great, rather than “be accurate”. Seeing what the data tells you first, however, can help guide your later processing, since you may have a better and more realistic understanding of the object.

3 Likes

This all started back in May with a simple question. From small acorns, giant oak trees grow!!!

May 3

I see:

“Added ZWO settings dialog”

in the release notes for 3.0.2.81, but I don’t know what that means or where to find it.

The ASCOM settings for my ZWO camera look the same.

Please advise.

@alcol,

It’s not in the ASCOM driver, it’s in the event settings.

There was a screen shot here:

but in the event, click on the gear, then at the bottom, event settings.

For me with ASI1600 ASCOM the offset is grayed out. If I use the native driver instead of ASCOM, I can enter a value, but when I look back it has the gain in both the gain and offset boxes.

Bug?

3.0.2.91

I’ll need to retest that, turns out the camera wasn’t on the scope when I did that :flushed:

Jon:

For the ASI1600, if I input this:

S 10
r 1.5
g 0.506479725
o 50
b 12

I get min = 1511, which feels right, but for “ADU as shown in SGP” don’t I need to multiply that by 16?

Thanks
Z

Again, it is just a guideline. It doesn’t have to be 100% exact. You could get a lower background level than that, or a higher one. That is ok, and on most nights you will fluctuate some. Getting this value perfect is not as important as the importance it has been given since the ASI1600 first came out. A background sky level of 1500 to 2000 should be fine, and if you feel you clip too much, then 1000, or 1200, would also be fine.

The main thing here is, now that you know you need about 1500, is to figure out…what exposure gives you that value, on average, on most nights? You will need to experiment a bit to figure that out, but the really important thing (at least IMO) is to find an exposure length that gives you enough background sky signal…AND that you can use again and again at the same gain. You don’t want to always be using different exposure lengths, as you would then always need to generate different darks. Find one single exposure length that gets you what you need, then stick with it. Sticking with it will really help simplify your workflow.

1 Like

So the formula gives the ADU as displayed in SGP?

e.g. 1500-2000 is what I want to see as the median value in SGP?

The 2^16/2^b term handles the scaling that SGP does from 12 to 16 bits, so you do not need to adjust the number further.

I’m using gain=76 and offset=40 for LRGB and gain=200, offset=50 for narrow band, and have calculated my gain/read-noise as 0.478e-/1.424e- and 1.972e-/2.064e- respectively. For 10x swamping, I am looking for background levels of 1479 and 986 respectively.

For your calculated gain and offset, 1511 sounds about right. As Jon mentions, these numbers are just guidelines. I don’t like too many saturated stars, so if there are bright stars in the frame, I may go with shorter exposures as needed.

Thanks Jon - I had wondered about that at about post 80. Just couldn’t tear myself away from the cold beer to find my university notes.

OK, all that reinforces my previous understanding of what the table I was using was for, and now I understand the math behind it.

For imaging with LP, is my approach to checking the median value of skyglow at the highest altitude of the target, ahead of time, valid?

In other words:

  1. The median value we are trying to approximate represents skyglow, not the target itself

  2. The sky will be darkest at the highest altitude the target will reach during planned imaging, so setting the exposure there will ensure that we are at or above the calculated median value at all times during imaging.

I guess to be less rigorous and get to more standard exposures, regardless of target altitude, just use the zenith?

Despite being blown off, I’m going to add a couple of outside references to bolster my argument that you can’t improve the stacking bit depth beyond the resolution of the input data.

  1. Significant Digits which says
    “You can’t improve the precision of an experiment by doing arithmetic with its measurements.”

  2. How much precision in reporting statistics is enough? - PMC which offers analytical proof for the statement
    “Therefore, the mean value cannot be reported with a precision higher than that used in the measurement of the raw data.”

The last reference pretty well says it all: after stacking, the resulting precision is limited by the bit depth of the camera.

Looks like this applies to a single data set. For example…a SINGLE sub (Alessio already schooled you on this earlier in the thread.) And it is about rounding error and reporting, for a given standard deviation and variance.

Simple fact: Standard deviation and variance CHANGE when stacking!! Again, check Alessio’s document. He demonstrates very nicely how the variance and standard deviation change when stacking. Stacking simply accumulates more information. More samples for the data set.

If one were to produce a statistical report from a single sub, its precision would be limited by its intrinsic standard deviation and variance…i.e. the least significant digit (bit 1) would be rounding error. This is correct; the least significant digit is always rounding error…the nature of an ADC.

If one were to produce a statistical report from a stack of subs, its precision would be limited by the stack’s intrinsic standard deviation and variance. This standard deviation (and thus the variance as well) will be significantly smaller than that of a single sub. Therefore, the precision is higher. The least significant bit would still be the rounding error…however that least significant bit would be of a much larger number out of a much larger valid range. Where your range in a single sub might be 0-4095, the range in a stack could be orders of magnitude larger. The least significant bit out of 0-65535, 0-262144, or 0-1048576 means a lot less than the least significant bit out of 0-4095. Again…you can simply SUM here…if you sum, the maximum possible value keeps increasing. So long as you have large enough numbers to represent that increase, then you are ACCUMULATING information. Therefore, precision must increase.

It should be noted (although I believe Alessio already did in his document)…there must be noise for this to work. If you have no noise, then you cannot improve SNR, dynamic range, or anything. Noise is necessary here. However, because of noise, it is possible.
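That “noise is necessary” point is easy to demonstrate with a small simulation. This is my own sketch (not from Alessio’s document): take many noisy measurements of a sub-ADU “true” value, quantize each one to a whole ADU as an ADC would, and then average them.

```python
import random

def mean_of_quantized(true_value, noise_sigma, n, seed=42):
    """Average n simulated measurements of true_value.  Each sample
    gets Gaussian noise of noise_sigma ADU added, then is quantized
    to a whole ADU (the ADC step) before entering the average."""
    rng = random.Random(seed)
    total = 0
    for _ in range(n):
        total += round(true_value + rng.gauss(0.0, noise_sigma))
    return total / n

# With zero noise, every sample quantizes to 2 and stacking learns nothing:
print(mean_of_quantized(2.3, 0.0, 100_000))  # -> 2.0
# With ~1 ADU of noise, the average converges on 2.3, not on 2.5:
print(mean_of_quantized(2.3, 1.0, 100_000))  # ~2.30
```

This is the dithering effect: the noise randomizes which side of the quantization boundary each sample lands on, in proportion to the sub-ADU part of the true value, so the stack recovers precision that no single quantized sub contains. Without noise, it can’t.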

Yeah, testing at the Zenith will do.

Nice try, but this article applies directly to stacking. We are not talking about taking the mean of a single image. We are stacking pixel by pixel to calculate the mean of a pixel, so that is where “the mean value cannot be reported with a precision higher” comes in. As the other quote says, no amount of math can improve the precision of the result beyond the precision of the raw data. The fact that we are producing a tighter curve with stacking indicates we are increasing the precision of the mean up to the precision of the raw data, not beyond it. The whole point of the article was to not overstate what the raw data indicates in terms of precision of a mean. Please don’t just restate everything that has already been said in this thread; it does not change or dispute the conclusion of this article.