

Once your gain is 1 ADU per electron there’s no point in increasing it any further; you won’t gain anything.

For example if the signal range you are interested in is 0 to 1000 electrons per pixel then you can use a gain of 1 ADU per e- and get an image range of (say) 20 to 1020 ADUs. If you increase the gain to 4 ADU per e- your image range is now 80 to 4080 BUT the values will be 0, 4, 8, 12, 16 instead of 0, 1, 2, 3, 4. The image may look brighter but there’s no more information in it.
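That example can be sketched in a few lines of Python (a toy illustration using the numbers from the paragraph above, not camera code):

```python
# Sketch: multiplying every pixel by a larger gain re-labels the levels
# but adds no new information. Electron counts are from the example above.
signal_e = [0, 1, 2, 3, 4]               # electrons collected per pixel

adu_gain1 = [e * 1 for e in signal_e]    # 1 ADU per e-  -> 0, 1, 2, 3, 4
adu_gain4 = [e * 4 for e in signal_e]    # 4 ADU per e-  -> 0, 4, 8, 12, 16

# The number of distinct levels is identical either way:
assert len(set(adu_gain1)) == len(set(adu_gain4))
print(adu_gain4)  # [0, 4, 8, 12, 16] - brighter-looking, same information
```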

Where the dynamic range thing comes in is that a CMOS sensor will often have a higher range per pixel than the ADC range. For example the pixels may have a well depth of 25,000 e- but a 12-bit ADC only has a range of 0 to 4095 (4096 levels). With a gain of 1 ADU per e- you can only capture the lowest 4096 electrons. With a gain of 0.5 ADU per e- you get up to 8192, and with 0.25 ADU per e-, 16384. You need the gain to be reduced to about 0.16 ADU per e- before you get the full range of the pixels.
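A minimal sketch of that arithmetic, using the well depth and ADC figures from the paragraph above:

```python
# What gain (ADU per e-) lets a 12-bit ADC span a 25,000 e- well?
# Figures come from the example in this post.
full_well_e = 25_000
adc_levels = 4096          # 12-bit ADC

def max_electrons(gain_adu_per_e):
    """Electrons reachable before the ADC clips at this gain."""
    return adc_levels / gain_adu_per_e

print(max_electrons(1.0))    # 4096.0 e- reachable at 1 ADU/e-
print(max_electrons(0.5))    # 8192.0 e-
print(max_electrons(0.25))   # 16384.0 e-

gain_for_full_well = adc_levels / full_well_e
print(round(gain_for_full_well, 3))  # ~0.164 ADU/e-, i.e. "about 0.16"
```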

CCD cameras don’t have this problem because they normally have a 16-bit ADC giving 65,536 levels, and typical CCD chips have a well depth of less than this, so the entire range of the CCD is covered by the ADC.


Hi Chris. I must be misunderstanding something. 1 e- per ADU is unity gain. Are you saying there’s no point to increasing gain beyond unity gain?


@joelshort Yes, that is what Chris is saying.
@dts350z As Chris points out, increasing the gain does not increase the information. The brighter image is an illusion of more data. You can do the short exposures at unity gain to the same effect.


That’s it. The problem with CMOS sensors is that the bit depth of the ADC means that you need to reduce the gain from unity to utilise the full range of the sensor.

How the gain numbers that the camera uses relate to the gain in ADU per electron is anyone’s guess.

To add to the fun there is noise: shot noise in the signal itself, noise from unwanted signal such as light pollution, read noise, and maybe others. These reduce the signal-to-noise ratio, further making high gains less essential.


Chris, there is value in increasing “gain” beyond unity. This has been a long-term debate over many years, but the reason to use a higher gain has to do with quantization error. Unity gain is FAR from being free of quantization error. The error definitely exists, and can be obvious in a noise plot with low read noise cameras.

Now, with an older camera that has higher read noise, say several electrons worth, the quantization error is generally going to be swamped by the read noise itself. So you won’t notice it, and it does not really matter.

I believe the situation changes when you have under 2e- read noise, as even at unity gain, quantization noise is ~0.3e-, and the error itself is actually quite large, and can be visible in shallow signals. Consider this:

The quantization ERROR is quite obvious at 1e-/ADU, where read noise is only around 1.6-1.8e-. You can see the stepped and chunky nature of the noise in the crop to the right as well. There are only 10 discrete values that each pixel can snap to in that entire crop…and yes, this is at unity! At 0.25e-/ADU the total noise is lower (it is only about 1.2e-), and quantization error is nowhere to be seen. The noise profile in the crop to the right is much more natural: Gaussian, clean and random. There are thousands of possible values for the pixels at the high gain. If your goal is to chase FAINT signals, a higher gain is useful. This is particularly true with narrow band imaging, where you may have areas of the frame that have little to no signal.
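For anyone who wants to check the ~0.3e- figure: the standard estimate of RMS quantization noise is the step size divided by √12, and it adds in quadrature with read noise. A quick sketch (the 1.7e- read-noise value is just the mid-point of the range quoted above):

```python
import math

# RMS quantization noise for a step of q electrons is q / sqrt(12).
def quant_noise_e(e_per_adu):
    return e_per_adu / math.sqrt(12)

def total_noise_e(read_noise_e, e_per_adu):
    # Read noise and quantization noise add in quadrature.
    return math.hypot(read_noise_e, quant_noise_e(e_per_adu))

print(round(quant_noise_e(1.0), 2))       # 0.29 e- at unity gain (~0.3e-)
print(round(quant_noise_e(0.25), 2))      # 0.07 e- at 0.25 e-/ADU
print(round(total_noise_e(1.7, 1.0), 2))  # 1.72 e- total at unity
```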

Another point here is, it takes very little to swamp the noise, including any quantization error (which is already vanishingly small) at 1.2e-. A 5e- signal will do it! With the higher quantization error at unity gain, you would not only need more exposure to deal with the read noise itself; the ERROR is clearly a non-trivial component of the noise at unity, and you will need enough signal (and thus shot noise) to eliminate the impact of quantization error, which can require additional exposure beyond what is necessary just to swamp the underlying read noise itself. So, even though unity gain has more dynamic range…you often end up needing to expose more anyway to fully bury the quantization error, which eats into that DR more…

On the other hand, a camera with say 8e- read noise would need a 256e- signal to swamp the read noise. Even if you had higher quantization error with 8e- read noise, by the time you have 250e- of signal, the quantization error is not going to matter. The ultra low noise with newer cameras (and this does not only apply to CMOS, there are some CCDs like the Sony ICX834 that have under 2e- read noise as well) is what changes things here.

There are also real-world practical factors to consider. You can and often will clip stars a bit more at a higher gain, however it is usually not much more, and in practice it does not matter even if you clip a little. I generally am ok with clipping the centroids of the brightest stars. Once I stretch, you can’t tell:

What you can see, though, is the improvement in the background noise profile at the higher gain. There is more quantization error in the low gain image above, despite the fact that the exposures are 4x longer. That, IMO, is worth it (especially with CMOS cameras, which often have more FPN and random or semi-random patterns at lower gains.) Here are larger crops that better demonstrate the difference in noise characteristic:

Gain 0 5x12m:

Gain 200 20x3m:

It is not a huge difference in noise profile, but I do prefer the high gain version here. And, as you can see, there is no visible difference in stars here. The shorter exposures largely compensate for the loss of DR. Here are some unstretched crops of individual subs from the above to demonstrate (click the images to see full, the forum is chopping off the right side a bit):

Gain 0 12m:

Gain 200 3m:

To be clear here…at Gain 200…only two stars are clipped. One has several pixels clipped in the center; the other has only the very central pixel clipped. The rest of the stars, while they are brighter and did become visible, are NOT clipped. The “cost” of lower DR at the higher gain is minimal, and effectively a non-issue in practice, as once you stretch you can’t tell the difference anyway.

If you are chasing faint details, high gains with low quantization error (and also even lower read noise) are useful. Personally, with deeper integrations and much deeper stretching, I much prefer the cleaner and more random background sky noise profile I get with higher gain narrow band imaging, than I do at lower gains. With deep stacking and stretching, lower gains sometimes exhibit more issues from FPN (i.e. banding), which are just not present at higher gains. Unity on the ASI1600 is actually pretty good, but deep integrations will still often reveal some banding. I never have that issue at Gain 200, though (which BTW is just about 2x unity gain, or ~0.5e-/ADU).

It should also be called out clearly here, once you are around 0.5e-/ADU, the bit depth of the camera just doesn’t matter. Lots of CCD cameras have gains ranging from 0.3-0.6e-/ADU. Once you are sampling each electron by about the same, then bit depth is a total non-issue. There IS a loss of DR on lower bit depth cameras, but it usually doesn’t matter in practice, not once the data is stretched.


Thanks for this explanation John. It has caused something to “click” in my brain about something that I have observed. I have been operating my QHY163 at gain 75 for RGB and 200 for narrowband. I chose those figures based on my measurements of the camera’s characteristics and balancing read noise/DR etc.

My normal practice is to use 3min subs for RGB and 5min subs for narrowband. It has often surprised me how clean the narrowband images come out vs. the RGB images, and this has helped me understand why.


Are you seriously claiming that those images use a gain of 200 per electron?



I think your solution of using the same offset is actually ok, at least for the CMOS cameras. You set your offset based on the longest duration of your images. The difference in offset is minimal….

As for gain, I assume we are talking about CMOS cameras, increasing the gain on CCD cameras is a bad idea. I would actually recommend changing the exposure length instead of gain. However I see where you may want to increase the gain to reduce exposure time. In the end it is your choice, that is the beauty of CMOS cameras….

Bruce Morrell




These are ZWO ASI cameras. ASI cameras have a gain range in their drivers from 0-600…this is the setting value in the driver, which represents 0-60dB. Most of their cameras have an analog gain range up to 24-30dB, which means for the most part, viable gain settings range from 0-300 “gain”.

In the case of the ASI1600 and ASI183 which were used to create the information from my previous post, the maximum gains are 300 (30dB) and 270 (27dB) respectively. I usually use a gain setting of 200 for the ASI1600, which is 20dB of gain. Unity gain on the same camera is 139, or 13.9dB.

In terms of “gain per electron”, I’ve measured the A/D conversion ratio, or “gain” as we commonly refer to it, at 0.48e-/ADU. So it is roughly 2 ADU per electron…each electron produces a bit over twice the ADC’s voltage step, and thus a bit over two ADU.
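For what it’s worth, the setting-to-conversion-ratio relationship described above can be sketched as a quick calculation. The 0.1 dB per setting unit and the ~5 e-/ADU figure at gain 0 are both taken from this thread; treat them as assumptions:

```python
# Sketch: convert a ZWO "gain" setting to an approximate e-/ADU ratio,
# assuming the setting is in units of 0.1 dB and the ASI1600 converts
# at roughly 5 e-/ADU at gain 0 (figures quoted in this thread).
def e_per_adu(gain_setting, base_e_per_adu=5.0):
    db = gain_setting / 10.0        # 139 -> 13.9 dB, 200 -> 20 dB
    amp = 10 ** (db / 20.0)         # voltage amplification factor
    return base_e_per_adu / amp

print(round(e_per_adu(139), 2))  # ~1.01 e-/ADU (unity, setting 139)
print(round(e_per_adu(76), 2))   # ~2.08 e-/ADU ("half unity")
print(round(e_per_adu(200), 2))  # 0.5 e-/ADU (the measured ~0.48)
```

The agreement with the measured 2e-/ADU at gain 76 and 0.48e-/ADU at gain 200 is close, which supports the 0.1 dB interpretation.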


That is very interesting! :stuck_out_tongue: When I first got the ASI1600 in 2016, I did some testing. I had originally guessed that I would find “half unity” at around gain 65-70, and ended up finding it, through basic testing, at gain 76 (7.6dB, or 2e-/ADU). I then guessed that gain 200 would be about “twice unity” or 0.5e-/ADU, and tested and when it came out to 0.48e-/ADU I just went with it.

I thought that the QHY had a slightly different gain range, but, maybe their settings also represent decibels? I guess if that is the case, then it is not surprising that the same settings on both cameras would produce the same results. Good stuff!

When it comes to narrow band, though, I definitely like the results I get at higher gains. I would consider unity about the lowest I would go for narrow band on the ASI1600. I’ve actually done some narrow band imaging at Gain 76, which is 2e-/ADU. I ended up using 10 minute subs, and was not quite swamping the read noise the same as I do at Gain 200. The final image was pretty good…but, I think if I were to revisit the object in the future, I would use the higher gain, as this image took over 30 hours of total integration:

The background noise profile is not the greatest. At the time, I was quite ecstatic, but once I got more familiar with the camera, I started noticing little issues with the background signal and faint structures that I think could have been avoided at a higher gain. In this case, I think even unity with 5 minute subs would have been a little better.

Joel, FYI, you sent me a copy of SGP to test the offset setting not too long ago. I’ve been slammed with work lately, and have not had a chance to dig into it. I apologize for the delay. I have an ASI1600MM-Cool v1, and I definitely want to try it out and see how it works. I have had a couple people mention that they had some trouble getting the native ASI camera support working, so I’m eager to see how it goes. Anyway, wanted to let you know…I have not forgotten…I just haven’t had time yet.


These would indeed be ZWO CMOS cameras. With CMOS, I often think of gain as a way to optimize the camera with the scope, as well as optimize the noise profile and characteristic. At higher f-ratios, you can quite easily increase gain, and still use longer exposures. Think of it as akin to binning with a CCD. With the CCD, you trade off spatial resolution for SNR…with a CMOS sensor, you trade off DR for SNR. In either case, a tradeoff is involved.

Binning is not really a viable option with CMOS sensors as of yet, since binning is usually done digitally (even if it is done on the hardware). There are some CMOS designs that incorporate CCD backing memories for each pixel, which are used for global shutter, and some of those designs have extended that to support 2x2 hardware charge binning as well (the whole combined charge is converted to a voltage at the 2x2 groups of pixels, and the 4-shared readout logic is already capable of applying the whole voltage down the column to the ADC). So perhaps someday, we will also have binning with CMOS cameras.


The gain range on the QHY163M is 0-580. I am not sure how that translates into decibels. Using PI’s BasicCCDParameters script I came up with the following camera measurements:


I concur with your assessment of the CMOS camera, and the ability to optimize it according to your system. That is their advantage… Bruce

Bruce Morrell




I see no advantage to ever running this camera above zero gain. Any improvement in noise at higher gains is an illusion created by the fact that the camera is throwing away the lower bits where the noise exists. The best quality images are going to be had with the highest dynamic range (e.g. zero gain). Control noise by taking more exposures or post processing.

@jon.rista Your examples of noise would be more illustrative if the comparisons were done with the same number of exposures. Naturally, the sample with more images is going to have lower noise. Do 20 exposures at 12 minutes (gain zero) to compare with 20 at 3 minutes (gain 200).

There is no free lunch to be had by running this camera at higher gains.


I believe he was trying to illustrate that the same overall integration time (60 minutes) results in the same overall quality of image. 5 minute exposures are generally considered “less risky”, so shorter exposures are generally preferred if the result is the same as fewer, longer exposures. Also, since CMOS has really lowered the entry cost, a lot of the imagers using CMOS are likely using mid tier mounts that may have issues at longer exposures.

I’m no CMOS imager…I want to be…but it seems that the reduction in price is met with an increase in learning curve.



I don’t think I ever said there was a free lunch. I was rather explicitly comparing equivalent images. The benefits I was calling out were primarily aesthetic (and I think aesthetic factors are pretty important for pretty picture images).

If I did 20x12m, it would have significantly more signal than 20x3m. That would be a very imbalanced comparison. The idea was to show how two integrations at different gains of the same total exposure time (and thus same signal) would compare. The JPEG compression ruins the comparison a bit, but IMO the noise profile of the 20x3m subs is much cleaner. If you go for a given total integration, then no matter the case shorter subs at higher gain will always end up with more total subs stacked. That can be a pro or a con, depending on exactly how much integration you think you need in the long run, but for most things I image, I end up with 150-250 subs at higher gain, with several hours or so total integration, and that’s just how I like it.

A more apt comparison might be to use 6 minute subs at both gains, and stack the same amount. In that case, I do believe the higher gain would produce the cleaner background result, albeit with some clipping in the brighter stars, whereas the lower gain would have higher background noise, but likely zero clipped stars.

In addition, as Jared noted, there are benefits to shorter subs. For one, not everyone is able to get 12 minute subs, especially with lower end mounts and external guide scopes. But there is also the potential for resolution gains. With shorter subs, you can be more aggressive about culling less than ideal subs, stacking only the best, which can improve the sharpness of the final integration.


The discussions about shorter exposures and guiding are certainly true, however, this has nothing to do with gain. You can do shorter exposures at zero gain for all the reasons discussed and have better data depth. The lower noise at higher gains is because the camera is throwing away data. The physics of the camera are indisputable. Higher gains reduce the data depth as well as the noise. Increasing the gain is not increasing the basic sensitivity of the CMOS - it is changing the amplification and analog to digital conversion after the sensor.

You can get the same effect as higher gain with appropriate post processing. The difference is that higher gain is discarding the low bits, while with post processing you can control the noise profile. There is a choice to be made of many short exposures vs fewer longer exposures. No matter which you choose, setting the gain higher should not be part of the equation.

In the end, it is your choice of gain. My original point was to make people aware of the tradeoffs with higher gain. It is easily misunderstood. Setting the gain higher seems like a no brainer but it is not that simple.


Sorry, but I cannot fully agree. The fact that a higher gain won’t let you capture more photons is indeed indisputable. The physics of the camera is also indisputable, and sensitivity has nothing to do with gain; it is basically a synonym for quantum efficiency. Nonetheless, in my opinion @DesertSky is missing an important point: only part of the readout noise is generated upstream of the amplifier; the remaining part is generated downstream.

You can’t get rid of upstream noise but, with a suitable choice of (high) gain, the downstream noise can become negligible, unfortunately at the expense of dynamic range. That’s the basic reason why readout noise (if expressed in electrons) decreases when gain is increased. That’s also why DSLR photographers use a high ISO (read: gain). As most Canon owners know, banding can be visible at ISO 100, but it generally disappears at higher ISO speeds (because it’s mainly a downstream noise). The camera model which @DesertSky refers to seems to me an idealized one, and it reminds me of the concept of the “ISO-less DSLR”: if downstream noise is zero, then there’s no difference between selecting a higher gain/ISO speed and boosting the luminance of the image in post processing.
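A toy model of that upstream/downstream split can make the trend concrete. The component values below are purely hypothetical, chosen only to show why input-referred read noise falls as gain rises:

```python
import math

# Toy model: upstream noise is fixed in electrons, downstream noise is
# fixed in ADU, so referred back to electrons it shrinks as gain rises
# (i.e. as e-/ADU falls). Component values are hypothetical.
def input_referred_read_noise(e_per_adu, upstream_e=1.2, downstream_adu=0.7):
    downstream_e = downstream_adu * e_per_adu
    return math.hypot(upstream_e, downstream_e)

# gain 0 (~5 e-/ADU), unity, ~2x unity, 4x unity:
for g in (5.0, 1.0, 0.5, 0.25):
    print(g, round(input_referred_read_noise(g), 2))
# Noise falls toward the 1.2 e- upstream floor as gain increases.
```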

I have to agree with @jon.rista one more time and on an additional topic, as quantization error CAN BE an issue with CMOS cameras if you don’t select an appropriate gain level. For the QHY163/ASI1600 the full well capacity is approximately 20,000 electrons, which is far more than what can be handled by a 12-bit ADC. Use zero gain and you still won’t get all the data collected by the sensor. On the other hand there can be a slight advantage in working at a gain below unity, and in this regard I totally agree with @jon.rista’s post above, which fully meets my line of reasoning, my understanding and my experience with CMOS cameras (DSLRs and dedicated astronomy cameras).

Finally, according to my measurements, for the QHY163M unity gain corresponds to an ASCOM gain of 120 (as far as I know, that’s more or less the same value of the ASI1600). With this gain level the dynamic range is still 11.6 stops, just a bit lower than the 12 stops you can get with a 12-bit ADC. However, readout noise is just 1.5 electrons versus the 3.5 electrons at gain 0.

Then, besides the aspects that put a practical limit to the duration of the single exposure, one should also consider that there are situations where the dynamic range provided by the camera is not that critical, just because the signal is very faint. Most of my pictures are made with narrow band filters and I generally use a gain of 180 with 10-minute exposures. In 99% of my subs I don’t get a single saturated pixel. So, at least in Ha exposures, you can usefully trade off dynamic range (which is hardly needed) for lower readout noise (which allows you to be sky-limited with reasonably long exposures and narrow band filters), by selecting high gain values.

LRGB exposures in light polluted skies can actually benefit from low gain values, but that’s another story. Please correct me if I’m wrong.


I don’t disagree with anything you have said, but I also don’t recall saying that high gain made the camera more sensitive. I think you have misunderstood what my post was about.

Low gain on the ASI1600, as well as most CMOS cameras that have 12-bit ADC units, is actually far from ideal. At low gains, your quantization step is often 3, 4, 5 electrons per ADU!! That is NOT useful for faint signals. By that, I do not mean a low gain is less sensitive. The issue is not that you are not detecting the faint signal…the issue is that you are actually still throwing away useful information, because you are crushing many electrons of information together into discrete output units. You could use longer exposures…but they generally have to be much longer, especially with narrow band, in order to fully swamp that quantization and finely separate fainter details.

If you actually try to do this, you will find that even after hours of long exposures, the overall quality of the noise profile is not that good. In fact, it is often really ugly, once you get into stacking enough individual sub frames to overcome the quantization error. This is because FPN, which tends to be worse at lower gains (empirical observation), ends up getting reinforced and can become quite strong in a deep stack. You can dither, but in my experience, even with aggressive dithering, the overall quality of the noise profile with lower gain on 12-bit CMOS cameras is not that great unless you are able to deeply swamp the noise (i.e. beyond 10x.) Patterns in the noise are not so easy to deal with in post processing either, and in fact they can sometimes be quite problematic…so I don’t think noise reduction is a solution here. With LRGB imaging, you can often swamp the read noise by 20, 30, 50 times, and most of these problems are not an issue. But with narrow band, the overall quality of a final integration at low gain is never as good as at a high gain.

Another critical point here. Dynamic range is ultimately clipped by the ADC units. Technically speaking, a camera like the ASI1600 has ~12.5 stops of DR in analog terms, but once the data is digitized, it is limited to 12 stops. There can be some wiggle room here, I think, in a practical sense…if you NEED longer exposures, then the FWC is ultimately what determines how long you CAN expose for. But, you actually are limited to 12 stops of output DR at every gain setting from 0 to 76 on the ASI1600. You get 12 stops of DR at Gain 76, and only 2e- read noise, with a conversion ratio of 2e-/ADU…so lower quantization error. If you prefer to use long exposures, I would offer that Gain 76 is ultimately the “ideal” setting, as it gives you the best balance of read noise and FWC, at the maximum DR the ADCs can output.
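The dynamic range arithmetic in that paragraph can be sketched directly (figures are the ones quoted in this thread for the ASI1600):

```python
import math

# DR in stops is log2(full well / read noise), but the digitized output
# cannot exceed the ADC bit depth.
def dr_stops(full_well_e, read_noise_e, adc_bits=12):
    analog = math.log2(full_well_e / read_noise_e)
    return min(analog, adc_bits)

# ASI1600-like figures quoted in this thread:
print(round(math.log2(20000 / 3.5), 1))  # ~12.5 analog stops at gain 0
print(round(dr_stops(20000, 3.5), 1))    # 12.0 - clipped by the 12-bit ADC
print(round(dr_stops(8192, 2.0), 1))     # 12.0 - Gain 76: 4096 ADU x 2 e-/ADU, 2 e- RN
```

This is the sense in which Gain 76 gives the full 12 stops the ADC can output while already halving the quantization step relative to gain 0.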

Now, I do not disagree with you that at a higher gain, you are throwing away dynamic range. You are, for sure. However, my question is…does the loss of that dynamic range matter in the end? Re-read my first post in this thread, and examine the example images closely. I clipped zero stars at the lower gain, and only two stars, barely, at the much higher gain. Yes, I clipped a couple stars…but it just did not matter in the end. The only observable difference in the stretched examples was the background noise profile…there is no obvious difference in the stars once the data is stretched.

This has been my experience with CMOS imaging since I first got into it with the ASI1600 back in April 2016… I’ve been imaging at Gain 200 (20dB gain) on that camera for a couple of years now, and the loss of DR has really never mattered. Sometimes I clip a few stars a bit more than I would like…but, I think this is kind of always the case with bright stars, even with CCDs, DSLRs, etc. And if the loss of DR doesn’t matter, then IMHO, it is always better to use a higher gain than a lower one, as much as you can, so long as you aren’t clipping more than you personally can handle. I think that the impact of clipping at higher gains is often overstated, and not as serious a problem once you consider what happens when you stretch the data…at least with narrow band imaging.

For LRGB imaging, I generally recommend lower gains as well. I usually use Gain 76 on an ASI1600 myself, and if the skies are particularly bright, I may even use Gain 0 (so long as I can swamp the noise by ~50x or so).


The only camera I was discussing was the ASI1600 and if you look at these graphs:
The full well is 20k at zero gain and 5e-/ADU, which is 4000 ADU max - 12 bits can handle that. So you can “get all the data collected by the sensor”. Your discussion about not being near saturation is problematic. To utilize the full dynamic range of the camera the exposure should ideally be close to saturation. You underestimate the importance of dynamic range. When you stretch an image you are taking a very narrow histogram and expanding it a lot, so those low bits really do become important, because the stretch is non-linear and the low bits are spread over a large range.

Again looking at the graphs, one could make the case that a gain of about 50 might be a good dynamic range/noise trade-off, because it is on the steep part of the noise curve and you are losing only half a stop of dynamic range. Beyond that point there is an almost linear tradeoff of dynamic range and noise. Certainly any gain above 100 is not justified based on the graphs.

I just went back and looked at some recent 15 minute exposures with a green filter and I could find a few isolated points that were getting close to saturation. I used zero gain so I was getting close to the full dynamic range. I stacked ten images and was able to control the noise quite well. Granted, 15 minute exposures are painfully long and the risk of a bad exposure is high, but my resulting image got some good compliments at a recent astronomy club meeting. Based on this discussion, I’m thinking next time I will try a gain of 50 with 8 minute exposures for RGB and 10 minutes for narrow band. That might be a reasonable dynamic range / noise / exposure time trade-off.

@jon.rista Dynamic range does matter because of stretching, as discussed above. As you say, a few clipped stars won’t matter. Where you will see the difference is in the mid and lower tones. I come to astrophotography from a background in landscape digital photography, and the rule of thumb there is to get as close to saturation as possible with as much dynamic range as possible. The subtleties of the mid and lower tones are probably more noticeable with landscapes, but nevertheless the same rules apply to astrophotography. Your choice of gain 76 is interesting as it splits the difference between the 50 and 100 mentioned above. I see 12 bits being closer to gain 50, but I won’t argue the point.