Help with ideal exposure time

Hi,
I need help with the ideal exposure time.
My telescope is a Vixen ED80Sf 80/600 and I use a 0.85 reducer/flattener.
My camera is the ASI 1600 MM-C at gain 139, offset 21.
The sky is Bortle 5-6.
SGP gives me an ideal exposure time of 0 sec. What is wrong?
Thank you.

Aren’t those statistics from the focusing image shown on your screen?


Yes, here are the statistics of a normal image:

I think these formulas were derived for CCD cameras and not CMOS. It’s been my experience that trying to calculate an ideal exposure time for this camera results in values less than 1 second, which is clearly not going to cut it.

Why are you using a gain of 139? That is the setting for one ADU per e⁻ (unity gain), but it comes with a reduced dynamic range. This camera can achieve a 12-stop dynamic range, but only if the gain is in the 60 range or lower. There was a very long thread on this matter in the June/July time frame. If you want fine detail in your images, you need to use as high a dynamic range as possible.
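As a rough illustration of that trade-off, here is a minimal sketch of the usual dynamic-range calculation, stops = log2(full well / read noise). The full-well and read-noise figures are my own approximations read off ZWO’s published gain charts, not exact numbers:

```python
import math

# Approximate ASI1600 sensor parameters at a few gain settings
# (read off ZWO's published charts -- treat these as estimates).
modes = {
    "gain 0":   {"full_well_e": 20000, "read_noise_e": 3.6},
    "gain 60":  {"full_well_e": 10000, "read_noise_e": 2.4},
    "gain 139": {"full_well_e": 4000,  "read_noise_e": 1.7},  # ~unity gain
}

# Dynamic range in stops = log2(full well / read noise)
for name, p in modes.items():
    stops = math.log2(p["full_well_e"] / p["read_noise_e"])
    print(f"{name}: ~{stops:.1f} stops")
```

With these numbers the camera only stays near 12 stops at gains of roughly 60 and below, which is the point above.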

Another way to determine exposure time is to try to get the brightest parts of your image just below saturation but that may be longer than is desirable for other reasons.


Thank you, DesertSky. Maybe it is better if I use Gain 75 and Offset 15; on Astrobin I found the settings other members use for my targets with this camera. In my last photo with the Ha filter, the ideal exposure time is 0.2 min. That’s good, I think.

That sounds like a very short exposure. What is the value of the brightest part of this image?

It is 65504. You are right, 0.2 min = 12 s. Very short. The image is a 5 min Ha exposure.

So the image you uploaded is 5 minutes? With that value for the brightest spot you are utilizing the full dynamic range of the image. So are you proposing to go to 12 seconds? That would not give you a good dynamic range.

Explanation of ideal sub-exposure length on the ASI 1600 MM:


I quote from this reference: “The value from the table is the shortest sub length that you can use”. He further states “you can use longer subs with no loss of signal to noise ratio”. So this table is by no means an “ideal” exposure value. As he correctly states, higher gains result in lower dynamic range.

For the best quality images, I suggest a gain around 50-60 for maximum dynamic range, and a long enough exposure to get the bright parts of the image just below saturation. This will give you the best image for your processing software to work with. When you convert from a linear to a non-linear image, both these factors will give you a leg up.
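One practical way to find that “just below saturation” exposure is to take a short test sub and scale linearly from its brightest pixel. A rough sketch, with entirely made-up test numbers, and valid only if the test sub is not already saturated:

```python
# Scale a short test sub linearly to put the brightest pixel near,
# but below, 16-bit saturation.  The test numbers are hypothetical.
SATURATION_ADU  = 65535
TARGET_FRACTION = 0.9        # aim for ~90% of full scale

test_exposure_s = 30.0       # length of the test sub
test_peak_adu   = 12000      # brightest pixel found in the test sub

scale = TARGET_FRACTION * SATURATION_ADU / test_peak_adu
print(f"suggested exposure: ~{test_exposure_s * scale:.0f} s")
```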


Above all, any image setting will work eventually; what we’re after here is increasing the efficiency of the images we take. I’m going to assume there’s no issue with tracking/polar alignment here (which may otherwise impose an upper limit on sub length).

A few things to consider:

  1. For best efficiency, you need to expose such that the background sky noise is >> the read noise from the camera (something like 10-20 times; a quick calculation is sketched after this list). After this point, there is little difference between exposing each sub frame for longer or taking more subframes - the total integration time is the limiting factor.

  2. The low dynamic range argument only applies if you’re not stacking many images - if you stack enough short, low-range images, you will recover the dynamic range you need with few issues around posterization at low signal. For NB imaging, if you find the exposure from point 1 is very long (to overcome the higher read noise at lower gain), then running at high gain is a good approach. It can even work with very short, high-gain subs on broadband, even if it’s not the most efficient way of doing things - see Emil Kraaikamp’s work with thousands of 1 sec exposures (watch out for data storage/processing time constraints!).

  3. The image shown contains Gamma Cas (which is bright!). At the exposure time required by point 1, you may well saturate the star core. You have two options: the first is to reduce the exposure so that it doesn’t saturate and take more images - you might need a longer total exposure to get the same result for the faint stuff. The second is to take a series of shorter exposures and then do some kind of HDR composition to cope with the brighter stuff - PI has routines for this.
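Here is a minimal sketch of the point-1 calculation, assuming a common rule of thumb: expose until the sky background signal reaches about 10x the read noise squared, so the read noise adds only a few percent to the total. The read-noise and sky-rate values are assumptions for illustration:

```python
# Shortest useful sub length: expose until sky shot noise swamps read
# noise.  Rule of thumb: sky signal >= ~10 x read_noise^2 per pixel.
# The numbers below are assumptions -- measure your own sky rate.
read_noise_e = 1.7     # e- RMS, e.g. ASI1600 near unity gain
sky_rate_e_s = 0.8     # sky e- per pixel per second (site/filter dependent)
swamp_factor = 10      # sky signal >= factor * read_noise^2

t_min_s = swamp_factor * read_noise_e**2 / sky_rate_e_s
print(f"minimum useful sub length: ~{t_min_s:.0f} s")
# Narrowband filters cut sky_rate_e_s dramatically, which is why the
# "optimal" NB sub at low gain can become impractically long.
```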

The link above to the Cloudy Nights thread is very good. For NB exposures with low gain, the optimum under non-moonlit skies is often impractical (hence the suggestion to go with higher gains to mitigate the read-noise contribution) - you just have to choose a reasonable value for your mount and the number of images you’ll be stacking, bearing in mind the subject matter (bright clusters might need lower gain/shorter exposures or some kind of HDR process).


This is not true. Stacking does not recover dynamic range - it just reduces noise. There was a very long thread on this in July. It ended with a reference that contained a proof that a median combine (stacking) does not add accuracy (dynamic range). You have to have the dynamic range in the individual images. You can’t recreate low bits by doing an average of high bits.


Here are the references from that old thread:

  1. http://www.batesville.k12.in.us/physics/APPhyNet/Measurement/Significant_Digits.html which says
    “You can’t improve the precision of an experiment by doing arithmetic with its measurements.”
  2. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4679338/ which offers analytical proof for the statement
    “Therefore, the mean value cannot be reported with a precision higher than that used in the measurement of the raw data.”

I’m not reopening that discussion, but I agree that median combines do not add dynamic range - a median simply takes the middle value of the stack (ref: Image Processing Stacking Methods Compared, Clarkvision.com).

However, there were plenty of others in the thread who pointed out that by summing (or taking the mean - same effect) all values for a given pixel, very low signals can be extracted from the floor, increasing the number of representable values. Yes, it decreases the noise, but the process also allows more levels to be displayed, increasing the effective DR in the output image. The noisy one-bit camera gives a good simplified example of how addition/mean combinations do work.
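To make that one-bit camera example concrete, here is a toy simulation (my own sketch, not from the thread): each frame can record only 0 or 1 per pixel, yet the mean of many frames resolves intermediate levels, because the noise dithers the signal across the threshold.

```python
import random

# Toy "noisy one-bit camera": each frame quantizes to 0 or 1, but the
# stack mean recovers levels no single frame can represent.
random.seed(1)

true_brightness = 0.3     # real signal, in threshold units
noise_sigma     = 0.5     # noise, same units
n_frames        = 10_000

ones = sum(
    1 for _ in range(n_frames)
    if true_brightness + random.gauss(0.0, noise_sigma) >= 0.5  # 1-bit ADC
)
print(f"stack mean = {ones / n_frames:.3f}")
# A single frame reads only 0 or 1; the stack mean converges to a value
# monotonically related to the true 0.3 level.
```

Note the recovered value is monotonically related to, not equal to, the true signal; and without noise, every frame would read 0 and no amount of stacking would help.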

(BTW: The “proof” in the second link appears flawed to me: when they add measured values together, they don’t add the uncertainties in quadrature, which they should if the errors are uncorrelated and random. This is the whole basis of making repeated scientific measurements to improve the signal-to-noise and accuracy of a measurement…).

I’m out.

Agreed, however the accuracy cannot be improved beyond that of the original data; the improvement is asymptotic to the original accuracy. The dynamic range of the original images does matter. You can’t increase the gain, thereby reducing dynamic range, and then take lots of images to make up for the loss.

You also can’t get away from low gain requiring more photons before an ADU ticks over to the next value. When you lower the gain to get greater dynamic range, all you are really doing is changing the internal amplifier gain, and this has the effect of increasing the number of photons needed to fill the bucket. Many of the faint parts of an image receive less than one photon per second. With a gain that gives 1 ADU per photon, every photon counts. With a gain equivalent to 2 photons per ADU, you only reach the midway point showing that one photon was received after you integrate out all the noise and average the subframes. So you are using the noise to generate some of your averaged result; without noise in your image this intermediate value would not exist. I do agree that a lower gain will give you stars that are not overexposed, but you can’t have this AND highly refined data at the low end of brightness. Every gain setting is a trade-off between saturation and quality. If you have to avoid saturation, then the settings or exposure will mean you lose quality that can be partially recovered through integration.

Not sure I understand your point. I advocate that high dynamic range is very important. If you use a high gain to try to distinguish a single photon as you suggest, then you are compressing your image into fewer bits, which results in loss of detail at the low end (assuming you don’t saturate the top end). Trying to chase a single photon that way is not going to improve your image detail - it’s going to make it worse.

All other things being equal, I’m advocating high dynamic range (e.g. low gain) with sufficient exposure to have the brightest part of the image just below saturation. It’s not always possible to reach that ideal, especially with narrowband filters and dim targets. Also, amp glow and tracking errors can preclude very long exposures. As you suggest, there are many trade-offs to consider. Some people advocate overexposing the top end, but that compresses the detail around the margins of bright objects. In the end, use whatever works for your equipment. However, I want people to understand that increased gain comes at the cost of lost detail.

I will try to explain. I use a ZWO ASI1600MMC, and at a gain of 139 the manufacturer states that the gain equates to one ADU per photon. That is, if I have received 100 photons, the binary count for that pixel will be 100. When it receives the next photon, the count will go to 101. I can receive a total of 4095 photons before the count is maxed out - the bucket is full. In a typical exposure we aim for an average of around 900. If the exposure is 100 seconds, then we are receiving 9 photons per second into that pixel. If I reduce the gain to 50, the manufacturer says the chip requires 3 photons (taking this from memory, sorry if it is a bit off) per ADU. So, if part way through the exposure a pixel has accumulated 90 photons and we were to read the chip, the reading would be 30. The next photon coming in would still give a reading of 30. Even the next photon arriving before you read it, give or take some noise, could still leave you with a reading of 30. That means the only way you are going to read that extra detail is by having the charge accumulate on the imager and, with some noise, add enough to tip it over to a count of 31, so that over your subs you will average out at a value close to the real signal. This is still full of errors and digitizes the image into bigger steps of brightness, with the benefit of having a well that can now accept many more photons before it is full.
I want people to understand that decreased gain comes at the cost of loss of detail. Overall it is a little like the focus “V” curve: bad at both extremes and good in the middle. The sweet spot is where each photon is counted. On the ASI1600 this is at a gain of 139.
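A minimal sketch of that quantization effect (the 3 e⁻ per ADU figure for gain 50 is quoted from memory in the post above, so treat it as illustrative):

```python
# At unity gain every electron bumps the ADU count; at lower gain
# several electrons are needed before the count ticks over.
def to_adu(electrons, e_per_adu):
    return electrons // e_per_adu   # the converter floors the charge

for electrons in (90, 91, 92, 93):
    print(f"{electrons} e-: gain 139 -> {to_adu(electrons, 1)} ADU, "
          f"gain 50 -> {to_adu(electrons, 3)} ADU")
# 90, 91 and 92 e- all read 30 ADU at gain 50; only accumulated charge
# plus noise eventually tips the count to 31.
```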

However, at a gain of 139 you can only count up to about 11 bits (2048) worth of photons. At a gain of 50-60 you can count up to 12 bits (4096). So while you are able to distinguish individual photons, you can’t count as many of them, which reduces the low-level detail. You are thinking that counting individual photons increases accuracy, but that is not the case because of the reduced maximum count (dynamic range).

Suppose at a gain of 139 you achieve near saturation in the bright parts with a 1 minute exposure. To get the same result at a gain around 50 we would need an exposure of about 3 minutes. With the 139 gain the bright areas would be about 2048 ADU, while it would be 4096 ADU at the 50 gain. Now consider a part of the image that is 1/2048 of the brightest part. At a gain of 139 we would record 1 ADU, while at the 50 gain we would record 2 ADU. At the 50 gain, 1 ADU could record a dim area at 1/4096 of the bright area; the 139 gain cannot record that low a level. So you can see that the higher gain does not increase the accuracy despite the ability to see individual photons. This discussion assumes that you can lengthen the exposure without considering other hardware effects such as guiding or amp glow; the analysis is meant to illustrate the effects of dynamic range, which need to be balanced against real-world considerations. But nevertheless, the dynamic range cannot be increased beyond the initial choice by stacking more exposures.
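A quick sketch of that comparison, using the ADU counts from the paragraph above (the full-scale values follow the post, not measured data):

```python
# Expose each gain to near-saturation, then see how faint patches at
# 1/2048 and 1/4096 of the bright level digitize.
full_scale = {"gain 139": 2048, "gain 50": 4096}   # ADU at saturation

for frac_label, frac in (("1/2048", 1 / 2048), ("1/4096", 1 / 4096)):
    for gain, top in full_scale.items():
        adu = int(top * frac)   # floor, as the converter does
        print(f"{gain}: patch at {frac_label} of bright -> {adu} ADU")
# The 1/4096 patch floors to 0 ADU at gain 139 -- below the
# quantization floor -- but still registers 1 ADU at gain 50.
```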

I think I understand the confusion. The imager in the camera collects the same number of photons and converts them to charge regardless of the gain. The gain comes after the image collection and is only used in how the chip converts the charge stored on each photo receptor to a digital file.
[Image: signal chain from the image cell through the variable-gain amplifier to the 12-bit analog-to-digital converter]
This image should hopefully explain what I mean. The image cell is one of the 16 megapixels. If we ignore quantum efficiency, each time a photon hits the pixel it is absorbed and an electron is released. This electron is trapped on one plate of a capacitor and has enough charge to give a voltage of just over 0.244 mV (assuming ZWO are using the 5 V version of the chip). Once we have absorbed 4095 photons we will have 4095 x 0.244 mV ≈ 1 V in the cell. Because a gain of 139 corresponds to a voltage gain of 5 on the variable-gain amplifier, the output of the amplifier is now 5 V and it will be converted to all bits ON in the converter. A full 12 bits all on is a count of 4095, so that is the maximum we can read.
If the gain were 50, we would still get 0.244 mV per photon-generated electron, but the amplifier voltage gain is now 1.778. This means we can collect almost three times as many photons before we get 2.785 V sitting on the image cell. You will now have 11,410 electrons generated, and the voltage this gives you is then amplified 1.778 times to give 5 V again. This is converted to, you guessed it, a reading of 4095, but it took many more photons to get there. That is, a brighter source or a longer exposure no longer saturates the image chip. You can keep going until, with a gain setting of 0, the image cell will hold 20,290 electrons before it is full.
However, each photon leads to an electron that in turn leads to a small voltage. In the gain=139 case, each increment of 0.244 mV leads to a count in the reading we see for that pixel. In the gain=50 case, you have to see three more electrons before this count goes up. You have therefore traded detail for dynamic range. The image below shows four pixels side by side at a gain of 50, where the first pixel receives 100 electrons, the second 101, the third 102 and the last 103, with my representation of what the stretched image would look like.
[Image: four adjacent pixels at gain 50 receiving 100, 101, 102 and 103 electrons, stretched - the first three pixels appear as the same shade]
The first three pixels give you the same shade because the analog-to-digital converter’s count doesn’t increase until the voltage has risen enough to tip it over, a direct result of the low gain in the amplifier.
Now if we do the same at a gain of 139, where we get a different digital reading for every electron, we would see this stretched image for the adjacent pixels:
[Image: the same four pixels at gain 139, each rendered as a different shade]
The number of photons received is exactly the same, but we get a different shade for each pixel because we have a different reading for each pixel in the final image. In the gain=50 case this detail can never be recovered, no matter how much processing you do.
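For reference, here is a small sketch that reproduces the signal-chain arithmetic above (0.244 mV per electron, a 5 V converter full scale, and the quoted amplifier gains); small differences from the post’s figures come from the quoted gains being approximate:

```python
# Electrons needed to drive the amplifier output to the ADC full
# scale, for the two amplifier gains quoted above.
MV_PER_ELECTRON   = 0.244
ADC_FULL_SCALE_MV = 5000.0
ADC_COUNTS        = 4096      # 12-bit converter

for setting, amp_gain in (("gain 139", 5.0), ("gain 50", 1.778)):
    full_well_e = ADC_FULL_SCALE_MV / (amp_gain * MV_PER_ELECTRON)
    e_per_adu   = full_well_e / ADC_COUNTS
    print(f"{setting}: well fills at ~{full_well_e:.0f} e-, "
          f"~{e_per_adu:.2f} e- per ADU step")
```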

The point I am making is that if you want to see fine detail and slight changes in brightness, you need a gain of 139. More or less than this is sub-optimal. If you want unsaturated stars and fine detail, achieve it through exposure, not gain.

Please note that the above discussion ignores quantum efficiency. This just means there is only a percentage chance that a photon will be captured by the image cell, and this probability depends on the energy (colour) of the photon. I also don’t describe the higher-gain case, but if you look in the forum you will see this under a different thread.