Help for ideal exposure time


#1

Hi,
I need help with the ideal exposure time.
My telescope is a Vixen ED80Sf 80/600 and I use a 0.85× reducer/flattener.
My camera is the ASI 1600 MM-C at gain 139, offset 21.
The sky is Bortle 5-6.
SGP gives me an ideal exposure time of 0 sec. What is wrong?
Thank you.


#2

Aren’t those the statistics from the focusing image shown on your screen?



#3

Yes, here are the statistics of a normal image:


#4

I think these formulas were derived for CCD cameras, not CMOS. It’s been my experience that trying to calculate an ideal exposure time for this camera results in values of less than 1 second, which is clearly not going to cut it.

Why are you using a gain of 139? That is the unity-gain setting (one ADU per electron), but it comes with reduced dynamic range. This camera can achieve about 12 stops of dynamic range, but only at a gain around 60 or lower. There was a very long thread on this matter in the June-July time frame. If you want fine detail in your images, you need to use as high a dynamic range as possible.
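As a back-of-the-envelope check, dynamic range in stops is log2(full well / read noise). A minimal sketch - the full-well and read-noise figures below are illustrative assumptions, not measured values, so check ZWO’s published charts for the actual ASI 1600 numbers:

```python
import math

def dynamic_range_stops(full_well_e, read_noise_e):
    """Dynamic range in stops = log2(full well / read noise)."""
    return math.log2(full_well_e / read_noise_e)

# Illustrative numbers only, not ZWO specifications:
# low gain  -> deep full well, somewhat higher read noise
# unity gain (139) -> much shallower full well, lower read noise
low_gain_dr = dynamic_range_stops(full_well_e=20000, read_noise_e=3.5)
unity_dr = dynamic_range_stops(full_well_e=4000, read_noise_e=1.7)

print(f"low gain:   {low_gain_dr:.1f} stops")
print(f"unity gain: {unity_dr:.1f} stops")
```

With these assumed numbers the low-gain setting buys roughly a stop of extra range, which is the trade-off being described here.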

Another way to determine exposure time is to expose so that the brightest parts of your image are just below saturation, though that may be longer than is desirable for other reasons.


#5

Thank you, DesertSky. Maybe it is better if I use gain 75 and offset 15; I found those settings for my targets on Astrobin, from other members using this camera. In my last photo with the Ha filter, the ideal exposure time is 0.2 min. That’s good, I think.


#6

That sounds like a very short exposure. What is the value of the brightest part of this image?


#7

It is 65504. You are right: 0.2 min = 12 s, very short. The image is a 5-minute Ha exposure.


#8

So the image you uploaded is 5 minutes? With that value for the brightest spot you are utilizing the full dynamic range of the image. So are you proposing to go to 12 seconds? That would not give you a good dynamic range.


#9

Explanation of ideal sub-exposure length on the ASI 1600 MM:


#10

I quote from this reference “The value from the table is the shortest sub length that you can use”. He further states “you can use longer subs with no loss of signal to noise ratio”. So this table is by no means an “ideal” exposure value. As he correctly states, higher gains result in lower dynamic range.

For the best-quality images, I suggest a gain around 50-60 for maximum dynamic range, and an exposure long enough to get the bright parts of the image just below saturation. This will give your processing software the best image to work with. When you convert from a linear to a non-linear image, both of these factors will give you a leg up.
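One way to apply that “just below saturation” rule: take a short test sub, read off the peak ADU, and scale the exposure linearly. A rough sketch - the 90% headroom value and the assumption of a linear, unsaturated test sub are my own, not something from the thread:

```python
def scale_to_saturation(test_exposure_s, test_peak_adu,
                        saturation_adu=65535, headroom=0.9):
    """Scale exposure so the brightest pixel lands just below saturation.

    Assumes the sensor responds linearly and the test sub itself
    is not already saturated (otherwise the peak ADU is meaningless).
    """
    target = headroom * saturation_adu
    return test_exposure_s * target / test_peak_adu

# e.g. a 30 s test sub whose brightest pixel reads 12000 ADU:
print(scale_to_saturation(30, 12000))  # roughly 147 s
```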


#11

Above all, any image setting will work eventually - what we’re after here is increasing the efficiency of the images we take. I’m going to assume there’s no issue with tracking/polar alignment here (which may impose an upper limit on sub length).

A few things to consider:

  1. For best efficiency, you need to expose so that the background sky noise overwhelms the read noise from the camera (something like 10-20 times). Beyond this point there is little difference between exposing each sub-frame for longer or taking more sub-frames - the total exposure becomes the limiting factor.

  2. The low dynamic range argument only applies if you’re not stacking many images - if you stack enough short, low-range images, you will recover the dynamic range you need, with few issues around posterization at low signal. For NB imaging, if you find the exposure from point 1 is very long (to overcome the higher read noise at lower gain), then running at high gain is a good approach. It even works with very short, high-gain subs on broadband, if not the most efficient way of doing things - see Emil Kraaikamp’s work with thousands of 1 s exposures (watch out for data storage/processing time constraints!).

  3. The image shown contains Gamma Cas (which is bright!). At the exposure time required by point 1, you may well saturate the star core. You have two options: the first is to reduce the exposure so that it doesn’t saturate and take more images - you might need a longer total exposure to get the same result for the faint stuff. The second is to take a series of shorter exposures and then do some kind of HDR composition to cope with the brighter stuff - PI has routines for this.
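Point 1 can be turned into a back-of-the-envelope formula. A common rule of thumb from the Cloudy Nights discussions is to expose until the accumulated sky signal is roughly 10× the read noise squared. A sketch - the read-noise figures and sky rates below are illustrative assumptions, not measurements from anyone’s setup:

```python
def min_sub_length_s(read_noise_e, sky_rate_e_per_s, swamp_factor=10.0):
    """Shortest sub such that sky background >= swamp_factor * RN^2.

    Once the accumulated sky signal is ~10x the read noise squared,
    read noise contributes little, and longer subs stop improving
    SNR per unit of total integration time.
    """
    return swamp_factor * read_noise_e**2 / sky_rate_e_per_s

# Illustrative numbers only:
# unity gain (RN ~ 1.7 e-) under a broadband Bortle 5 sky -> short subs suffice
print(min_sub_length_s(read_noise_e=1.7, sky_rate_e_per_s=2.0))
# low gain (RN ~ 3.5 e-) through a narrowband filter -> tens of minutes
print(min_sub_length_s(read_noise_e=3.5, sky_rate_e_per_s=0.05))
```

Note how the narrowband/low-gain case blows up - that is exactly why higher gain gets recommended for NB imaging later in this post.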

The link above to the Cloudy Nights thread is very good. For NB exposures at low gain, the optimum under non-moonlit skies is often impractical (hence the suggestion to go with higher gains to mitigate the read-noise contribution) - you just have to choose a reasonable value for your mount and the number of images you’ll be stacking, bearing in mind the subject matter (bright clusters might need lower gain/shorter exposures or some kind of HDR process).


#12

This is not true. Stacking does not recover dynamic range - it just reduces noise. There was a very long thread on this in July. It ended with a reference containing a proof that a median combine (stacking) does not add accuracy (dynamic range). You have to have the dynamic range in the individual images; you can’t recreate low bits by averaging high bits.


#13

Here are the references from that old thread:

  1. http://www.batesville.k12.in.us/physics/APPhyNet/Measurement/Significant_Digits.html which says
    “You can’t improve the precision of an experiment by doing arithmetic with its measurements.”
  2. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4679338/ which offers analytical proof for the statement
    “Therefore, the mean value cannot be reported with a precision higher than that used in the measurement of the raw data.”

#14

I’m not reopening that discussion, but I agree that median combines do not - a median simply takes the middle value of the stack (ref: http://www.clarkvision.com/articles/image-stacking-methods/).

However, there were plenty of others in that thread who pointed out that by summing (or taking the mean - same effect) all the values for a given pixel, very low signals can be extracted from the floor, increasing the number of representable values. Yes, it decreases the noise, but this process allows more levels to be distinguished, thus increasing the effective DR of the output image. The noisy one-bit camera is a good simplified example of how addition/mean combines do work.
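The noisy one-bit camera mentioned above can be simulated in a few lines - the noise level and threshold here are arbitrary choices for illustration. Every frame records only 0 or 1, yet the mean of many frames varies smoothly with the true brightness, so intermediate levels become distinguishable:

```python
import random

random.seed(42)

def one_bit_pixel(true_level, noise_sigma=0.5):
    """A noisy 1-bit sensor: threshold (signal + noise) at 0.5."""
    return 1 if true_level + random.gauss(0, noise_sigma) >= 0.5 else 0

n_frames = 20000
means = []
for level in (0.20, 0.30, 0.40):  # brightnesses no single frame can represent
    mean = sum(one_bit_pixel(level) for _ in range(n_frames)) / n_frames
    means.append(mean)
    print(f"true level {level:.2f} -> stacked mean {mean:.3f}")

# The stacked means increase monotonically with the true level,
# even though each individual frame is just 0 or 1.
assert means[0] < means[1] < means[2]
```

Note the mechanism: it relies on the noise dithering the signal across the threshold. A noiseless one-bit camera would gain nothing from stacking, which is part of why the two sides of this debate keep talking past each other.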

(BTW: The “proof” in the second link appears flawed to me: it appears that when they add measured values together, they don’t add the uncertainties in quadrature, as they should for uncorrelated, random errors. That is the whole basis of making repeated scientific measurements to improve the signal-to-noise ratio and accuracy of a measurement…)
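The quadrature point is easy to check numerically: for uncorrelated, random errors, the uncertainty of the mean of n measurements falls as sigma/sqrt(n). A quick sketch with arbitrary values:

```python
import math
import random

random.seed(0)

true_value, sigma, n = 100.0, 5.0, 400
samples = [random.gauss(true_value, sigma) for _ in range(n)]
mean = sum(samples) / n

# Uncorrelated errors add in quadrature, so the error of the
# mean shrinks as sigma / sqrt(n):
sigma_of_mean = sigma / math.sqrt(n)  # 5 / sqrt(400) = 0.25
print(f"mean = {mean:.2f}, expected scatter ~ +/- {sigma_of_mean:.2f}")
```

So the mean of 400 readings is roughly 20× more precise than any single reading - which is the crux of the disagreement in this thread.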

I’m out.


#15

Agreed, however the accuracy cannot be improved beyond that of the original data - the improvement is asymptotic to the original accuracy. The dynamic range of the original images does matter. You can’t increase the gain, thereby reducing dynamic range, and then take lots of images to make up for the loss.


www.mainsequencesoftware.com