>999 images

I almost always have >999 images per target, living in suburbia under light-polluted skies. It would be great if SGP allowed me to specify over 1000 total lights. As it is, I have to reset the sequence as 999 approaches, and that breaks the continuity in time for subsequent processing.

Thanks,
Gerrit

I suppose this means you are using very short exposures - like a few seconds? At the risk of starting a big fight, using short exposures is not helping with light-polluted skies. I suggest increasing your exposure time until the brightest parts of the image are just below saturation. Then you will need fewer exposures and get better results.

Seems to me this is already supported if you put multiple entries in for each filter. That way you can have 999 for each entry. I usually make multiple entries per filter just to minimize the frequency of filter changes.
Example:
L t5
L t5
L t5
R t5
R t5
R t5
etc.

I’m using an OSC (no filters) with 60 second exposures. I need lots of integration time to overcome the skyfog noise. 1000 subs is 16.7 hr raw subs, which is not unusual. Some of us poor light-polluted souls integrate to 20, 40, even 100 hr.

@jmacon – I’m not sure I could use multiple events per target and get consecutive file numbering, but I’ll look into it. That might work.

Stacking lots of images reduces noise but not light pollution. You would get better dynamic range with longer exposures (near saturation).

Longer exposures for me reduce dynamic range, since the histogram hump gets pushed up well off the left side. 60 s gives me the best dynamic range I can get, and even at that there are many stars which saturate. Longer total integration time reduces the noise from light pollution, and what is left behind is a gradient representing the effect of light pollution (plus moonlight, etc.). The gradient can be removed in post-processing.

I think you mean the histogram moves to the right with longer exposures? In any case, that is what you want: get the histogram as far to the right as you can without clipping. Much of the detail (i.e. dynamic range) will be in the trailing tail to the left of the peak.

I understand that the histogram’s peak represents the sky background and all the detail will be to the right of that peak.


Unless you have very strong light pollution, the peak should represent the bright part of your target image. Most of the detail will be to the left of the peak, in the tail. If the peak of your histogram really is from light pollution, I suggest finding another location, because the image is being overwhelmed by the background.
The right of the peak is important but contains only the top few bits of the image. Avoid clipping the right (bright) part of the image.

Yes, that’s what I meant by pushing the hump off the left side: it moves to the right.

Stars will clip with any exposure over about 3 s, so you’ll have clipping almost regardless of where you put the hump. Putting the hump just off the left side separates it from read noise and gives you the best dynamic range you can get with your sky noise level.

This doesn’t happen. Image signal can be fainter than, stronger than, or superimposed on the sky level represented by the hump, and it adds linearly with time while the noise adds as the square root. The image builds faster with integration than the noise does, and will “come up” out of the noise, regardless of where the hump is.
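A quick back-of-envelope sketch of that scaling (all per-sub numbers below are invented purely for illustration, not measured from any real camera):

```python
import math

# Hypothetical per-sub counts in electrons (illustrative only):
target_rate = 5.0    # faint target signal collected per 60 s sub
sky_rate = 500.0     # sky background collected per 60 s sub

for n_subs in (1, 100, 1000):
    signal = n_subs * target_rate           # signal adds linearly with time
    noise = math.sqrt(n_subs * sky_rate)    # shot noise grows as the square root
    print(f"{n_subs:5d} subs: SNR = {signal / noise:.2f}")
```

SNR improves by sqrt(10) for every 10x more subs, so the target does climb out of the sky noise even though the histogram hump never moves.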

Check out the awesome images people are capturing from high Bortle skies. I’d call myself a novice, but this is my latest, with 18 hr integration time (and 60 s subs) from my Bortle 7 back yard. It just takes a long time to overcome all the skyfog noise, and extra processing to flatten out the wonky gradient which the light pollution produces.

It is true that stacking reduces random noise by the square root of the number of images. However, light pollution is not random noise, so stacking does not reduce it. Sky noise, as you call it, is not the same as light pollution. Light pollution is a relatively constant value being added to your image, not noise. Think of light pollution as someone shining a dim flashlight at your scope. It’s not random noise; it’s just extra light being added to the desired image, and stacking does not remove it.

There is a trade-off between dynamic range and noise. Lots of short exposures reduce the noise at the expense of dynamic range. The trick is to get the trade-off in balance. It does no good to reduce the noise below the dynamic range threshold. Conversely, having great dynamic range with lots of noise is not great either. However, some noise can be controlled in post-processing, allowing one to extract low-level signals if the dynamic range is high enough.

Short exposures are likely clipping your signal on the low end (signal and light pollution). You might have better results by using longer exposures and controlling the light pollution with sophisticated masking and/or curves during post-processing. You can get respectable images by clipping the low end with short exposures, but if you want great images you need that low-level data.

Using my Hyperstar, which is something like f/2, I routinely expose for 5 or 6 MINUTES, so I seriously doubt you are clipping with 3 second exposures. If the histogram hump is to the far right with no visible right-side clipping, then that is where you want to be.

This isn’t correct. The physics of image data acquisition means that the light pollution signal will have noise, just as the light from the sky has noise. All light sources follow the Poisson distribution, where the variance equals the mean, so for large signals the noise (standard deviation) is the square root of the signal.
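To see the variance-equals-mean property concretely, here is a tiny stdlib-only simulation (Knuth's Poisson sampler; the sky level of 100 e- per pixel is just an assumed number):

```python
import math
import random
import statistics

def poisson_sample(lam, rng):
    # Knuth's algorithm: fine for modest lambda values
    L = math.exp(-lam)
    k, p = 0, 1.0
    while p > L:
        k += 1
        p *= rng.random()
    return k - 1

rng = random.Random(1)
sky_level = 100.0  # assumed mean sky photons per pixel per sub
samples = [poisson_sample(sky_level, rng) for _ in range(20000)]

mean = statistics.fmean(samples)
var = statistics.pvariance(samples)
print(f"mean = {mean:.1f}, variance = {var:.1f}, noise = {math.sqrt(var):.1f}")
```

The variance comes out approximately equal to the mean, so a 100 e- sky background carries roughly sqrt(100) = 10 e- of shot noise per sub, exactly as described above.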

Given a long enough total exposure time the light pollution noise can be reduced enough that a subtraction of a constant will remove the average pollution value.

Any noise in the light pollution is a very small component of the total. The bulk of the light pollution is not reduced by stacking. Having a better dynamic range gives you more to work with when post processing.

This is wrong. Every photon source your camera picks up has a “signal” and noise component to it. The noise comes from the Poisson distribution of the photon stream, as @Chris points out. The “signal” for LP amounts to an image gradient, and the noise is the square root of the signal. That is a large noise component when your LP is high. Stacking DOES reduce LP noise, leaving the LP gradient as the residual.
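A hedged sketch of that claim (the levels are invented, and the Poisson noise is approximated as Gaussian, which is fair at these counts): stacking more subs shrinks the LP noise, so after subtracting the steady LP level only a small residual remains.

```python
import random
import statistics

rng = random.Random(42)
lp_level = 400.0               # assumed steady light-pollution signal per sub (e-)
lp_noise = lp_level ** 0.5     # its shot noise per sub, sqrt of the mean

def stacked_pixel(n_subs):
    # average of n_subs noisy measurements of the same pixel
    return statistics.fmean(rng.gauss(lp_level, lp_noise) for _ in range(n_subs))

results = {}
for n in (1, 100, 1000):
    # residual after subtracting the known LP level from the stacked value
    resid = [stacked_pixel(n) - lp_level for _ in range(200)]
    results[n] = statistics.pstdev(resid)
    print(f"{n:5d} subs: residual noise = {results[n]:.2f} e-")
```

The residual noise falls roughly as 1/sqrt(N), while the 400 e- pedestal itself is untouched by stacking and has to be removed by subtraction (or gradient removal) in post-processing.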

Take a look at the pixel values of brighter stars in your raw images and compare them with the maximum A/D output of your camera. I’ve done that with a DSLR and a ZWO ASI294MC and found saturation on the brightest stars even at 3 s exposure.

Looks like you DID start a big fight here, just like you predicted in your first post. :grin: I gotta go now.

Light pollution has a small noise component riding on top of a large mostly steady state signal. In general, lots of short exposures is not an effective way to manage light pollution.

It doesn’t matter how many times you repeat this, you are wrong every time.

I’m not trying to convince you, I’m hoping to persuade others that this is not correct.

@Chris You are not the only person who believes that you can remove light pollution with many exposures, but I don’t think the science supports that position. Can you reference some authoritative source that says that light pollution is largely composed of noise, so that many exposures remove it?

I refer you to this article (https://www.skyandtelescope.com/astronomy-resources/astrophotography-tips/remove-light-pollution-astro-images/) on removing light pollution, which says “we need to expose long enough to get the faintest detail up out of the noise of the camera”. It then describes the usual way to remove light pollution via subtraction. If you search for removing light pollution from astro images, most of the results fall into one of two categories: using filters at capture time, or subtracting it in post-processing. Subtraction and filters can be used to remove light pollution because it is relatively constant. If it were random noise, subtraction and filtering would not work.

Next beta will support 9999 images per event.

Thanks,
Jared


Thanks a 9999, Jared! :grin:
