Question about downloading and dithering


I think I remember a thread that talked about why SGP could not begin the dither and settle process while the image was downloading, but I can’t seem to find it. Does anyone know why the dither command can’t be sent once the image acquisition is complete, so the dithering and settling can take place while the image is downloading? My recollection was that ASCOM does not distinguish acquisition from download, but that can’t be right, because SGP clearly knows when acquisition is complete and when download begins, as indicated by the messages on the status bar.




No, we actually don’t know when the download is happening. What you’re seeing on the UI is more of a guess than anything. For instance, if the exposure is 30 seconds, we will say “integrating” for 30 seconds and then “downloading” until the camera reports done. But we don’t actually know what is happening, just that the image isn’t ready. So the camera could have taken 20 seconds to actually start the exposure and we are left in the dark.

Maybe we shouldn’t be distinguishing between those phases, as it’s really just a guess.



Thanks for the explanation. Yes - I had asked about this earlier and heard the explanation that ascom doesn’t distinguish the download time. So when Tim pointed out the ui message about downloading in this thread - I was really puzzled.

That’s too bad if ascom doesn’t distinguish the exact time of image completion. That is important and useful info.



Thanks, Jared. That makes a lot more sense.



Not that it’s worth much in a “universal solution”… we don’t typically write sequence code for specific hardware, but we do actually know when SBIG, FLI, Canon and Nikon are downloading (vs Integrating). Anything that goes through ASCOM (most of our users), we cannot distinguish.


ASCOM has a CameraState property that can be used to tell what state the camera is in (Exposing, Download, Idle, etc.), but it’s optional, and not all camera drivers will implement it, or be able to implement it. I expect that SGP waits for ImageReady to be true. SGP could poll CameraState and use that to report what the camera is up to, but I wonder if it’s worth it.

The exposure time is available in the LastExposureTime property.
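For what it’s worth, a rough sketch of what polling CameraState could look like, using a mock camera object in place of a real ASCOM driver. The CameraStates values are the ones from the ASCOM Camera spec; the MockCamera and the polling helper are purely hypothetical:

```python
import enum
import time

# CameraStates values per the ASCOM Camera interface spec
class CameraStates(enum.IntEnum):
    cameraIdle = 0
    cameraWaiting = 1
    cameraExposing = 2
    cameraReading = 3
    cameraDownload = 4
    cameraError = 5

class MockCamera:
    """Stand-in for an ASCOM camera driver; a real client would hold
    a COM or Alpaca Camera object instead."""
    def __init__(self, exposure_s=0.1, download_s=0.1):
        self._t0 = None
        self._exposure_s = exposure_s
        self._download_s = download_s

    def StartExposure(self, duration, light):
        self._t0 = time.monotonic()

    @property
    def CameraState(self):
        elapsed = time.monotonic() - self._t0
        if elapsed < self._exposure_s:
            return CameraStates.cameraExposing
        if elapsed < self._exposure_s + self._download_s:
            return CameraStates.cameraDownload
        return CameraStates.cameraIdle

    @property
    def ImageReady(self):
        return self.CameraState == CameraStates.cameraIdle

def wait_with_state(cam, poll=0.01):
    """Poll CameraState while waiting for ImageReady, so a caller
    could start a dither as soon as the state leaves Exposing
    instead of waiting for the full download."""
    seen = []
    while not cam.ImageReady:
        state = cam.CameraState
        if state not in seen:
            seen.append(state)
        time.sleep(poll)
    return seen

cam = MockCamera()
cam.StartExposure(0.1, True)
states = wait_with_state(cam)
# states records the observed phases: exposing, then downloading
```

The point is that the transition out of cameraExposing is visible well before ImageReady goes true, which is exactly the window where a dither could run.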



Thanks, Chris. That’s interesting. It may be a good opportunity for a future release to use this property when available. My camera takes over 20 seconds to download. When dithering is thrown into the mix, it often takes more than 30 seconds per exposure to download and then dither. When I’m using Hyperstar with exposures of 30 seconds or sometimes less, the majority of my “imaging time” is not spent imaging.



Thanks, Chris for that info. I have an Atik 383L+. Is there a way to tell if it supports CameraState or not - without actually trying to query it from code?

I looked at the ASCOM docs, and I can’t even tell which properties are optional and which are not. How is that flagged in the ASCOM documentation? The documentation says CameraState must throw an exception if it is not available. Does that mean it’s optional and perfectly OK not to be supported by the driver?

Supporting this in SGP for concurrent activity during a download would definitely be “worth it” to me. It would probably remove about 15 seconds of delay between images, which is particularly important when the exposures are short. You always want to minimize dead time where photons could be collected but the entire system is just sitting there idle.

In terms of whether it is worth the effort for SGP to query CameraState, we would need to know how many ASCOM drivers actually support it. Is there a way to know this, so that developers can scope the potential payoff to users? If it weren’t optional, they would all support it.


[Edit] OK, now I see that required methods and properties have big red text saying they must be implemented. Optional ones do not. So yes, this is optional, and if the ASCOM driver doesn’t support it, you can’t do this even if the camera could do it natively.
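A minimal sketch of how a client could probe the optional property once at connect time. Both camera classes below are mocks standing in for real drivers, and the exception type is simplified (a real ASCOM driver throws ASCOM.PropertyNotImplementedException, which from Python just surfaces as an exception):

```python
class ImplementedCamera:
    """Mock driver that supports the optional property."""
    CameraState = 0  # cameraIdle

class UnimplementedCamera:
    """Mock driver that omits CameraState; stands in for a driver
    raising ASCOM.PropertyNotImplementedException."""
    @property
    def CameraState(self):
        raise NotImplementedError("CameraState is not implemented")

def supports_camera_state(cam):
    """Probe once at connect time and cache the answer, rather than
    catching exceptions on every poll during a sequence."""
    try:
        cam.CameraState
        return True
    except Exception:
        return False
```

An application could then fall back to the plain ImageReady wait whenever the probe returns False.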

So the question remains as to how many ASCOM drivers support this property. If none or few of them do, then it is definitely not worth it for SGP to include code to handle it.


OK, I connected to my Atik camera and used code to query the camera state. It did not throw an exception, and it reported idle, so presumably SGP would be able to know when the exposure is finished.

I don’t know about the other ascom drivers currently in use by sgp customers - but I would certainly value being able to dither during download. As I recall - when I requested this some time ago the response was that it would have been done long ago if it were possible. So - since it appears to be possible - at least for some cameras - please reconsider this as a feature request.



I agree with Frank. If this can be implemented, it would be a huge benefit for me too.



Ken and I discussed this. And while it would be beneficial it would likely result in some pretty bad regressions at the moment. So the risk/reward just isn’t worth it right now. We may look into this in the future or make smaller movements toward making it less of an undertaking.

Currently we dither before every image. This is so that we know we’re taking a light sub rather than a dark, meridian flip, auto focus, etc. We would need to move the dither back to the end of the frame and then make the decision to dither there. It gets more complicated, as we’d now be doing this in parallel with the camera in a “not ready” state, so operations that would normally run (plate solving, AF, etc.) would need to check this new state. Also, for certain SBIG cameras that are guiding through the internal guider, we’d have to make special accommodations, since the guide chip couldn’t be accessed to perform a dither.
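Just to illustrate the overlap idea itself (this is not how SGP is actually structured), a toy sketch where a hypothetical download and dither run in parallel, and everything downstream has to wait for both before it can touch the image or the guider:

```python
import threading
import time

def end_of_frame_dither(download, dither):
    """Hypothetical sketch: once the shutter closes, run the camera
    download and the guider dither concurrently, returning only when
    both are done. Anything that needs the image (plate solve, AF)
    or the guider must block on this."""
    threads = [threading.Thread(target=download),
               threading.Thread(target=dither)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

events = []

def fake_download():
    time.sleep(0.2)          # pretend 0.2 s download
    events.append("downloaded")

def fake_dither():
    time.sleep(0.2)          # pretend 0.2 s dither + settle
    events.append("dithered")

start = time.monotonic()
end_of_frame_dither(fake_download, fake_dither)
elapsed = time.monotonic() - start
# elapsed is roughly 0.2 s rather than 0.4 s, because the phases overlap
```

The hard part Jared describes isn’t the threading itself but everything that now has to be aware of the combined “downloading and dithering” state.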

Because we cannot guarantee the consistency of implementation across devices, we will probably not be looking into this in the near future. It would be a fairly significant change to the sequencing engine (effectively undoing all of our effort to move the dither to the beginning of a frame capture so it can be “smarter” in making decisions about whether or not to dither).

In addition to this, it would require (effectively) two different dithering systems to maintain (and risk of regression). One system for cameras that report state properly and the other for cameras that don’t.

If CameraState were part of ASCOM conformance (not even sure if this is possible… just making a point), and we had some ground to stand on when we talk to driver authors and say “Your driver should do this, but it does not, and, as a result, it behaves this way”, then we would consider making the effort. Until then, we will likely stay with the more universally safe method (at the cost of time). Whether or not ASCOM can enforce this effectively is not for us to say…



CameraState is implemented in the Atik camera driver.

The problem is that ASCOM can’t control what the low level camera driver makes available and if there is nothing that can be used to implement CameraState we are stuck. Better not to implement it than implement it incorrectly.

CameraState should throw the ASCOM.PropertyNotImplementedException if it’s not implemented.

Jared, if you possibly can, please don’t add additional requirements over what ASCOM specifies. It really confuses things when a perfectly conformant ASCOM driver won’t work with SGP.



First off, I truly appreciate the consideration. If you can make progress toward implementing the camera states, that would be terrific. If it is too complex, I understand. Regarding the above, and taking into account that I may not fully understand the situation, I’m not sure I see the difficulty with actions such as plate solving and AF. Both require a completely downloaded image, so I’m not sure why they would need to check the new state. If Chris is correct and SGP currently checks ImageReady for PS and AF, then that would not need to change, since neither function can do anything until it has a ready image.

At any rate, I respect your decision not to pursue this now, but hope that you will keep this in the back of your mind as an eventual possibility. Particularly for people who do shorter exposure imaging, a change like this could represent a substantial improvement in imaging efficiency. If I’m using Hyperstar and taking 30-second subs, I can probably save 15 seconds per sub. That’s 50% more photons collected in an imaging session.
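The arithmetic behind that estimate, assuming 30-second subs and roughly 15 seconds of dead time per frame:

```python
# Frames per hour with and without the per-frame dead time
sub_s = 30    # exposure length
dead_s = 15   # download + dither overhead per frame

subs_per_hour_now = 3600 / (sub_s + dead_s)    # 80 frames/hour
subs_per_hour_overlapped = 3600 / sub_s        # 120 frames/hour

gain = subs_per_hour_overlapped / subs_per_hour_now  # 1.5x, i.e. 50% more
```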



Here is a possibility that may or may not make any sense for you folks with this issue, and might even be useful for others that take longer exposures.
Why not support only doing the dither every 3rd or 4th or 6th image? If you are taking 20 or more images, I don’t think you would see any difference in the stacked image. This would reduce the dither time significantly.
And I think would be very easy to implement.
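The logic would be about as simple as it sounds; a sketch of a hypothetical dither-every-Nth-frame check:

```python
def should_dither(frame_number, every_n):
    """Dither only after every Nth completed light frame
    (1-based frame count). A minimal sketch of the
    'dither every 3rd or 4th image' idea."""
    return frame_number % every_n == 0

dither_frames = [n for n in range(1, 13) if should_dither(n, 4)]
# over 12 frames with every_n=4, dithers land after frames 4, 8 and 12
```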


I think this dither optimization request would be a bigger win, reducing the number of dithers, yet still guaranteeing a dither between every sub of the same filter.
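A sketch of one way that filter-aware rule could work (hypothetical, not SGP’s actual logic): walk the planned subs in order and insert a dither only when the next sub would repeat a filter already shot since the last dither. With an L/R/G/B rotation this gives one dither per rotation, yet any two subs of the same filter are still separated by a dither.

```python
def plan_with_filter_aware_dithers(filter_sequence):
    """Insert 'dither' markers into a planned filter sequence so that
    no filter repeats between consecutive dithers."""
    shot_since_dither = set()
    plan = []
    for f in filter_sequence:
        if f in shot_since_dither:
            plan.append("dither")
            shot_since_dither.clear()
        plan.append(f)
        shot_since_dither.add(f)
    return plan

plan = plan_with_filter_aware_dithers(["L", "R", "G", "B", "L", "R"])
# -> ["L", "R", "G", "B", "dither", "L", "R"]
```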



The less you dither the more the noise starts to stack up compared to the signal. I can’t tell you how big of an issue this is…but I can tell you it’s not ideal. More randomization is better. I’d want to see the data behind not dithering as much to see if it stacks up (pun totally intended). Sure it will save you time at the scope, but at the expense of your data that you just spent hours acquiring.

Now (as Andy mentions), if you’re rotating filters then there is a benefit to not dithering between filters, as you won’t be stacking different filter data together to eliminate noise. However, as with all things in automation, this is not as easy as it seems on the surface. Sure, the basic implementation is not that bad: just pick the first event and dither there. But then you do a meridian flip and someone gets mad because it dithered on the frame right after the flip, and they feel it would have been more ideal to move the dither to the filter where the flip happened.

There are a fair number of things we can do to better optimize the workflow engine in SGP. But every one of them opens us up to a decent amount of regression. Plus, everyone’s system is different, so accounting for a near infinity of cases can get quite difficult. What may optimize for one could hurt another. So we then end up with a ton of check boxes, at which point you have to be an expert in “SGP programming” to use our software, and that’s good for no one.



I guess I will have to show my ignorance here. I have always wondered why dithering is so important to everyone. I must be missing something important here.

Seems to me that if all of your frames are lined up perfectly, i.e. not off from one another by a single pixel, then dithering is important. This is certainly not the case with my polar alignment and my mount. My images on my longest focal length RC12 at 1950mm have a few pixels of drift between them. Does this not produce the same effect as dithering? I have never dithered and have not noticed any problems. I should probably give it a try and see if it makes any difference to my images.

Please let me know where I am going wrong on this. Anything to improve my images.


Your idea seems like the ideal way to minimize the dithering, assuming you are rotating through the filters.
However, I usually do something like the following:
L 5
R 5
G 5
B 5
where this sequence is repeated several times.
I generally do this to minimize the frequency of filter changes.
For those doing exposures of 30 seconds or less, they may also want to minimize filter changes.
In this case, the dither every x frames works well.


I guess that depends a lot on how you image. If you’re using full calibration frames (darks, flats and bias) then dithering is likely less important as you’re subtracting out most of the junk. However there is still random read noise that you cannot subtract out with calibration frames and that’s what dithering helps with. Also you could argue that darks and bias are less useful when dithering as the stacking of dithered images should help to remove this.

If you have a couple pixels difference between frames then you are, in fact, dithering whether you like it or not. You really only need to shift a pixel or two in order to have an effective dither. With my setup I’ll have hot pixels on top of each other for multiple frames if I don’t dither. My imaging scale is about 2 arcseconds/pixel and I have a guide error less than 0.5 pixels, so things stay pretty much where they’re put on my setup.

One thing that the LRGB rotation does not take into account is that you generally want to shoot Lum and Blue pretty high up to avoid atmospheric refraction.



I can sympathize with the reluctance to implement this - particularly if there is no way to know how many of the cameras implement this optional feature in their ascom driver.

I think the other imaging applications that allow mount movement during download - themselves rely on native camera drivers and not ascom. So a simple question is to ask if any of the imaging applications allow mount movement during image download - while driving the camera via ascom. It may be that if sgp allowed this with an ascom camera - it would be the first to do so.

It’s hard to know for sure and speculation is required, just as it is when wondering whether a camera driver supports CameraState or not, and whether or not it’s worth writing code for it.

As for dithering: for my Atik 383L+, if I could dither during download then I would lose almost no time at all, and I would do it every frame, period. And even if you dithered only after filter changes, you wouldn’t see a benefit if you were doing pure Ha, or one filter at a time.

I guess one solution is just to add more native drivers, like other imaging apps do. That way you know from the SDK exactly what is supported and how best to operate the camera. Why else would they provide both native driver support and ASCOM?