My Requests

I have made a few requests before, so I’d like to restate them in one message as a reminder for the developers. Some of these may have already been implemented.

  1. Enable autoguiding during plate solve frames. SGP already does this for the meridian flip routine; I’d just like to have this functionality all the time. I realize that starting and stopping the guider between plate solve frames would add some time to the overall routine, but since I typically only need two plate solve frames to get centered, it really isn’t an issue. It would also save me from having to get out of bed and sit at my computer to manually start autoguiding whenever my scope slews to a new object in the middle of the night.

  2. Give the ability to IGNORE the pixel error after dithering (basically by allowing a very high error value such as 10, which is higher than PHD reports anyway).

  3. If using a DSLR and camera lens (which is becoming much more common now), give the ability to specify an f-ratio for the lens. Ideally, this setting should be ignored (the lens left wide open) for plate solve frames, and applied (stopped down as specified) for autofocus and the actual exposures.

  4. Per a conversation with developer Jared back in June 2014, using short exposure times often results in failure. For instance, the Canon camera allows a 1/10 s exposure, yet entering an exposure of 0.1 s results in an error.

  5. Add an option to have SGP display the actual estimated clock time of the meridian flip on the flip timer, rather than just the time remaining, to avoid the need for mental math.

Thank you for your hard work on making this awesome astrophotography tool!

-Scott

Maybe I am missing something, but why would you need to manually start autoguiding when going to a new object? When SGP moves on to a new target it should auto-select a guide star and start guiding - it does for me.

I too have wondered about this. I have to assume that the error is RMS. I think the user-set “time” should start with the first guide exposure that is within the specified distance of the new tracking point. It does not seem to behave that way, but maybe I am wrong. OTOH, one rarely loses more than 30 seconds or so per exposure; given that most exposures are at least 5 minutes, the savings would be less than 10% at best.

@CCDMan -

Regarding your first question, I use ANSVR with SGP and rely on plate solving for my slews. I’m in a unique situation because I use a DSLR and do a lot of narrowband imaging from home. After exhaustive testing, the plate solve exposures that work consistently for Ha and OIII are 60 seconds, which is longer than my mount can typically track unguided. Thus, when it’s time to move to a new object (or do a meridian flip), I manually start the autoguider for each plate solve frame so that I get round stars and the plate solve succeeds.

Regarding the second, the error is the number of pixels off center as reported by PHD.

OK, I understand why you want to do that now.

Please don’t take this the wrong way, but I have to say that using narrowband filters with a DSLR, while it may be necessary for some equipment and locations, is sub-optimal. It throws away most of the sensitivity of the system by effectively stacking filters, so that half or more of the pixels receive no light at all, which accounts for the long exposure times.

That is why I am not a fan of DSLRs for deep sky astronomy, especially from sites that are not very dark and may require narrowband filters.

Just FYI… having many requests on one thread makes them very hard to track (and they will likely be lost in the shuffle to boot)

I believe this is a non-issue in SGP 2.4.

PHD1 had a limitation that it would never report an error greater than 2.55 pixels. The new server interface in PHD2 does not have that limitation. SGP 2.4 uses the new interface, so you should be able to get what you are asking for by specifying a large value like 10.
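As an aside, one possible explanation for that oddly specific 2.55-pixel ceiling (an assumption on my part, not something documented for PHD1) is that the old interface packed the error into a single byte in hundredths of a pixel. A minimal sketch of that idea:

```python
# Purely illustrative: assume (this is a guess, not confirmed from the PHD1
# source) that the old interface reported the guide error as one unsigned
# byte in hundredths of a pixel. Such an encoding would explain a hard
# ceiling of 255 / 100 = 2.55 pixels in the reported value.

def encode_error_px(error_px: float) -> int:
    """Pack a pixel error into a 0-255 byte of hundredths of a pixel."""
    return min(max(int(round(error_px * 100)), 0), 255)

def decode_error_px(value: int) -> float:
    """Unpack the byte back into pixels."""
    return value / 100.0

for err in (0.8, 2.0, 3.7, 10.0):
    print(f"{err:>5} px reported as {decode_error_px(encode_error_px(err))} px")

# 0.8 and 2.0 survive the round trip, but 3.7 and 10.0 both come back as
# 2.55: any error above the ceiling is indistinguishable from 2.55 px.
```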

Andy

@CCDMan - You do raise a valid point. Ha is the most limiting, because only one of every four pixels is red-sensitive. OIII is less of a problem because three of every four pixels are either green- or blue-sensitive. My primary use is adding that narrowband data to enhance RGB images. Take a look at my AstroBin gallery to see what I’m able to do with narrowband imaging, even from a red zone: Scott Davis's gallery - AstroBin
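To put rough numbers on the pixel fractions above, here is a small illustrative sketch; it assumes a standard RGGB Bayer pattern and the simplification that each colored pixel either fully passes a given emission line or blocks it entirely (real filter transmission curves are more nuanced):

```python
# Back-of-the-envelope sketch of how much of a Bayer sensor "sees" each
# narrowband emission line, under a simplified pass/block model.

BAYER_CELL = ["R", "G", "G", "B"]  # one 2x2 RGGB cell

# Simplified assumption about which color filters pass each line:
# Ha (~656 nm) only gets through the red filter; OIII (~501 nm) sits in the
# blue-green overlap, so both green and blue pixels respond to it.
LINE_RESPONSE = {
    "Ha": {"R"},
    "OIII": {"G", "B"},
}

for line, filters in LINE_RESPONSE.items():
    active = sum(1 for f in BAYER_CELL if f in filters)
    print(f"{line}: {active} of {len(BAYER_CELL)} pixels respond "
          f"({active / len(BAYER_CELL):.0%})")

# Prints:
#   Ha: 1 of 4 pixels respond (25%)
#   OIII: 3 of 4 pixels respond (75%)
```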

@Ken - I realize that, but I thought you might appreciate my not bumping five feature request threads and monopolizing the board. Furthermore, since I am merely re-posting feature requests from some time ago, I was hoping they had already been documented in some form, or already implemented in a beta.

@Ken - Do you want me to go ahead and post this as five separate requests? Please let me know and I will. I really don’t want my requests to get lost.

Please do. I am starting to organize these into categories and I don’t want to “bulk judge” your requests.