Sequence Aborted. Why?


#1

The sequence self-aborted after two images. I was not watching, so I have no certainty as to why. A snip of what should be the relevant part of the eventual 247 MB SGP log file is at:

http://nightskypictures.com/temp/log.txt

And the last part of the PhD2 log file is here:

http://nightskypictures.com/temp/PhD2_log.txt

Reviewing my all-sky camera images from last night, as well as my seeing monitor (see the post below for those), it appears a brief patch of clouds passed at about that time. I have a Boltwood, which did pick up the clouds, but they did not quite meet the criteria for roof closure, so that was not the cause of the abort. Does the log file support clouds as the reason? It looks that way to me, but I am not used to interpreting the logs just yet.

Arghh, the worst part is that after the very brief patch of cloud (maybe 15 minutes) passed, the seeing got super good (arcsecond levels according to the seeing monitor) and I missed it all. Bad news, as that kind of seeing happens maybe four times a year!

So (assuming I was right about the cause), is there any way to have PhD/SGP try to resume after “X” amount of time?

Even more sophisticated would be the ability to monitor a cloud sensor and pause, resume, or stop depending on conditions.


#2

Yes, if you go under ‘Tools’, ‘Options’, and ‘Sequence Settings’, you’ll find recovery mode. It’ll basically try for an hour and a half to re-acquire by re-slewing/plate-solving/focusing/settling.

My guess is you had an unlucky cloud go by and it got you.

They’re adding weather station support in 2.4… but it’ll only shut things down based on whatever your Boltwood considers ‘unsafe’. They have no plans for adding automatic recovery like some of us are used to in CCD Commander. Maybe if a few more ROR guys get in on this we can make it happen :).


#3

OK, thanks. I guess that is unchecked by default; I will turn it on.

Very little traffic on the PhD Yahoo site lately, so I am not sure I would get an answer there very soon. I suspect folks who are also SGP users are coming here instead for PhD issues, since the Yahoo Groups interface stinks so badly by comparison!

Cloud sensor output shows the brief blip of clouds:

And the Seeing Monitor shows why losing last night made me a bit unhappy (it also shows the cloud passing):

Interesting on the weather sensor. Nice idea to add. Of course, my roof is already connected directly to my cloud sensor and will close if it goes “very cloudy” (last night only just reached “cloudy”, briefly), so all it would do is shut down more gracefully.

I seem to be missing a lot of obvious (OK, some not so obvious) settings. Time to re-read the manual…


#4

It looks like PHD failed to find the guide star. A status of 4 is “Guide Star Lost”.

From the SGP Log:
[6/1/2014 11:12:08 PM] [DEBUG] [Sequence Thread] Resuming auto guiding (settling)…
[6/1/2014 11:12:08 PM] [DEBUG] [Sequence Thread] PHDA resuming…
[6/1/2014 11:12:08 PM] [DEBUG] [Sequence Thread] PHDA: Sent command (PHD_GETSTATUS)…
[6/1/2014 11:12:08 PM] [DEBUG] [Sequence Thread] PHDA: Received (4)…
[6/1/2014 11:12:08 PM] [DEBUG] [Sequence Thread] PHDA pause state is same as request, returning…
[6/1/2014 11:12:08 PM] [DEBUG] [Sequence Thread] PHDA: Sent command (PHD_GETSTATUS)…
[6/1/2014 11:12:08 PM] [DEBUG] [Sequence Thread] PHDA: Received (4)…
[6/1/2014 11:12:08 PM] [DEBUG] [Sequence Thread] Request to settle auto guider has failed. Guider reports it is not guiding…

And the PHD Log:
23:12:08.201 00.000 5124 processing socket request GETSTATUS
23:12:08.201 00.000 5124 case statement mapped state 6 to 4
23:12:08.201 00.000 5124 Sending socket response 4 (0x4)
23:12:08.201 00.000 5124 read socket command 17
23:12:08.202 00.001 5124 processing socket request GETSTATUS
23:12:08.202 00.000 5124 case statement mapped state 6 to 4
23:12:08.202 00.000 5124 Sending socket response 4 (0x4)
23:12:08.618 00.416 5124 read socket command 17
23:12:08.618 00.000 5124 processing socket request GETSTATUS
23:12:08.618 00.000 5124 case statement mapped state 6 to 4
23:12:08.618 00.000 5124 Sending socket response 4 (0x4)
23:12:12.183 03.565 4204 Exposure complete
23:12:12.217 00.034 4204 worker thread done servicing request
23:12:12.217 00.000 5124 Processing an image
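For anyone curious what that exchange is, the log shows PHD’s legacy single-byte socket protocol: SGP sends a one-byte GETSTATUS command (the “read socket command 17” lines) and PHD replies with a one-byte status code. A minimal Python sketch of such a poll, assuming PHD’s socket server on its default port 4300; note that only code 4 (“guide star lost”) is confirmed in this thread, and the other entries in the table are illustrative assumptions, not an authoritative list:

```python
import socket

# Status codes from PHD's legacy socket protocol. Code 4 ("guide star
# lost") matches the log above; the other entries are assumptions for
# illustration only.
PHD_STATUS = {
    0: "not guiding",
    3: "guiding and locked",
    4: "guide star lost",
}

GETSTATUS = 17  # command byte, per "read socket command 17" in the log


def describe_status(code):
    """Map a raw status byte to a human-readable description."""
    return PHD_STATUS.get(code, "unknown ({})".format(code))


def poll_phd_status(host="localhost", port=4300, timeout=5.0):
    """Send GETSTATUS to PHD's socket server and return the 1-byte reply."""
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.sendall(bytes([GETSTATUS]))
        return s.recv(1)[0]


if __name__ == "__main__":
    code = poll_phd_status()
    print("PHD status:", code, "->", describe_status(code))
```

Running this against a live PHD instance during the event above would have printed status 4, which is exactly why SGP reported “Guider reports it is not guiding”.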

You might want to turn on recovery mode like Chris mentioned. It would try to recover from this state for 90 minutes.

Thanks,
Jared


#5

OK, that is what I figured. It was too much of a coincidence to have had some other failure just as the clouds passed.
I have just enabled recovery mode.

Thanks!


#6

The exact same thing happened to me last night: a small patch of cloud aborted my sequence (I missed all my blue). But here is my concern about recovery mode. If it clouds over just before a meridian flip is due, will the mount continue to track for another 90 minutes? If so, I would have a pier crash. Or is there a fail-safe built in that will force a shutdown to avoid a pier crash?
…Keith


#7

SGP has no way of knowing what is a safe condition for your equipment. Therefore we can’t make decisions based on safety. I would recommend setting up limits via your mount driver to avoid a pier crash.

So in that case your mount would hit the limit and stop tracking. Recovery mode would continue to fail for 90 minutes, after which it would shut things down.

Recovery mode can be risky for the reasons you mentioned. You have to weigh the benefits (getting more data) against the potential negatives (a pier crash, etc.).

Thanks,
Jared


#8

Indeed. Most mounts have safe limits in software or firmware, often configurable. I know mine also has a hard stop. If it hits the hard stop for any reason it will stop trying to move (and beep annoyingly at you until you deal with it).

Other recovery mode issues can be roof strikes, cable tangles, etc. To be honest, if a system is set up properly mechanically, those issues should be minimized or eliminated. Software safety is one thing; it is much better to make accidents mechanically impossible in the first place.

An example (not SGP related) is the way I decided to do my roof motor setup. There is one micro-switch for normal stops, a second set to cut off power from the power supply, and a third to cut power to the power supply itself. If all of those fail, the gearing is set up so that it will simply run out of gear and spin instead of running the roof off the track. Belt and suspenders is always best!


#9

Thanks for the confirmation that this is a potential problem. Unfortunately, I don’t think my Mach1GTO has mount limits. However, I’m not sure why it’s such a problem to build a fail-safe into SGP to avoid a pier crash. Doesn’t SGP know when it will hit the meridian (it’s counting down “Time to”)? Wouldn’t it be relatively straightforward to have an option to truncate Recovery Mode when it hits the meridian as a fail-safe?
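In principle the truncation could be as simple as capping the recovery window at the time remaining before the meridian. A hypothetical sketch, not SGP’s actual implementation (the function and parameter names here are made up):

```python
def recovery_window(time_to_meridian_s, max_recovery_s=90 * 60, margin_s=300):
    """Cap the recovery window so the mount never tracks past the meridian.

    time_to_meridian_s: seconds until the target crosses the meridian
    max_recovery_s:     the normal 90-minute recovery window
    margin_s:           safety margin so tracking stops before the limit
    """
    safe_s = time_to_meridian_s - margin_s
    return max(0, min(max_recovery_s, safe_s))


# Plenty of time left: the full 90-minute window applies.
print(recovery_window(4 * 3600))  # 5400 seconds
# Only 20 minutes to the meridian: recovery is truncated to 15 minutes.
print(recovery_window(20 * 60))   # 900 seconds
```

With something like this, recovery would simply give up (and shut down) early whenever the meridian arrives before the 90 minutes are up.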
…Keith


#10

Trying to figure out why the sequence aborted. This has happened twice now. Is it a PHD problem? Also, I have notifications working, so why didn’t I get an email/text message? I know notifications are working right. I think it all went south around 3:20 am.

Logs… I broke the PHD logs into two to save space (the beginning and end of the problem).

Thanks,
Dennis


Confused Why Sequence Halted Without Attempting Recovery
#11

Ya… there looks to be a timing-related bug here. It has been addressed in a 2.5.2 beta (not yet released). We will discuss whether it needs a maintenance release.

No good reason; you just found an error path that was neglected. This has also been corrected.
