Event Sequencing Proposal


There have been various requests to be able to control the order of events on a finer scale. The current Target/Event model is problematic because of the way events are grouped into targets. A target has a Ra/Dec position, but Darks and Bias frames don’t use the position, and Flats will likely use a different position. Also, the event start/stop times and the desired altitude constraints only apply to light events. Lastly, dark and bias frames are rarely taken at the same time as the light or flat frames and are not associated with a particular target. If you think about it, the current Target/Event system is really a mount positioning operation followed by some exposures.

Consequently, the proposal is to have three types of events and no targets. The events would be position events, exposure events, and grouping events. All events would have a run flag.

Position events would have Ra/Dec, Rotate to, Slew to, Center, and a name. Position events could specify the Park position as an alternative to Ra/Dec. (Name could be optional?)

Exposure events would be the type (light, flat, dark, and bias), exposure time, binning, repetition count, file suffix and filter (for only light and flat).

Grouping events would have one or more sub-events (position, exposure or group). A group would specify sequential or rotating execution. Rotation groups would specify how many exposures to do on each round. A group may have optional constraints (start/stop times & altitude limits) and an optional name. (Name could be required?)

Like the current implementation, events would be executed in the order they appear with the exception that group rotation parameters would be applied and any time or altitude constraints would cause events to be delayed until the constraint was satisfied.
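To make the proposal concrete, the three event types described above can be sketched as plain data records. The following Python sketch is purely illustrative; every class and field name is a hypothetical stand-in for the parameters listed in the proposal, not SGP's actual implementation.

```python
from dataclasses import dataclass
from typing import List, Optional, Union

@dataclass
class PositionEvent:
    # Ra/Dec of None could mean "use the park position" (hypothetical convention)
    ra_dec: Optional[tuple] = None
    rotate_to: Optional[float] = None   # rotator angle in degrees
    slew_to: bool = True
    center: bool = False
    name: Optional[str] = None          # proposal leaves optionality open
    run: bool = True

@dataclass
class ExposureEvent:
    frame_type: str                     # "light", "flat", "dark", or "bias"
    exposure_time: float                # seconds
    binning: int = 1
    repeat: int = 1
    suffix: str = ""
    filter_name: Optional[str] = None   # only meaningful for light and flat
    run: bool = True

@dataclass
class GroupEvent:
    # sub-events may be positions, exposures, or nested groups
    children: List[Union[PositionEvent, ExposureEvent, "GroupEvent"]]
    rotating: bool = False              # sequential execution if False
    per_round: int = 1                  # exposures per pass when rotating
    start_time: Optional[str] = None    # optional constraints
    stop_time: Optional[str] = None
    min_altitude: Optional[float] = None
    name: Optional[str] = None
    run: bool = True
```

The nesting of groups is what gives the model its flexibility: a constraint stated once on a group applies to everything inside it.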

The difference between this proposal and the current method is that you have direct control over the sequence of events, parameters are factored into related events, and you get the added benefit of group events.

Consider several scenarios:

  1. A simple target - One position event and one or more exposure events
  2. One target and rotate through the filters – One position event followed by a rotating group which would contain exposure events for each filter.
  3. A constrained target - A group event with a constraint which contains a position event followed by one or more exposure events.
  4. A constrained target with rotating filters - A group event with a constraint which contains a position event followed by a rotation group event which contains exposure events.
  5. Expose luminance followed by rotation of RGB filters - A position event followed by an exposure event for luminance followed by a rotation group which contains the RGB exposure events.
  6. Rotate through targets doing a few exposures for each - A group rotation event which contains a position event for the first target, then exposure events for the first target, followed by another position event and exposure events for the second target.
  7. Expose a target then do flats – A position event for the target followed by target exposures. Then a position event for the flats followed by flat exposure events.
  8. Expose some dark and bias frames – Exposure events for darks followed by exposure events for bias. Notice there is no position event.
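As one worked illustration, scenario 4 (a constrained target with rotating filters) could be written out as nested data. This is only a sketch of the proposed structure; every key name here is hypothetical.

```python
# Scenario 4: a group with an altitude constraint containing a position
# event and a rotating sub-group of per-filter exposure events.
sequence = [
    {"type": "group",
     "name": "M51-session",
     "constraints": {"min_altitude": 30.0},
     "children": [
         {"type": "position", "name": "M51", "ra": 13.497, "dec": 47.195},
         {"type": "group", "rotating": True, "per_round": 1,
          "children": [
              {"type": "exposure", "frame": "light", "filter": "R",
               "seconds": 300, "repeat": 10},
              {"type": "exposure", "frame": "light", "filter": "G",
               "seconds": 300, "repeat": 10},
              {"type": "exposure", "frame": "light", "filter": "B",
               "seconds": 300, "repeat": 10},
          ]},
     ]},
]
```

Note that the constraint appears exactly once, on the outer group, yet governs the position event and all exposures inside it.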

There is a question of how to name exposure files. The current system can include the Target Name, which does not exist in this proposal. In its place, a name can be constructed from the names of the enclosing groups followed by the name of the last executed position event.
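The naming rule just described could be implemented along these lines. This is a sketch of one possible scheme, assuming underscore-joined parts; the function name and arguments are invented for illustration.

```python
from typing import List, Optional

def build_file_name(group_names: List[str],
                    last_position_name: Optional[str],
                    frame_type: str,
                    filter_name: Optional[str] = None) -> str:
    """Construct an exposure file name from the enclosing group names
    followed by the most recently executed position event's name, as the
    proposal suggests. Unnamed groups (empty strings) are skipped."""
    parts = [n for n in group_names if n]
    if last_position_name:
        parts.append(last_position_name)
    parts.append(frame_type)
    if filter_name:
        parts.append(filter_name)
    return "_".join(parts)
```

For darks and biases there may be no position event at all, in which case the name simply falls back to the group names and frame type.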

It would be handy to add some sanity warnings for sequences. For example, a light exposure event with no preceding position event is suspect. Note, however, that such a sequence is quite legal and can be run if that is what the user wants.

There is a backward compatibility issue with old sequences; however, the mapping from the current method to this proposed method is straightforward and could be done at the time an old sequence is loaded or by a stand-alone conversion tool. Lastly, the Framing and Mosaic Wizard would need to change.

In summary, what is being proposed is to eliminate the target/event model and substitute three types of events, which refactor the current targets/events to collect related parameters together and allow flexible use of constraints and event sequences.


There is a lot to consider here. I will let our developers chew on it a bit.

I have noted that flats taken on a wall or a fixed flat box don’t have a sky RA/Dec. They might have a park position or an Alt/Az position. Likewise, the rotator position is important if you have one. You might want to pair them to a target using the same rotation.
I see how this could be refined or made a little clearer.

Bias and Darks.
It would certainly be nice to have a way to take calibration frames only. (For box flats I don’t even open my dome anymore.)


I think the proposal encompasses your comments. A position event can have Ra/Dec or the park position. The rotator position is included in the position event. So either of the following could be done.

  1. Position the mount and rotator followed by light exposures followed by flat exposures.
  2. Position the mount and rotator for lights followed by light exposures. Then position the mount for flats with the same rotator position as the lights followed by exposures for flats.

The big advantage of this proposal is that positioning events are separate, so you can control when they occur relative to exposures. The grouping events then allow fine-grained sequence control with loops and constraints.
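The rotating-group behavior mentioned in the proposal (take N exposures from each sub-event per pass, cycling until all repeat counts are exhausted) can be sketched as a small round-robin loop. This is an illustrative sketch only; the function name, the `repeat` key, and the `shoot` callback are all hypothetical.

```python
from collections import deque

def run_rotating_group(exposure_events, per_round, shoot):
    """Round-robin through exposure events, taking up to `per_round`
    frames from each event on every pass, until every event's repeat
    count is exhausted (a sketch of the proposed rotating-group
    semantics; `shoot` is a stand-in for taking one exposure)."""
    remaining = deque((ev, ev["repeat"]) for ev in exposure_events)
    while remaining:
        ev, left = remaining.popleft()
        n = min(per_round, left)
        for _ in range(n):
            shoot(ev)
        if left - n > 0:
            remaining.append((ev, left - n))
```

This is the pattern behind scenarios like rotating through filters or mosaic tiles: each pass guarantees some data for every sub-event instead of finishing one before starting the next.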


One small addition to the proposal: the current Target/Event scheme has a time delay before starting and a delay between events. These delays should be added to the Group Event.


I’ve had a rough day and I’m pretty tired, so I admit that I haven’t really given your proposal the thought it deserves. However, my initial reaction is “No”. I feel that the current system works, many of us understand it, and many others are just learning it. A major change in workflow would cause a certain amount of initial confusion and would take the developers away from some of the great improvements - like the focusing - that they are currently doing and looking into.

Upon further reflection, I may agree with you when I’ve looked at it more carefully. Your proposal certainly deserves more than a gut reaction. Still, I think we have to be careful that we don’t change the main workflow without very substantial benefits.


You said it much more eloquently than I.



I am new to SGP. I have wondered how best to set up calibration frames by themselves, without light frames.

I am still not sure how to make a calibration set without a sky target.
Perhaps I missed this in the documentation, or it is just not fully detailed?



Not as big a change as you might think. When compared to the current system, the target positioning has been factored out into a separate (position) event. The conversion between the current system and the proposal is quite straightforward: the target positioning information becomes a position event followed by the exposure events in the target. If you understand your current sequence, you will also understand it after it is converted.
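The conversion described here could look something like the following sketch. The dictionary keys and the `convert_target` name are invented for illustration; they do not correspond to SGP's actual file format.

```python
def convert_target(target):
    """Map one legacy Target (a dict with hypothetical keys) to the
    proposed flat event list: one position event followed by the
    target's exposure events, wrapped in a group if the target
    carried constraints."""
    events = []
    # A target with no Ra/Dec (e.g. darks/bias) produces no position event.
    if target.get("ra") is not None:
        events.append({"type": "position",
                       "name": target["name"],
                       "ra": target["ra"],
                       "dec": target["dec"],
                       "center": target.get("center", False)})
    for ev in target["events"]:
        events.append({"type": "exposure", **ev})
    # Start/stop times or altitude limits become a single group constraint.
    if target.get("constraints"):
        return [{"type": "group",
                 "name": target["name"],
                 "constraints": target["constraints"],
                 "children": events}]
    return events
```

A loader could apply this per target when an old sequence file is opened, preserving the original ordering.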

Ok, if it is such a direct conversion, why change it? Because it allows you to specify the position information independently of the exposure information and, when combined with group events, allows greater flexibility than is now possible.

The original proposal lists a number of scenarios that are possible with the new system that are not doable with the current method. There have been a number of threads on this forum that have asked for sequences that this proposal will allow that are not currently supported.


Max, I don’t know about your exact situation, but I’ll tell you how I set it up and how it has been working well for me for over a year.

I have a calibration sequence (calibration.sgp) that has the following targets:

  • Bias and Dark
  • FSQ Flat-Dark
  • FS-128 Flat-Dark
  • AT10RC Flat-Dark

All of these can use the Luminance filter, since the shutter is closed. The only equipment connected for this sequence is the camera and filter wheel. I run these with the mount parked and powered down, the dome closed, and on nights (or days, if it’s cold enough) when I can’t image, once every 4 months or so to build my libraries. I actually have multiple copies of this sequence set for different temperatures and different subdirectories, just to reduce the screw-up factor.

For my light sequences, I simply add a target called “Calibration” that has the Flats for whatever telescope I am using for that sequence. Since I run sequences that span months (or even years, starting next week!), I clear completion on the Calibration target and change the subdirectory (001, 002, 003, etc.) when I have to move the camera and, thus, need new flats.

For example, my “Spring-Summer-2016_FSQ” currently has the following targets:

  • Calibration (flats for the FSQ)
  • Cygnus-Loop-1 (LRGB for mosaic panel 1)
  • Cygnus-Loop-2 (LRGB for mosaic panel 2)
  • Cygnus-Loop-3 (LRGB for mosaic panel 3)
  • Cygnus-Loop-4 (LRGB for mosaic panel 4)
  • Cygnus-Loop-NB-1 (Ha/Oiii for mosaic panel 1)
  • Cygnus-Loop-NB-2 (Ha/Oiii for mosaic panel 2)
  • Cygnus-Loop-NB-3 (Ha/Oiii for mosaic panel 3)
  • Cygnus-Loop-NB-4 (Ha/Oiii for mosaic panel 4)
  • Markarian-Chain (LRGB for galaxy chain)
  • M81-M82 (LRGBHa for galaxies)

All of these targets are rotated at 120 degrees, so they can all share the same flats. I split the narrowband in the mosaics as a convenience: I shoot them on moon nights, and it’s easier to enable/disable the targets rather than events within a WB/NB target.

Anyway, this is how I have been managing calibration and image targets for over a year, and it seems to work well for me. I hope you find an idea or two worth trying.

  • Shane


Thank you for the detailed write-up. It’s always interesting to get others’ takes on how things are approached. The targeting system in SGP is not perfect (find me one that is!), but I think it’s pretty good and flexible as is. The only thing that is not straightforward (but possible) is “Rotate through targets doing a few exposures for each”.

I think your proposal gives us a lot of “how” but not really a lot of “why”, so I don’t really understand what deficiency it’s trying to address.

Here is how I would construct these in our current scheme:

  1. A simple target

Single target (no positioning I’m guessing?)

  2. One target and rotate through the filters

Single target with “rotate through events”

  3. A constrained target

Single target w/positioning info

  4. A constrained target with rotating filters

Single target with “rotate through events”

  5. Expose luminance followed by rotation of RGB filters

2 targets:
Target 1: Lum with positioning info:
Target 2: RGB with “Rotate through events” w/o positioning info…unless you want to recenter.

  6. Rotate through targets doing a few exposures for each

X Targets (you define how many “rotations” by adding copies of each target)
Target 1 - First object w/centering info
Target 2 - Second object w/centering info
Target 1-1 - Copy of Target 1 (centering info copied)
Target 2-1 - Copy of Target 2 (centering info copied)
… and so on.

  7. Expose a target then do flats

Target 1 - Object
Target 2 - Flats target (likely created with the Flats Wizard from the “Tools” menu; set your park position to be pointed at your flat panel and set “Park” as a pre-event). As a bonus, set “rotate camera to” in this event to have your rotator match up.

  8. Expose some dark and bias frames – Exposure events for darks followed by exposure events for bias. Notice there is no position event.

Target 1 - Your darks and bias with no center/slew data.

You can certainly take Dark/Bias now with only your camera attached.

Flats are somewhat linked to lights, at least where the rotation angle is concerned. You can do this in different ways, but the most straightforward is to use the “Flats Wizard”. However, I did just notice that if “rotate to” is checked in the parent, it doesn’t move to the flats target, so that should be addressed.

As for darks and bias, I generally build a library once a year, or thereabouts, and don’t bother with them again unless I take exposure lengths that fall outside of my library. That’s usually a 24-hour process with the camera sitting in a closet clicking away.



Without going through each of these again, I think these two examples show the tradeoffs of the two methods.

In the first case, suppose you also had a constraint? You would have to make sure the constraint was the same for both targets, and the two targets would not necessarily be coordinated. Using the proposal, there would be one group with a constraint containing a single position event followed by the exposure events. The proposed method more directly states the order of events and specifies the constraint once.

The second case is a clear example of why the target/event model is awkward. You have to repeat targets while hoping not to change something between copies. Again, if there were a constraint, it would also have to be repeated. The proposed system, on the other hand, directly indicates what is happening and can be done with a single constraint.

Let me suggest another example. Suppose you were doing a mosaic with a constraint. Again, you would have to restate the constraint for each target as opposed to giving one constraint for all the tiles. Or suppose I wanted to rotate through the tiles doing, say, 5 exposures at a time? Often, users want such rotations to ensure they get something for each of the tiles rather than completing one tile and having nothing for another.

The proposal is not a huge departure from the current system. Rather, it factors out common parameters into logical events, so when you construct complex sequences you can state directly what is desired. With the Target/Event model you can sometimes achieve the same thing by replicating targets, but it is not as clear what is matched with what.

The second case arose from a request by a user and is what got me thinking about the current event model. It is very awkward to do with the current system. What I did initially was build a spreadsheet of all the parameters versus what they apply to. After looking at it for a while, I realized there were three basic operations, which was the genesis of the three event types. I went through several generations of the proposal trying to think of all the situations that users care about.

The Target/Event model mashes the position event and the group event into the Target. But it works much better if they are separate events disconnected from the exposure events. The group event is particularly useful because you can specify a single constraint for many events and groups can be nested in other groups.

I understand that it is not trivial to make a change like this. Much of the internal logic you have for coordinating things like meridian flips, dithering, etc. would apply to the proposal much as it does now. The three proposed events are stand-alone and relatively independent. I don’t have to create a target to do bias or darks. I don’t have to create targets with no position data to get rotating exposures. I don’t have to duplicate targets and constraints to achieve sophisticated sequences. The three event types better reflect the physical world you are trying to map.

Lastly, the UI for the proposed system would be clearer. With the Target/Event model you have a two-level UI where the targets are listed and, for the selected target, the exposure events are shown. If I want to see any target parameters, I have to click an icon to bring up another dialog. With the three-event model, all events could be shown in a single list, with events contained in a group event indented to indicate the relationship. All events would be shown together in context - no going back and forth between target and event data.