Two feature questions


#1

I have one feature request and one request for a progress update:

Progress update: What is the progress on the multi-camera feature? My summer imaging season is almost here so I was wondering if it might be available to test soon.

Feature request: I have been measuring offset data for my filters on a new camera for the past couple of nights. To get good hard numbers I do five passes for each (non-luminance) filter. Basically, I focus with luminance, then immediately focus with the filter in question. I enter the two points in a spreadsheet that computes the difference. This is done 5 times for each filter, and the average difference is calculated to yield the offset. This is very tedious and should not be at all hard to automate. That is my request.
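The requested automation is just bookkeeping: average the (filter minus luminance) focus difference over the paired runs. A minimal sketch of that calculation, where the function name and the focuser positions are illustrative examples, not output from any real autofocus routine:

```python
# Sketch of the requested offset automation: average the difference
# between paired luminance and filter focus runs.
# All positions below are hypothetical example data in focuser steps.

def filter_offset(lum_positions, filter_positions):
    """Average (filter - luminance) focus difference over paired runs."""
    assert len(lum_positions) == len(filter_positions)
    diffs = [f - l for l, f in zip(lum_positions, filter_positions)]
    return sum(diffs) / len(diffs)

# Five paired runs: luminance focus point, then the filter's focus point.
lum = [20500, 20510, 20495, 20505, 20500]
red = [20615, 20618, 20638, 20607, 20631]

print(filter_offset(lum, red))  # 119.8 (average offset in steps)
```

This is exactly the spreadsheet workflow described above, minus the typing.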


#2

Regarding filter offsets, I might suggest that you are over-thinking it a
little. You should be taking offsets when the temperature is very stable.
If the temp is stable then you don’t need to take 5 iterations. I have
never done more than one and it has worked well. I might also suggest that
you record the LUM immediately after the other filter’s focus run, the
reason being that it usually takes much less time to do a LUM focus run
than, say, a narrowband filter. So if the temp does change slightly while the
narrowband filter focuses, when you run the LUM focus it will be a closer
match to the narrowband filter focus point.

You may have a good reason for doing 5 iterations that I am not aware of,
but I think it’s overkill.


#3

Yes - statistical variation. You simply will not get the same results every time, even with a perfectly stable temp.

Apparently you misunderstood my method as well. I take Lum first, then the filter. This is done RIGHT AWAY after the Lum. No way would you do Lum and then each filter in turn; way too much time for things (mostly temp) to change…

  1. Lum
  2. Filter #1
  3. Lum
  4. Filter #1
  5. Lum
  6. Filter #1
  7. Lum
  8. Filter #1
  9. Lum
  10. Filter #1

Then another 10 runs using Filter #2, etc.

IMHO doing the offset calculation just once is asking for trouble, and my numbers clearly show this to be the case. Here are offsets for the blue filter taken as shown above, one right after the other with a stable temp:

115,108,143,102,131

In fact, that run was not too bad, and I redid a couple that had larger outliers. If you do just one run and happen to hit one of those outliers, you are going to use a bad offset. Multiple runs are simply the only way to deal with this.
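For concreteness, here is the spread in those five blue-filter measurements, computed with Python's standard `statistics` module:

```python
import statistics

# The five blue-filter offsets quoted above, in focuser steps.
offsets = [115, 108, 143, 102, 131]

print(statistics.mean(offsets))             # 119.8
print(round(statistics.stdev(offsets), 1))  # 16.9 (sample std. deviation)
```

A sample standard deviation of roughly 17 steps on a mean of about 120 is exactly the run-to-run variation being argued about: any single measurement could easily land 20+ steps from the true offset.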

This is not a cheap or badly set up system, BTW. It is a Starlight on a TOA-130 (also FSQ) with the HSM motor.


#4

I get why you’re doing it, and if you are getting outliers like you illustrate with the blue filter, then yes, averaging several runs is helpful. That just hasn’t been my experience at all. I don’t get those kinds of outliers, so there is no statistical reason for me to do more than one run. On rare occasions when I am watching the focus run I’ll get a funky data point. In that case I’ll run it again to be sure, but even then the focus point comes in the same or within a few steps, well within the CFZ.

Also, I wasn’t suggesting using the LUM filter and then all the other filters in a row. In fact I was suggesting exactly what you are doing just with each filter one time instead of 5 times.

I didn’t mean to hijack a discussion…this is a feature request and Ken and Jared will weigh in on the possibility of implementing it.


#5

It depends on what you mean by multi-camera support. I think we are still a ways away from true support, where targets from the same sequence can effectively farm out events to other cameras, but we are pretty close to just providing simple dithering coordination.


#6

Hmm, I’m not sure we mean the same thing by multi-camera support, so here is an example of what I mean:

You are imaging, say, M51. You have two scopes of similar (not necessarily identical) focal length, each with its own camera and filter wheel, on one mount. You want to do color exposures with one scope/camera at 10 minute exposures, and luminance on the other scope/camera at 5 minute exposures (maybe a bit less would be better). You want to do focus as required on each, as well as dithering (let’s assume the same-scale dither move is OK for both scopes/cameras, since focal lengths and pixel sizes are not horribly disparate). Only one camera is doing the guiding/dithering.

Basically, one would want to be sure that dithering is not happening while the non-guiding camera is exposing. The system would wait until any ongoing exposure is finished before dithering, and not start the next until the dither is complete. This would mean, to me, that the system with the longer exposures would probably be the guiding system, and that one would just “fit in” the shorter exposures as well as possible, recognizing that one could not get a full 2X productivity.

Is that what you mean by “simple dithering coordination”?


#7

Somewhat. There are lots of ways to tweak this to make it better or worse, but our initial thinking is that systems will enter a “needs dither” state. Once a system enters the “needs dither” state, it will wait until a dither has happened. All systems must report “needs dither” before the dither actually happens, and a system in “needs dither” would not start any new images.

Example:

  • 3 minute exposures on one system
  • 5 minute exposures on another system

Process:

  1. 3 minute exposure finishes first. Goes into “Needs Dither State”
  2. 5 minute exposure finishes. Goes into “Needs Dither State”
  3. All clients are in “Needs Dither State”
  4. Dither happens and notifies clients dither has happened
  5. Clients start images.

Example 2

  • 3 minute images from one machine, dither every 3 images
  • 5 minute images from another machine, dither every image

Process:

  1. 3 minute image finishes
  2. 5 minute image finishes and goes into “Needs Dither”
  3. 3 minute image finishes
  4. 3 minute image finishes and goes into “Needs Dither”
  5. All clients are in “Needs Dither”
  6. Dither happens and clients proceed.

You can see how this can get “non-optimal” very fast when using “dither every X images”. Essentially it will be a balancing act, and initially it will be up to you to balance that ball.
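The handshake described above is essentially a barrier: a dither fires only once every client has reported “needs dither”, and a client that is holding cannot start a new frame. A minimal sketch of that bookkeeping (class and method names are my assumptions for illustration, not SGP internals):

```python
# Sketch of the "needs dither" coordination described above.
# A dither fires only when every client has reported "needs dither";
# a holding client may not start a new frame, while clients not yet due
# for a dither keep exposing (as in Example 2 above).
# All names are illustrative, not actual SGP internals.

class DitherCoordinator:
    def __init__(self, client_ids):
        self.clients = set(client_ids)
        self.waiting = set()   # clients currently in "needs dither"

    def needs_dither(self, client_id):
        """Client finished an exposure that requires a dither."""
        self.waiting.add(client_id)
        if self.waiting == self.clients:
            self.waiting.clear()   # dither happens, everyone released
            return "dither"
        return "wait"              # hold until all clients report in

    def may_expose(self, client_id):
        """A client may start a new frame only if it isn't holding."""
        return client_id not in self.waiting

coord = DitherCoordinator({"cam_3min", "cam_5min"})
print(coord.needs_dither("cam_3min"))  # wait (other camera still going)
print(coord.may_expose("cam_3min"))    # False: blocked until the dither
print(coord.needs_dither("cam_5min"))  # dither: all clients ready
```

Example 1 falls out directly: whichever exposure finishes first waits, and the dither happens when the last client reports in. The “dither every X images” imbalance in Example 2 shows up here as one client sitting in the waiting set while the other keeps exposing.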

Hope that helps,
Jared


#8

Hi Jared,

happy new year to you!

I am just curious whether you have any update on the “dither coordination” functionality. Will it be implemented, and when will it be available?

Thx
Ralf


#9

No update at this time.

Thanks,
Jared


#10

The outline you have presented here, Jared, looks perfect from my perspective. Can’t wait for it to be available. Keep up the great work.


www.mainsequencesoftware.com