Auto-Focus in bad seeing conditions

Hi folks, I've been off of here for a long while after a 2,500-mile move to a much more brightly lit part of the country (the Seattle area).

Question(s)

  • Can I expect the autofocus routine to work in less-than-stellar (pun intended) seeing conditions? The best I can get shooting out of my garage is an HFR of 3.67.

  • Is there any point in fiddling with FWHM instead, and would it help? (Side question: what is the real difference between the two, and what is the benefit of one over the other? See the sketch after this list.)
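
For the side question, a quick definitional answer: HFR (the half-flux radius; SGP's HFD is just its diameter, i.e., twice the HFR) is the radius enclosing half of a star's total flux, so it stays meaningful even on badly defocused, doughnut-shaped stars. FWHM is the width of the star's profile at half its peak, which assumes a peaked, roughly Gaussian star and degrades quickly out of focus. Here is a minimal Python sketch of both measurements on a centered star cutout; this is an illustration only, not SGP's internal code:

```python
import numpy as np

def half_flux_radius(cutout):
    """HFR: radius of the circle, centered on the star, that contains
    half of the total background-subtracted flux. Stays well defined
    even for bloated or doughnut-shaped (defocused) stars."""
    star = cutout - np.median(cutout)            # crude background removal
    cy, cx = (np.array(star.shape) - 1) / 2.0    # assumes star is centered
    y, x = np.indices(star.shape)
    r = np.hypot(y - cy, x - cx).ravel()
    order = np.argsort(r)
    cumflux = np.cumsum(star.ravel()[order])
    return r[order][np.argmax(cumflux >= cumflux[-1] / 2.0)]

def fwhm_from_moments(cutout):
    """FWHM via the flux-weighted second moment, assuming a roughly
    Gaussian profile (FWHM = 2*sqrt(2*ln 2)*sigma). This assumption
    breaks down badly once a star defocuses into a doughnut."""
    star = np.clip(cutout - np.median(cutout), 0, None)
    cy, cx = (np.array(star.shape) - 1) / 2.0
    y, x = np.indices(star.shape)
    r2 = (y - cy) ** 2 + (x - cx) ** 2
    sigma = np.sqrt((star * r2).sum() / (2.0 * star.sum()))
    return 2.0 * np.sqrt(2.0 * np.log(2.0)) * sigma

# Synthetic Gaussian star (sigma = 2.5 px) to compare the two measures:
yy, xx = np.mgrid[0:31, 0:31]
star = np.exp(-((yy - 15.0) ** 2 + (xx - 15.0) ** 2) / (2 * 2.5 ** 2))
print(half_flux_radius(star), fwhm_from_moments(star))  # ~2.9 and ~5.9
```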

Info:
WO Star 71
Canon 600D (full spectrum)

Thanks!

Andrew

I did a search through the forum archives, and this post is the closest I found to what I think my problem is, so here I go:

I’ve gotten AutoFocus to work once or twice with my gear, so I know it’s possible, but it’s not reliable. My theory is SGP is “chasing the seeing” and losing. My gear: permanently mounted MI-250, CFF 14" f/15 classical Cassegrain (5250mm focal length, yes, very long), QHY 268M camera, Optec ThirdLynx focuser. Currently, autofocus exposures are set up in SGP to be binned 2x2 and I’m using the HFD algorithm. I’ve tried 7, 9, and 13 data points.

What I observe is this: when I’m lucky, I’ll get three or four data points that trend smoothly downward toward optimal focus; then the HFD value will jump (I’ve seen it go from 3.xx to 10.xx between 2 s exposures) and I’ll get an “outlier” that messes up my saddle. If I’m unlucky, instead of even the beginning of a nice saddle, I’ll get a saw-tooth shape with no pattern whatsoever.

I’ve had some success performing the AutoFocus routine manually. I’ll take three or four exposures and make a mental average of the HFD value. Then I’ll move my focuser a predetermined number of steps, take three or four more exposures, average them, determine whether I moved in the right direction, and continue the process. I looked in the AutoFocus settings to see if there was a way to ask AutoFocus to take multiple exposures at each focuser position (to “average out” the seeing), but didn’t see a way to do it.
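
That manual procedure is essentially a hill descent on the averaged HFD. Here is a toy simulation of the same logic in Python, purely for illustration; the “hardware” functions are fakes standing in for a real capture-and-measure loop, and none of this is SGP’s API:

```python
import random

# --- Simulated hardware; stand-ins for illustration, not a real API ------
TRUE_FOCUS = 12000      # simulated best focuser position (steps)
position = 10000        # simulated current focuser position (steps)

def expose_and_measure_hfd():
    """Fake exposure: HFD grows linearly away from best focus,
    plus random 'seeing' jitter on every reading."""
    ideal = 2.0 + abs(position - TRUE_FOCUS) / 400.0
    return ideal + random.gauss(0.0, 0.8)

def move_focuser_relative(steps):
    global position
    position += steps

# --- The manual routine described above -----------------------------------
def averaged_hfd(n=4):
    """Average several readings at one position to smooth out seeing."""
    return sum(expose_and_measure_hfd() for _ in range(n)) / n

def manual_focus(step=350, max_iter=20, n=4):
    """Step in one direction while the averaged HFD improves; when it
    gets worse, back up, reverse, and halve the step to converge."""
    direction, best = 1, averaged_hfd(n)
    for _ in range(max_iter):
        move_focuser_relative(direction * step)
        current = averaged_hfd(n)
        if current >= best:
            move_focuser_relative(-direction * step)   # undo the bad move
            direction, step = -direction, step // 2
            if step == 0:
                break
        else:
            best = current
    return position, best

print(manual_focus())   # should land near TRUE_FOCUS despite the jitter
```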

If that’s not a feature of SGP, I’d like to request it.

If your camera has the ability to bin 4x4, I’d use that: it gives you the smallest stars and the widest scale. Also try setting your exposure up to 5 or 6 seconds; with a really long focal length, the longer exposure time helps average out the star wiggle from the atmosphere. If you are using a narrowband filter, a bit more exposure time doesn’t hurt. I use 7 data points with a 350-step size on my Optec focuser on my Meade LX600, which puts my HFR at about 5 at the outer edge. I also make sure that whenever 4x4 binning is selected for focusing, the camera is set to high gain.
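
For context on “widest scale”: image scale in arcseconds per pixel is 206.265 × pixel size (µm) ÷ focal length (mm), and binning multiplies the effective pixel size. A quick worked example, assuming the QHY268M’s 3.76 µm pixels on the 5250 mm scope described above:

```python
# Image scale (arcsec/px) = 206.265 * pixel_size_um * binning / focal_mm.
# Bigger effective pixels spread the seeing-blurred star over fewer
# pixels, which makes the HFR measurement less sensitive to the seeing.
def image_scale(pixel_um, focal_mm, binning=1):
    return 206.265 * pixel_um * binning / focal_mm

for b in (1, 2, 4):
    print(f"bin {b}x{b}: {image_scale(3.76, 5250, b):.3f} arcsec/px")
# bin 1x1: 0.148 | bin 2x2: 0.295 | bin 4x4: 0.591
```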

John

AutoFocus at long focal lengths (or in star-poor regions) is a known weakness for SGPro. In addition to accounting for variations in seeing, SGPro also needs a better understanding of when the law of averages (with outlier rejection) will actually be detrimental to the overall result (i.e., whole-image HFR). I am not 100% sure what will make this better, but ideas are welcome. In the past, I have considered automatically switching to monitoring the median star (not one on the edge), but I have no data to show whether this method would be better.
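
For what it’s worth, the median idea is cheap to illustrate: given the per-star measurements from one frame, the median is insensitive to a few bloated or edge stars, where a mean is not. A toy example with made-up numbers:

```python
import statistics

# Per-star HFR measurements from one frame; one edge star is bloated.
star_hfrs = [2.9, 3.1, 3.0, 3.2, 9.8]
print(statistics.mean(star_hfrs))    # 4.4 -- dragged up by the outlier
print(statistics.median(star_hfrs))  # 3.1 -- outlier has no effect
```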

AF7JQ, thank you for your suggestion. I already bin 2x2 for autofocusing, but haven’t tried 3x3. My camera won’t bin 4x4. I haven’t tried exposures quite that long for autofocus, but I’ve tried as long as 8s for guiding. That didn’t work.

Ken, when I asked a friend of mine about autofocus this weekend, he suggested using multiple exposures for each data point so the software could average the HFD or FWHM. I suppose my friend doesn’t use SGP for his autofocusing; as I said, I looked for a way to take multiple exposures per data point but didn’t see that feature. I wonder if that would work? Since seeing is often a short-period transient, three or four exposures might be enough to get a decent average. The algorithm could even reject outliers; that would be easy with my imaging train, since I’ve seen my HFD jump a bunch, as I said. If the algorithm got one step more sophisticated, it could remember the averages from one autofocus run to the next (within the same “autofocus event”) and refine the criteria for a “good star” versus a “seeing-compromised star image”.
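
To make the request concrete, the per-point logic could be as simple as a sigma-clipped mean over the exposures taken at each focuser position. A sketch in plain Python (hypothetical, not SGP code):

```python
import statistics

def clipped_mean(hfd_readings, n_sigma=2.0):
    """Average several HFD readings taken at one focuser position,
    discarding any reading more than n_sigma standard deviations
    from the median before averaging. A short seeing spike is
    rejected rather than averaged in."""
    med = statistics.median(hfd_readings)
    sd = statistics.stdev(hfd_readings)
    kept = [v for v in hfd_readings if abs(v - med) <= n_sigma * sd]
    return sum(kept) / len(kept) if kept else med

# Five exposures at one position; one ruined by a seeing spike:
print(clipped_mean([3.1, 3.4, 3.2, 3.3, 10.2]))   # 3.25, spike rejected
```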

I’d be happy to beta test anything you come up with. My observatory is remote, so beta results would not be immediate. 🙂

When you see a massive change in HFR, can you see what the whole-image HFR is picking up when it does so? Nebulous objects that are more present in some frames than others? Just more stars in some than others?

Being remote always makes things more difficult…
Something else to consider with big scopes is the focus change caused by temperature. If the temperature is quite a bit different from the last imaging session, an autofocus routine might not work because the new focus point is outside the curve. On my scope, once you get beyond an HFR of about 6 or 7, the star doughnuts get large enough that SGP starts seeing bright points within the doughnut as stars and tries to focus on them. My fix for that is to run Frame and Focus and use the coarse-adjust button in the focus module until the HFR gets down to something more reasonable (2 or 3); then the autofocus routine works fine.
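
For the temperature part specifically, a common workaround (standard practice, not a claim about SGP features) is to measure your rig’s steps-per-degree drift and apply that offset before the first autofocus run, so the routine starts inside the curve. A sketch with a made-up coefficient; measure your own by logging focus position against temperature over a few nights:

```python
def starting_position(last_focus_pos, last_temp_c, current_temp_c,
                      steps_per_degree=-35):
    """Estimate tonight's starting focuser position from the last good
    focus position and the temperature change since then. The
    steps_per_degree value here is invented for illustration."""
    delta_t = current_temp_c - last_temp_c
    return round(last_focus_pos + steps_per_degree * delta_t)

# Last session: best focus at 12400 steps at 14 C. Tonight it is 8 C:
print(starting_position(12400, 14.0, 8.0))   # 12400 + (-35)*(-6) = 12610
```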

John

Reducing your step size yields no better results? It might keep you from pushing out to an HFR of 6 or 7, but at the same time not produce enough of a delta to give any kind of meaningful result.

Good Morning Ken,
My scope is an LX600 12" f/8 (f/5 with the focal reducer). Because I don’t have mirror flop to deal with, my focuser is mounted on the scope’s main focus knob rather than in the optical path, so I am focusing by moving the mirror; my results may differ from an f/10 scope focusing in the optical path. My normal settings are a 350-step size with 7 data points. If I use 9 data points, the right side of the hyperbola gets flattened out because the star doughnuts get large enough that SGP no longer sees them as stars; usually, anything beyond an HFR of 5 is unreliable. If I reduce the step size to 300, thinking it might be more accurate, the bottom of the hyperbola flattens out. This makes the optimum focus point more nebulous: repeated focus runs return different numbers. The smaller step size also means the HFR jump between data points is smaller, so atmospherics can push a data point outside the curve. Overall, your focus routine gives me repeatable results every time once I have accounted for changes in temperature from one session to the next.
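
To illustrate why a flattened bottom makes the optimum “nebulous”: the focus curve is roughly a hyperbola, and the fit locates best focus at its vertex. When the step size shrinks, the HFR change between points shrinks relative to the seeing noise, so the fitted vertex wanders from run to run. A toy fit with made-up numbers (this is not SGP’s internal fitting code):

```python
import numpy as np
from scipy.optimize import curve_fit

def hyperbola(pos, best_pos, min_hfr, slope):
    """HFR vs. focuser position: roughly hyperbolic, with its
    minimum (min_hfr) at best focus (best_pos)."""
    return np.sqrt(min_hfr ** 2 + (slope * (pos - best_pos)) ** 2)

# Made-up 7-point run with 350-step spacing and a little seeing noise:
pos = np.array([10950, 11300, 11650, 12000, 12350, 12700, 13050], float)
hfr = np.array([5.1, 4.0, 2.9, 2.1, 2.8, 3.9, 5.2])

params, _ = curve_fit(hyperbola, pos, hfr, p0=[12000, 2.0, 0.004])
print(f"best focus ~{params[0]:.0f} steps, minimum HFR ~{params[1]:.2f}")
```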

John

Ya, I figured you had experimented with this, but thought I would ask anyhow (if only for my own understanding).