Focusing results with

#1 was broken, and Ken provided a Dropbox link before going to the show. I tested this last night with various settings, creating 12 AF packs. The scope is a collimated 10" RC at f/8, and I tried both 2x2 and 1x1 binning with a range of exposures from 8 to 15 seconds through a luminance filter.

The focus results were inconsistent. I didn't think the seeing conditions were too bad, but successive AF runs produced quite different results and curve shapes. I typically do not get a symmetrical V-curve with the RCT; the inside-focus side is always shallower, and in some cases I had a horizontal line and a slope. I definitely achieved more consistent results when I increased the minimum pixel size to 3 or 4 and extended the exposure. I noticed that the selection of stars changed on each exposure and wondered if that might contribute to the apparent randomness of the measurements.

Well - I took my AF packs and the jpg image of the graph and ran the AF images through PixInsight SubframeSelector and CCDInspector. I compared the FWHM graphs with SGP's HFD. They were pretty close; in fact, in some cases SGP produced a nicer 'curve'. I chose a single star from each frame and measured its FWHM in PI - it echoed the global FWHM value for the entire image. This suggests the global measurement is robust, and it might be common-mode seeing after all, or maybe my focuser is slipping. This is an RCT, and if it is sufficiently defocused, it produces donuts. I tried that on one occasion and it seemed to confuse the measurements.

What it does show, however, is that the SGP algorithms can only work on what they are given. There is nothing to suggest they are skewing the HFD evaluations, and the quality of the AF frames is paramount. If seeing conditions are a problem (they affect Bahtinov mask grabber systems too), I can understand why the FocusMax and @Focus2 algorithms take repeated subframe exposures to remove the randomizing effect. Clearly SGP has from the start assumed a multiple-star measurement within an extensive subframe. It would be interesting to see if it is just as reliable with a smaller subframe using these new improved algorithms. If so, would averaged, fast, small subframes be one way to go?

I have also been considering the autofocus exposure times. Longer exposures simply integrate the prevailing seeing conditions, producing a large bloated star (and a flat-bottomed curve); they give the impression of a stable set of readings but obscure the true optimum value. Averaging the HFR readings from several short exposures should produce a more accurate result: each star may move about a little, but it will not integrate into a diffuse blob in any single frame, so the individual HFR readings will be smaller. (In the case of guiding, only the centroid matters, so longer exposures are less sensitive to seeing conditions.)
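To illustrate the reasoning above, here is a minimal sketch (not SGP code; the "true" HFR, seeing jitter model, and function names are all made-up assumptions) of why averaging several short exposures should beat one long one:

```python
# Hypothetical sketch: one long exposure vs. averaging several short ones.
# Seeing is modelled as a random positive jitter added to the true HFR.
import random

random.seed(42)

TRUE_HFR = 2.0       # assumed "true" half-flux radius at this focus point
SEEING_JITTER = 0.6  # assumed maximum seeing contribution per short exposure

def short_exposure_hfr():
    # Each short exposure freezes a different seeing realisation.
    return TRUE_HFR + random.uniform(0.0, SEEING_JITTER)

def long_exposure_hfr():
    # A long exposure integrates the star's motion, so the measured size
    # tends toward the envelope of the excursions, not their average.
    # Modelled here as a pessimistic bound, purely for illustration.
    return TRUE_HFR + SEEING_JITTER

# Averaging short readings suppresses the random part while each
# individual reading stays small, so the mean sits nearer the true value.
samples = [short_exposure_hfr() for _ in range(10)]
averaged = sum(samples) / len(samples)

print(f"long exposure reading  : {long_exposure_hfr():.2f}")
print(f"averaged short readings: {averaged:.2f}")
```

The averaged figure always lands between the true HFR and the long-exposure bound, which is the sense in which the short-exposure average is the less biased estimate.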


I have just seen has been posted but have not yet had the opportunity to test it. I did, however, experiment further and had considerably better success.
I normally focus with Lum, using 2x2 binning and 8 second exposures. Last night I was getting repeatable V curves using Red, 2x2 binning and 3 second exposures. These changes were not the only enabler - I saw a forum note that the minimum pixel size slider was interactive.
With the slider on 4, I noticed a single 'star' during an AF run (HFR about 0.7) whose value never changed; on closer inspection, it was a pair of warm pixels. I started again, stopped the AF routine after the first exposure, and fiddled with the min pixel slider.
With 2x2 binning, I had previously used a min pixel setting of 2, 3 or 4, thinking these would be more than sufficient. At all those settings, I noticed several instances of false star detection on the screen - up close, the program was classifying nearby warm pixels as a star (even though the pixels were not touching). Increasing the slider to 5 made these disappear, and the average HFR increased as a result. With this setting I had just 3-5 star matches over the entire screen, but these provided consistent V-curves with steeper wings. Since the new HFR measurement seems more robust to donuts (when using the correct min pixel setting), I also expanded the AF range by altering the step size (in my case 11 steps of 25 x 4 microns). This extended the slopes of the V-curve for the linear fit algorithm to latch on to accurately.
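The min pixel filter described above can be sketched as a size cut on detected bright blobs. This is only an illustration of the general technique, not SGP's actual detection code (which evidently groups close-but-not-touching warm pixels differently, hence the higher setting needed in practice); the grid, threshold, and function names are assumptions:

```python
# Hypothetical sketch of a minimum-pixel-size star filter: candidate
# "stars" are 4-connected components of above-threshold pixels, and any
# component smaller than min_pixels is rejected as a warm-pixel artifact.
from collections import deque

def find_components(image, threshold):
    """Return 4-connected components of pixels brighter than threshold."""
    h, w = len(image), len(image[0])
    seen = [[False] * w for _ in range(h)]
    comps = []
    for y in range(h):
        for x in range(w):
            if image[y][x] > threshold and not seen[y][x]:
                comp, queue = [], deque([(y, x)])
                seen[y][x] = True
                while queue:
                    cy, cx = queue.popleft()
                    comp.append((cy, cx))
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and image[ny][nx] > threshold and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                comps.append(comp)
    return comps

def detect_stars(image, threshold, min_pixels):
    """Keep only components at least min_pixels in size."""
    return [c for c in find_components(image, threshold) if len(c) >= min_pixels]

# Toy frame: one real 5-pixel star blob plus two isolated warm pixels.
frame = [
    [0, 0, 0, 0, 0, 0],
    [0, 9, 9, 0, 0, 8],   # warm pixel at (1, 5)
    [0, 9, 9, 9, 0, 0],
    [0, 0, 0, 0, 0, 0],
    [0, 8, 0, 0, 0, 0],   # warm pixel at (4, 1)
]
print(len(detect_stars(frame, 5, 1)))  # 3: the blob plus both warm pixels
print(len(detect_stars(frame, 5, 5)))  # 1: only the real star survives
```

Raising the size cut trades detection count for reliability, which matches the observation above: fewer stars matched, but the V-curves became consistent.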

FWIW, these are my observations so far:

  • Seeing conditions, especially in smaller FOVs, are typically common mode, so there is only a small gain from sampling large quantities of stars. (I have seen this repeatedly when collimating my RC during star testing.)
  • Take an out of focus AF frame and find the min pixel setting that reliably detects stars in the frame.
  • In poor seeing conditions, doing AF through a red filter can give a small advantage.
  • Find a happy medium on the AF range. Set it so that you do not restrict solely to the bottom of the v-curve but pick up several points on the linear portions either side of focus. For those with obstructed telescopes, this was tricky before as the old algorithms were confused by small donuts.
  • Very short exposures (under ~3 seconds) are prone to seeing, which may distort star shapes and interfere with reliable detection. Very long exposures (over ~10 seconds) integrate the seeing, which increases the minimum HFR readings and flattens the bottom of the V-curve.
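The point about picking up the linear portions either side of focus can be sketched with a simple two-line fit: fit a straight line to each wing of the V-curve and take the intersection as best focus. This is a generic linear-fit illustration, not SGP's actual algorithm; all positions and HFR values below are synthetic:

```python
# Hypothetical sketch: locate best focus from the V-curve wings by
# least-squares fitting each side and intersecting the two lines.

def linfit(points):
    """Least-squares slope and intercept for a list of (x, y) pairs."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    m = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - m * sx) / n
    return m, b

# Synthetic V-curve with assumed true focus at position 5000 and
# asymmetric wings (inside focus shallower, as with an RCT).
left  = [(4800, 5.0), (4850, 4.0), (4900, 3.0), (4950, 2.0)]   # slope -0.02
right = [(5050, 4.0), (5100, 7.0), (5150, 10.0)]               # slope +0.06

m1, b1 = linfit(left)
m2, b2 = linfit(right)
best = (b2 - b1) / (m1 - m2)  # x where the two fitted lines cross
print(round(best))            # 5000, the assumed true focus
```

With only points near the flat bottom, both fitted slopes are shallow and noisy and the intersection wanders; extending the AF range out onto the steep wings is what makes the intersection well-defined.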

One thing that did come to mind - I still have quite a few hot pixels on my QSI683 AF frames, and I wondered if the calibration needs improving. Are the dark frames scaled? PHD2 offers dark frame or hot pixel calibration - is hot pixel mapping something worth looking at?
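For reference, hot pixel mapping of the kind PHD2 offers boils down to flagging outliers in a dark frame and patching them in the light frames. A minimal sketch, with made-up frames, a made-up sigma threshold, and hypothetical function names (not PHD2's implementation):

```python
# Hypothetical sketch of hot-pixel mapping: flag pixels far above the
# dark-frame median, then replace them in light frames with the median
# of their non-hot neighbours.
import statistics

def build_hot_pixel_map(dark, k=3.0):
    """Pixels more than k population-sigmas above the dark median are 'hot'."""
    flat = [v for row in dark for v in row]
    med = statistics.median(flat)
    sigma = statistics.pstdev(flat) or 1.0
    return {(y, x)
            for y, row in enumerate(dark)
            for x, v in enumerate(row)
            if v > med + k * sigma}

def repair(light, hot):
    """Replace each hot pixel with the median of its in-bounds, non-hot neighbours."""
    h, w = len(light), len(light[0])
    out = [row[:] for row in light]
    for y, x in hot:
        nb = [light[ny][nx]
              for ny in (y - 1, y, y + 1) for nx in (x - 1, x, x + 1)
              if (ny, nx) != (y, x) and 0 <= ny < h and 0 <= nx < w
              and (ny, nx) not in hot]
        out[y][x] = statistics.median(nb)
    return out

dark = [[10, 10, 10],
        [10, 900, 10],
        [10, 10, 10]]
light = [[50, 52, 51],
         [49, 950, 50],
         [51, 50, 52]]

hot = build_hot_pixel_map(dark)
print(sorted(hot))               # [(1, 1)]
print(repair(light, hot)[1][1])  # 50.5 - the hot pixel patched to its neighbourhood
```

A map like this would stop the warm-pixel pairs from ever reaching the star detector, which is arguably cleaner than raising the min pixel slider to work around them.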



The bottom of the V should not be that important; the green lines should fall near the center of the U.
It is nice to have a true V with the low point at the center of the V, but that has mostly to do with seeing.


Very good information, Buzz. I am going to give it a try. I am still seeking good results - the PinPoint-based focus is OK, but after getting great results one night, I am back to inconsistent results.