18.104.22.168 was broken and Ken provided a Dropbox link to 22.214.171.124 before going to the show. I tested this last night with various settings, creating 12 AF packs. The scope is a collimated 10" RC f/8, and I tried both 2x2 and 1x1 binning with a range of exposures from 8 to 15 seconds through a luminance filter.
The focus results were inconsistent - I didn’t think the seeing conditions were too bad, but successive AF runs produced quite different results and curve shapes. I typically do not get a symmetrical V curve with the RCT; the inside-focus side is always shallower, but in some cases I got a horizontal line and a slope. I definitely achieved more consistent results when I increased the minimum pixel size to 3 or 4 and extended the exposure. I also noticed that the selection of stars changed on each exposure and wondered whether that might contribute to the apparent randomness of the measurements.
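To make the asymmetry concrete: the usual way to handle an asymmetric V curve is to fit the two branches separately and take the intersection of the lines as best focus, rather than assuming a symmetric vertex. A minimal sketch, with made-up focuser positions and HFD values (not from my actual AF packs):

```python
# Sketch: estimating best focus from an asymmetric V curve by fitting
# separate lines to the inside- and outside-focus branches and
# intersecting them. The positions/HFD values are illustrative only.
import numpy as np

def v_curve_focus(positions, hfd, split=None):
    """Fit a line to each branch of a V curve; return their intersection."""
    positions = np.asarray(positions, float)
    hfd = np.asarray(hfd, float)
    if split is None:
        split = int(np.argmin(hfd))          # rough vertex: smallest HFD
    # Inside-focus (left) and outside-focus (right) branches
    ml, bl = np.polyfit(positions[:split + 1], hfd[:split + 1], 1)
    mr, br = np.polyfit(positions[split:], hfd[split:], 1)
    best = (br - bl) / (ml - mr)             # x where the two lines cross
    return best, ml, mr

pos = [1000, 1100, 1200, 1300, 1400, 1500, 1600]
hfd = [9.0, 7.2, 5.4, 3.6, 5.8, 9.2, 12.6]   # shallower inside branch
best, slope_in, slope_out = v_curve_focus(pos, hfd)
print(f"best focus ~ {best:.0f}, slopes {slope_in:.3f}/{slope_out:.3f}")
```

With unequal slopes the intersection can sit away from the lowest measured point, which is exactly why a shallow inside branch matters.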
Well - I took my AF packs and the JPG image of the graph and ran the AF images through PixInsight’s SubframeSelector and CCDInspector. I compared their FWHM graphs with SGP’s HFD. They were pretty close; in fact, in some cases SGP produced a nicer ‘curve’. I chose a single star from each frame and measured its FWHM in PI - it echoed the global FWHM value for the entire image. This suggests the global measurement is robust and that it might be common-mode seeing after all, or maybe my focuser is slipping, etc. This is an RCT and, if it is sufficiently defocused, it produces donuts. I tried that on one occasion and it seemed to confuse the measurements.
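For anyone comparing the two metrics: HFD is simply the diameter that encloses half of a star’s background-subtracted flux. A rough sketch of the measurement on a synthetic Gaussian star - the function and test data are my own illustration, not SGP’s or PI’s code:

```python
# Sketch of a half-flux diameter (HFD) measurement: the diameter that
# encloses half of a star's background-subtracted flux. The 2-D Gaussian
# "star" below is synthetic test data, not a real frame.
import numpy as np

def hfd(cutout):
    """Half-flux diameter (pixels) of a background-subtracted star cutout."""
    ys, xs = np.indices(cutout.shape)
    total = cutout.sum()
    cy = (ys * cutout).sum() / total          # flux-weighted centroid
    cx = (xs * cutout).sum() / total
    r = np.hypot(ys - cy, xs - cx).ravel()
    flux = cutout.ravel()
    order = np.argsort(r)                     # walk outward from the centroid
    cum = np.cumsum(flux[order])
    half_r = r[order][np.searchsorted(cum, total / 2)]
    return 2.0 * half_r

# Synthetic star: 2-D Gaussian, sigma = 2 px (FWHM ~ 4.7 px)
y, x = np.indices((31, 31))
star = np.exp(-((x - 15) ** 2 + (y - 15) ** 2) / (2 * 2.0 ** 2))
print(f"HFD ~ {hfd(star):.1f} px")
```

For a roughly Gaussian star the HFD comes out close to the FWHM, which would explain why the PI and SGP graphs track each other.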
What it does show, however, is that the SGP algorithms can only work on what they are given. There is nothing to suggest they are skewing the HFD evaluations, and the quality of the AF frames is paramount. If seeing conditions are a problem (they affect Bahtinov mask grabber systems too), I can understand why the FocusMax and @Focus2 algorithms take repeated subframe exposures to remove the randomizing effect. Clearly SGP has from the start assumed a multiple-star measurement within a large subframe. It would be interesting to see whether it is just as reliable with a smaller subframe using these new improved algorithms. If so, would averaged, fast, small subframes be one way to go?
I have also been considering the autofocus exposure times. A longer exposure simply integrates the prevailing seeing conditions into a large, bloated star (creating a flat-bottomed curve); it gives the impression of a stable set of readings but obscures the true optimum value. Averaging the HFR readings from several short exposures should produce a more accurate result: each star may be moving about a bit, but it does not integrate into a diffuse blob, so the individual HFR readings are smaller. (In the case of guiding, only the centroid is of value, so longer exposures are less sensitive to seeing conditions.)
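Some toy numbers for that argument, assuming a Gaussian star profile and seeing wander that adds to the star width in quadrature over a long exposure - the sigma values are illustrative assumptions, not measurements:

```python
# Toy model: seeing jitter inflates a long exposure's star width in
# quadrature, while averaging the HFR of several short exposures keeps
# the per-frame width. Gaussian profile and sigmas are assumptions.
import math

def hfr_from_sigma(sigma):
    """Half-flux radius (pixels) of a Gaussian profile of width sigma."""
    return sigma * math.sqrt(2 * math.log(2))

sigma_star = 2.0     # intrinsic star width (px), illustrative
sigma_seeing = 1.5   # RMS centroid wander over a long exposure (px)

long_exp = hfr_from_sigma(math.hypot(sigma_star, sigma_seeing))
short_avg = hfr_from_sigma(sigma_star)   # each short frame freezes the wander

print(f"long-exposure HFR       ~ {long_exp:.2f} px")
print(f"avg short-exposure HFR  ~ {short_avg:.2f} px")
```

The averaged short exposures recover the smaller intrinsic HFR, while the long exposure reports the seeing-inflated value - the bloat that flattens the bottom of the V curve.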