Auto Focus Beta Consolidation Thread

I think Ken mentioned in another thread that the maximum star size is a function of the minimum star size - the maximum being 10 times the minimum. I think this can be problematic and goes to your observations. With the minimum star size set low enough to get the small, tight stars near focus counted, you end up bounding the maximum star size too tightly, and the routine does not pick up enough stars when well out-of-focus.

I get the impression that the 10x scaling of maximum to minimum is rather arbitrarily selected for the purposes of beta testing. I wonder if a better final approach would be adding the ability to select both the maximum and minimum star size. It seems that would give the best of both worlds - allowing the small stars near focus to be selected while still not rejecting the bigger donuts when out of focus. I understand the reluctance to add yet another user-settable parameter, but it would simply be trading the previous two parameters (stars selected and nebulosity rejection) for two new ones (minimum and maximum star size).

Another option would be to up the scaling factor a bit. Maybe a factor of 15x or 20x instead of 10x, but that would not necessarily be a one-size-fits-all solution any more than the current one is. I think I prefer the idea of allowing the user to tailor the max and min star size to their individual rigs.

Tim

The new AF routine has made major improvements in its ability to detect donuts in central-obstruction scopes, as many of the comments in this thread confirm. It is not quite perfect yet, but I think it is very close to being a world-class AF routine.

I suggest the following worthy goals for a final, world-class version:
(1) Give accurate and very consistent results over a very wide, out of focus range, well into donut land on obstructed scopes and very large out of focus stars for refractors.
(2) Accomplish (1) without requiring any user customization other than determining a reasonable step size for the focuser.
(3) Accomplish (1) for practically all image targets, or at least the vast majority of targets.
(4) Accomplish (1) for practically all combinations of hardware.

I firmly believe that all 4 of these goals can be readily achieved with a few more tweaks to the AF code.
Since the older version of the AF routine did wonderfully well satisfying all 4 of these for refractors (with the exception of the out of focus part of (1)), these are clearly within our grasp (or at least Ken's).

Here are my thoughts on how to do this:

Start with the focus curve produced by version 2.5.1.6 that I gave detailed info about in the prior posts, 14 frames from 18000 out to 19040, step size 80, on my 714 mm FL refractor.
All my comments in this thread referring to the refractor are binned 1x1 because they all use my Canon 6D DSLR.
Focus position 18000 is the perfect focus position as established with several runs just prior to this long final run that all the following plots refer to.

The prior runs that confirmed 18000 as best focus were run with step size = 50, 9 total steps, so the max focuser position was 18200, and all gave excellent curves, with HFR close to 1.8 and end points around 4. Excellent results and very comparable to what the older AF routine produced for me on virtually every target I have imaged over the past 8 months (around 60) with no customization of the routine other than step size.

It is important to note that going out to 19040 is enormously out of focus. I had no intention of trying to go that far out of focus, since the initial phase of the run only goes out to 18320, not that far away from my usual range out to 18200. However the routine forced a second phase out to 19040.

It is truly a miracle that the current AF routine actually handled this entire range fairly well, using the tuned MinStarSize of 12. A great credit to Ken for improving the star detection routine to the point where it reliably finds stars so out of focus. The old routine would have croaked big time.

Focus Plot for Version 2.5.1.6 (tuning the Minimum Star Size to 12 gave the best results)

I then picked 3 specific stars to look at the detailed star images for those 3 over the entire 14 frame range.
Stars A and B are among the brighter stars in the full image and are in a small group at the very top of the full image. This star group AB is less than 1% of the entire image. In the best focus frame it contains 20 easily identified sharp focus stars. In the worst focus frame it contains 4 easily identified stars.
Star C is the brightest star in the frame, dead center, magnitude 6, which I was going to use a mask to focus on.

Star group AB at 18000 (best focus position). A and B are the 2 brightest stars toward the bottom.

Star group AB at 18400 (WAY out of focus).

Star group AB at 19040 (ENORMOUSLY out of focus).

Focus plot for just Star A (plot for Star B is essentially identical, so not shown)
Note this plot for Star A (and Star B) is the best of the three plots. Significantly better than Star C and the Ver 2.5.1.6 plot (Min Star Size 12) near best focus, where both show a best focus higher than the true best focus at 18000.

Focus plot for just Star C:
Note this plot for the very brightest star is the worst of the 3 plots, but still fairly good.

Here is the data for the above plots:

Based on the above, I suggest the following tweaks to the AF routine will accomplish the above goals (1), (2), (3) and (4).

A) Concentrate on only using statistics from the fairly bright stars, but not really bright ones, and not the dimmer ones. As demonstrated above the 2 fairly bright stars all by themselves each give excellent focus results. The very bright stars (Star C) and the current version of the routine give worse results, particularly near good focus. Not using dimmer stars is very important if the routine is going to work at the out of focus positions, simply because the more out of focus, the dimmer the dim stars become so that they cannot even be detected. Only the brighter stars can be detected when out of focus.

B) Specifically, find the brightest 10 to 50 stars, eliminate the 1 to 5 very brightest, and use the remainder for HFR stats in all frames. Make note of their positions in the first frame for comparison with following frames. If frame #2 has fewer of these specific stars than frame #1, then frame #2 is probably further out of focus than frame #1. If frame #3 also has fewer than #1, then you are clearly moving farther out of focus. Moving closer to focus will always include all the initial stars selected from frame #1, since they will become brighter and easier to identify the closer we are to good focus. Use a fixed minimum star size of 4 to avoid finding hot pixel stars if the region is so star poor that at least 3 or 4 reasonably bright stars are not detected.

This scheme allows the routine to use the exact same stars in every frame for the most accurate and reliable HFR comparisons between frames. It also eliminates any need for the Minimum Star Size, other than to simply eliminate hot pixel stars with a fixed value of 4, since small stars are not being used in any way in calculating HFR. Larger, brighter stars are mandatory for calculating an HFR in the more out of focus frames.
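To make the idea concrete, here is a rough Python sketch of steps A) and B) above. This is not SGPro code; the star dictionaries, the keep/drop counts, and the 10-pixel match radius are all my illustrative assumptions.

```python
MIN_STAR_SIZE = 4  # fixed floor, used only to reject hot-pixel "stars"

def select_reference_stars(stars, keep=50, drop_brightest=5):
    """Pick the fairly-bright reference set from the first AF frame.

    `stars` is a list of dicts like {"x":..., "y":..., "flux":..., "size":..., "hfr":...}.
    """
    # Reject hot pixels with the fixed minimum size.
    candidates = [s for s in stars if s["size"] >= MIN_STAR_SIZE]
    # Sort brightest first.
    candidates.sort(key=lambda s: s["flux"], reverse=True)
    # Drop the very brightest (often saturated), keep the next `keep`.
    return candidates[drop_brightest:drop_brightest + keep]

def mean_hfr(reference, frame_stars, match_radius=10):
    """Mean HFR over the reference stars re-detected in a later frame.

    Returns (mean_hfr_or_None, number_of_reference_stars_found); a falling
    count between frames suggests we are moving further out of focus.
    """
    hfrs = []
    for ref in reference:
        for s in frame_stars:
            if (abs(s["x"] - ref["x"]) <= match_radius
                    and abs(s["y"] - ref["y"]) <= match_radius):
                hfrs.append(s["hfr"])
                break
    return (sum(hfrs) / len(hfrs)) if hfrs else None, len(hfrs)
```

The count returned by `mean_hfr` is what would drive the "fewer stars than frame #1 means further out of focus" comparison.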

Yes, the size is arbitrary. Right now I am inclined to increase it, but I hate adding settings unless we can show a real need for them. For now, since it seems folks are possibly seeing issues with this, I have increased the max star size… it is now logged if you want to see it, but still a function of min star size.

@jmacon

Thank you for the effort you have put into your responses, they clearly took a lot of time and effort.

I did read your post, but I'm not entirely sure why this matters with healthy sample sets. The law of averages will overwhelm bright data. If the star count is low, but not too low, we might be able to move the needle on the mean HFR in this manner, and we can think about that. If the star count is too low, removal of stars could just as easily destroy the metric.

I understand your point here, but this is out of scope for 2.5.1. We really have to concentrate on what we are doing or we will never get anything done. Incremental delivery… This increment is better detection with essentially the same AF engine making decisions.

The other issue with this… as years of maintaining AF algorithms have taught us, it is nearly impossible to maintain any star correlation from frame to frame. Very dense star fields very often select the wrong star due to subtle shifts between frames, and very long FL frames always seem to encounter a fairly significant movement of stars (large enough that stars are not where they used to be) due to mirror shift. It is a mess to sort out and really requires pattern recognition to do properly. Anyhow, when we get to the AF logic improvements, I think this can just as easily be achieved by the number of stars found, regardless of which specific stars they are…

Ken, your points are all well taken, and I do agree with them, mostly.
Using the same set of stars is probably the ideal, but as you point out, hard to implement in practice without essentially a complete frame registration process.

I think the real crux of what would allow removal of the requirement for a custom Minimum Star Size is to totally remove this parameter from the star recognition logic, except in the very limited case of rejecting hot pixels. The end results will not be very sensitive to a modest degree of variation between frames in the actual set of stars used in the stats. My recommendation as presented is clearly looking for a perfect solution, which is not necessary.

However the major issue at the moment is that the set of stars used between frames varies dramatically from one end of the frame group to the other. And this is true no matter what MinStarSize is chosen. Any given set will always have some optimum number. I am afraid this will vary significantly from target to target, which is a real disaster.

Do these 2 things and it will not need any MinStarSize set by user, and will work for most targets:

  1. remove MinStarSize parameter totally from the star recognition logic, except in the very limited case of rejecting hot pixels
  2. select only the brightest stars which allows it to work well in very out of focus regions.

Seems to me these are well within the current structure, being totally related to star selection logic.
My example proves that you get excellent results with just a single fairly bright star which can be detected in every frame. An even larger group should work even better, so long as the majority show up in all the frames.

While I would love to move forward with an optionless AF system, there are other factors to consider here:

  • Seems like this promotes an idea of an option to use an option… might be confusing.
  • More importantly, we have always tried (not always succeeded) to make SGPro accessible for field machines that only cost a few hundred dollars. Raising the min star size significantly reduces the amount of work a machine needs to do in order to produce a result. Producing the maximum level of results and filtering through them for the best data is ideal, but not always realistic.

We are really using two sorting metrics at this point. We filter / sort by intensity then by size. Then the sorted list contributes to mean HFR (from the top down). The key here, it seems, is not really sorting, but identifying when to terminate any further contributions from the sorted list. Filters around median values or standard deviation work fine if the data set is large enough, but fail pretty dramatically as it decreases.
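If I follow the description above, the two-metric sort plus a top-down cutoff might look something like this sketch. The field names, the cutoff fraction, and the minimum count are my assumptions, not the actual SGPro implementation:

```python
# Sort detections by intensity, then by size, and take the mean HFR from the
# top of the sorted list down to a termination point. The hard part Ken
# identifies is choosing where to terminate; here it is a crude fraction with
# a floor, standing in for the median/std-dev filters he mentions.

def mean_hfr_top_down(stars, take_fraction=0.5, min_count=5):
    # Primary sort key: intensity (bright first); secondary: size (large first).
    ranked = sorted(stars,
                    key=lambda s: (s["intensity"], s["size"]),
                    reverse=True)
    # Terminate contributions from the sorted list at this point.
    n = max(min_count, int(len(ranked) * take_fraction))
    chosen = ranked[:n]
    if not chosen:
        return None
    return sum(s["hfr"] for s in chosen) / len(chosen)
```

As Ken notes, any statistics-based cutoff like this degrades as the detection list shrinks, which is why small sample sets are the failure mode to watch.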

I will need to do some thinking on this stuff and see what comes out of it…

I'm not convinced that this is a major issue, nor a disaster. I don't think it matters if a different set of stars is selected for each frame, just as long as the composite HFR values form a well-correlated asymptote. My experience is that I am getting remarkably well correlated "wings" regardless of the star set selected, as long as enough stars are selected. Based on what I have been reading here from other users, this seems to be their experience as well. I agree that selecting more stars at the extreme ends of the focus curve is a good idea, and Ken is addressing that by increasing the multiplicative factor for maximum star size. I look forward to that change.

Having said that, it appears that the new routine is approaching the "ain't broke" state and doesn't appear to require fixing. At least for me - and my application represents the far end of the challenge, with fairly long FL and large CO - it is working quite well even in very sparse star fields. I encourage the developers to not let perfect be the enemy of good.

Tim

Having multiple user-specified settings tends to increase the support workload, because people need help choosing the correct settings, so having a way to avoid this would be good.

One thought is that the minimum theoretical star size can be determined from the focal ratio of the scope so knowing that would allow the minimum star size to be calculated, at least as an approximation.

Here are a few values

 F No.  Airy Disc size (um)
  2.0   2.5
  4.0   5.0
  6.0   7.5
  8.0  10.0
 10.0  12.4

The size in pixels can be found by dividing by the pixel size.
These may be much lower than can be achieved in practice.
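For what it's worth, the table above follows the standard Airy disc diameter formula D = 2.44 x wavelength x F#; a wavelength of about 0.51 um reproduces the listed values (that wavelength choice is my assumption to match the table). A small sketch of the pixel conversion:

```python
# Airy disc diameter from focal ratio, and its size in pixels.
# The 0.51 um default wavelength is an assumption chosen to match the
# table of values in the post above.

def airy_disc_um(f_number, wavelength_um=0.51):
    """Airy disc diameter in microns: D = 2.44 * lambda * F#."""
    return 2.44 * wavelength_um * f_number

def airy_disc_pixels(f_number, pixel_size_um, wavelength_um=0.51):
    """Airy disc diameter expressed in pixels of the given size."""
    return airy_disc_um(f_number, wavelength_um) / pixel_size_um
```

For example, at F/8 with 5 um pixels the disc is about 2 pixels across, which as Chris says may be well below what any real site achieves.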

I don't see a reason why choosing the same stars would be important; after all, stars are all the same as far as focusing is concerned. I can see reasons for rejecting ones that are dim and ones that are saturated, in both cases because the size algorithm will be affected by the lack of data.

Chris

Ken, could the following focuser position behavior be causing a problem for the AF routine? Or is this the way the backlash is supposed to work?
[4/23/2016 10:14:00 PM] [DEBUG] [Camera Thread] Moving focuser to next position (18400)…
[4/23/2016 10:14:00 PM] [DEBUG] [Focuser Move Thread] Focuser moving to 18400
[4/23/2016 10:14:00 PM] [DEBUG] [Focuser Move Thread] Focuser backlash active, modified move to 18300

I have backlash set to OUTWARD with 100 steps. I would expect it to end up at the requested new position, 18400.

I don't know what kind of interface would work best with users, but here are some ideas:

The fundamental limits on min star size are the Airy disc size and I guess 2 pixels at the current pixel size of the image (not the unbinned size). Separately there is the seeing disk size - which is perhaps 2 arc-sec. Finally there is the smallest size star a person expects to get with given equipment at a given site - and they could enter that as perhaps 4" or whatever.

If you look at all the values above - the largest one will be the one that limits how small a star can be.

If you are imaging with a 50mm DSLR lens, the limit will be set by the 2-pixel size - and it could be huge in arc-sec.

So if you force people to enter the pixel size and focal length accurately - which I strongly encourage - the routine can figure out a lot all by itself. If in addition you specify the user's best expected FWHM - 1.5", 2", 4" or whatever - that would tell even more, and it would be independent of the optics and binning.

Or you could just enter a seeing rating of 1 to 10 from terrible to perfect. You could use a heuristic to map that to fwhm.

Once the min star size is found, the upper limit can be 10x that or something.

This would be a way for the user to enter things they know that should be entered anyway: the pixel size and focal length - and also let them enter something about the seeing conditions. Together that would tell a lot of what is needed to know how big the stars are expected to be in a focus run.
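As a sketch of that limit logic (parameter names, the 0.51 um wavelength, and the 2" default seeing are my illustrative assumptions), the minimum star size would be the largest of the three limits Frank lists:

```python
# Minimum usable star size as the largest of:
#   (a) the Airy disc, (b) ~2 pixels at the working image scale,
#   (c) the expected seeing / FWHM at the site.

def pixel_scale_arcsec(pixel_size_um, focal_length_mm):
    """Standard plate-scale formula: 206.265 * pixel size (um) / FL (mm)."""
    return 206.265 * pixel_size_um / focal_length_mm

def min_star_size_arcsec(f_number, pixel_size_um, focal_length_mm,
                         expected_fwhm_arcsec=2.0, wavelength_um=0.51):
    scale = pixel_scale_arcsec(pixel_size_um, focal_length_mm)
    # Airy disc diameter converted from pixels to arc-seconds.
    airy_arcsec = (2.44 * wavelength_um * f_number / pixel_size_um) * scale
    two_pixel_arcsec = 2.0 * scale
    # The largest limit is the one that actually constrains star size.
    return max(airy_arcsec, two_pixel_arcsec, expected_fwhm_arcsec)
```

With a 50 mm lens and 6.5 um pixels, for instance, the scale is about 26.8"/pixel, so the 2-pixel limit (roughly 54") dominates everything else, just as described above.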

The exact numbers and range of sizes shouldn't be critical - and it doesn't appear to be with the current code, which seems to measure HFR pretty consistently for different stars.

Frank

This 2.5.1.7 is the version that is working best for focusing for me. The HFD/stars are consistent. I am not sure how min star size works, but with a setting of 4 it seems to be working well. I am using bin 2x2 for focusing.

My prior posts were based on 2.5.1.6, and that code with min star size at either 2 or 4 forced a range extension past the initial 4 moves of 80 steps each, with the bad results I extensively documented above. I have since rerun the initial 4 moves of the run using 2.5.1.7 with min star size 2 and 4, and it gives a very nice right side of the V, i.e. it looks like it would have run perfectly with the newer code. And the max extension point of that run was significantly out of focus, so I think 2.5.1.7 may do very well. Anxious to get my rig put back together again so I can give it a thorough shake.
Well done Ken.

REVISION: Here is the plot of the entire 14 point sequence, step size 80, run with 2.5.1.8:

This uses the default Min Star Size of 4. AMAZING! For this right side of the V, the ratio is 9 to 1 (16.79 to 1.8), fully three times larger than what was recommended with the older AF routine. This tells me that a starting focus position that is very badly out of focus is going to zoom right in on good focus.
I think it also implies that the default star size of 4 might work great across the board.
Ken, you nailed it here.

Hello,
Variable AF results here with the last beta version - sometimes good, sometimes not very good. There is always the problem of multiple star detection, and of strange circles not centered on the star or clearly far from the star. I would just be surprised if AF gave very good results with such random star detection.
Here are two screenshots and the corresponding AFpack.
Thanks.
Pascal



https://we.tl/udkqhqehu0

Don't worry about the drawing artifacts… just the results. They work most of the time, but, right now it is not an indicator that anything is wrong (one oddity is that running your AF pack here, I do not see the circles offset like this… they appear to be where they should be).

Your comments are not too "stupid". My response to you has always been that we can't help much without data. To the best of my knowledge, this is the first time you have provided anything for us to take a look at…

Some observations:

  • 4 second exposures are probably not sufficient
  • You should use 9 data points
  • The primary issue: I am surprised by the amount of noise in your images. A 4 second exposure with a mean value of ~15,000 is not typical and is blurring the edges of your stars. The poor SNR of these images makes them really difficult to work with. The centroids are completely saturated and the edges are very close to the background.

Ken,

With the recent improvements in the autofocus routine making it quite robust, I was wondering if a future version might include some additional focus quality metrics, with the option to repeat the routine if the "quality" criteria are not met. I think you already check that the final HFR is close to the predicted value from the intersection of the regression lines.

In evaluating the routine, Iā€™m looking for:

  1. Repeating the AF run returns a focus position that is within a certain number of steps (CFZ) of the previous result under the same conditions (temperature/filter etc)
  2. The slopes of the two regression lines are similar (obviously different sign though)
  3. There are at least four points on each regression line

I get that #1 would take a lot of extra time; however, for unattended imaging it would be easy to waste a lot of time if the AF routine didn't select the best focus.
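A quick sketch of how checks #2 and #3 above might be coded, using plain least-squares fits on each side of the V. Function names and thresholds are illustrative, not SGPro's actual logic:

```python
def linear_fit(points):
    """Least-squares (slope, intercept, R^2) for [(position, hfr), ...]."""
    n = len(points)
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in points)
    syy = sum((y - my) ** 2 for y in ys)
    slope = sxy / sxx
    r2 = (sxy * sxy) / (sxx * syy) if sxx and syy else 0.0
    return slope, my - slope * mx, r2

def af_run_ok(left_pts, right_pts, min_points=4, slope_tol=0.3, min_r2=0.9):
    """Accept a V-curve run only if both branches pass the quality checks."""
    if len(left_pts) < min_points or len(right_pts) < min_points:
        return False                     # check #3: enough points per side
    ls, _, lr2 = linear_fit(left_pts)
    rs, _, rr2 = linear_fit(right_pts)
    if lr2 < min_r2 or rr2 < min_r2:
        return False                     # each branch must be close to linear
    # Check #2: opposite signs, similar magnitudes.
    return ls < 0 < rs and abs(abs(ls) - abs(rs)) <= slope_tol * abs(ls)
```

Check #1 (rerun repeatability within the CFZ) would sit outside this function, comparing the positions returned by two complete runs.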

Just a thought!

Cheers,

Peter

Hello Ken,
thank you for your answer. I think the noise comes from the sensor temperature; I had not activated the cooling because I had too little time, sorry.
4 sec exposures and 7 points worked very well for me with the old routine, but I can try to increase these values a bit.
The off-center circles are present all the time - not on all stars, but very often. A mystery…
Nearby stars being detected as one star seems more problematic in my opinion, especially when few stars are visible (tested on a cooled image, but during daytime).
The old AF routine never produced such multiple star detections, even when the sensor was cooled.
For the moment I have reverted to the old AF routine, but I will try the new beta (with TEC on this time) and let you know the result.
Thanks.
Pascal

I very much agree with the suggestions peter_4059 makes. They mesh well with ones I have made in the past for a minimum quality parameter, and a maximum allowed deviation from a known good position determined by temperature.

Probably the most notable deficiency in AF runs with the old AF routine was when it would decide to move out to the next higher range based on a very poor quality initial 3 or 4 points. If even one of those points was way off, it often decided that it was on the left side of the V and then moved further to the right, when in fact it was just a bad data point. This happened most often with my RC12, which, because of its 2000mm FL, tends to have frequent inconsistent runs. I think the new routine is going to be able to get back to good focus after moving out (I have not been able to test the RC12 yet), but even so it is a waste of time for it to have to do this. The old routine never got back to good focus if this happened, so I could only use it in attended mode.

The quality of each side of the V is already being calculated by the routine, since it is a product of the linear fit algorithm. I would suggest the final AF graph display the left- and right-side correlation factor for the run, and allow the user to set the minimum allowable value. This could be converted to a value between 0 and 10 for simplicity.

If the run quality was not acceptable, the routine could rerun x times, and after failing x times return to the starting position. Even better would be to have the routine return to a position determined by the temperature profile for this set of hardware. That would require one additional temperature-related settable parameter in addition to the one already supported, which is the "Temperature compensate" value. The new parameter would be the position that corresponds to 0 (or 50 or whatever) degrees, thus completely defining the temperature-dependent formula. This would have the added advantage of allowing the program to immediately move the focuser to a good focus position at the start of the imaging session, rather than starting at a bad position corresponding to the prior night's 20-degree-lower temperature.

An additional benefit of the new temperature profile would be as a second (or only) quality check on the run. The user can specify a maximum step deviation of any run from the temperature profile calculated focus position. Failing this test would also trigger a rerun.
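To illustrate, the two-parameter temperature model and the deviation check described above could look like this sketch. All names are illustrative; "steps per degree" stands in for the existing "Temperature compensate" value, and the reference position would be the proposed new setting:

```python
def predicted_focus(temp_c, steps_per_degree, ref_position, ref_temp_c=0.0):
    """Two parameters fully define the linear temperature model:
    the slope (steps/degree) and the position at a reference temperature."""
    return ref_position + steps_per_degree * (temp_c - ref_temp_c)

def needs_rerun(af_result_position, temp_c, steps_per_degree,
                ref_position, max_deviation_steps, ref_temp_c=0.0):
    """Trigger a rerun if the AF result deviates too far from the profile.

    A zero max deviation disables the check, per the 'optional, indicated by
    a zero value' convention suggested in the post above."""
    if max_deviation_steps == 0:
        return False
    expected = predicted_focus(temp_c, steps_per_degree,
                               ref_position, ref_temp_c)
    return abs(af_result_position - expected) > max_deviation_steps
```

The same `predicted_focus` value is what would let the session start at a sensible position instead of the previous night's focus.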

One more possibility: If a rerun is triggered by either of the above quality factors, the focus points could be averaged to produce an even better quality line fit, or better yet just the worst outliers on each run eliminated.

Both of these quality factors would be optional, indicated by a zero value for the parameter.

All good ideas. Start with making sure the data on the V curve is good.
It should have at least 4 points in a line on each side, and be roughly symmetrical.

It needs to screen outlying data. I have had HFR measures of zero recorded on the plot.
That data needs to be scrubbed out of the V.

Once you have V curve trained and recorded for a setup, you should not have to run the whole V curve again.
Most auto focus systems that I have seen work like Focusmax: a few measurements predict the focus minimum. This saves time too.

Temp/focus measurement is next-level stuff to make the system smarter, IMO. So I would think that is a phase II feature. I would love SGP to self-train a temp profile for a setup by logging temp along with each good focus run's position. This could be done over a few nights or even multiple seasons.

You would eventually be able to predict focus position quite accurately with many setups. Also, this would build a very robust temperature compensation model to use. I use temperature compensation now. I start my nights with a focus run, then temp compensation does the rest.

I have been recording temp and focus positions since Feb. I can predict my focus point by looking at the temperature and then checking my spreadsheet.
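The self-trained profile described here is essentially a least-squares line through the logged (temperature, position) pairs. A sketch (not an SGPro feature; names are illustrative):

```python
def fit_temp_profile(samples):
    """Fit focus position vs. temperature from logged data.

    samples: [(temp_c, focus_position), ...]
    Returns (steps_per_degree, position_at_zero_degrees).
    """
    n = len(samples)
    mt = sum(t for t, _ in samples) / n
    mp = sum(p for _, p in samples) / n
    num = sum((t - mt) * (p - mp) for t, p in samples)
    den = sum((t - mt) ** 2 for t, _ in samples)
    slope = num / den
    return slope, mp - slope * mt

def predict_position(temp_c, steps_per_degree, position_at_zero):
    """Predicted focuser position at the given temperature."""
    return steps_per_degree * temp_c + position_at_zero
```

With data logged over several nights (or seasons), this is the same lookup maxm does by hand with the spreadsheet, just automated.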

I think the odds of this are pretty good (beyond 2.5.1)…

We already check this. If they are not similar, we alter the way in which we choose the AF point.

This seems like it's a user-driven choice. Nine points will get this most of the time. Eleven points will increase the odds. The enemies of regression are big CFZ zones and small step size. Either of those can produce "flat bottoms" and remove points from the regression.

I get zig-zags where one or two rejected measurements would still have given a good V result.
This could be better addressed.

Also, I am suggesting a process similar to Focusmax.
If you have a good V curve model, you can measure a few points to find out where you are on the V, then go to the predicted focus.
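As a sketch of that Focusmax-style prediction (illustrative only, assuming a linear V branch with a trained slope in HFR per step):

```python
def predict_best_focus(position, hfr, branch_slope, best_hfr=0.0):
    """Extrapolate the V-curve vertex from one measured point.

    branch_slope is the trained slope of the branch the point sits on
    (negative on the left branch, positive on the right). Inverting
    hfr = best_hfr + branch_slope * (position - best_position) gives
    the predicted best-focus position.
    """
    return position - (hfr - best_hfr) / branch_slope
```

In practice one would measure two or three points to confirm which branch the focuser is on before trusting the extrapolation, which is the "few measurements" part of the idea.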

Ken,

Thanks for clarifying that. It sounds like a lot of the work is already in place. I believe you currently give the user a lot of control over when the autofocus runs (which is great) and it would be nice to have a bit more control over whether the outcome is acceptable. I suspect some users will prefer to get the focus routine completed quickly and move on whereas others would prefer to take more time and get a more reliable result.

Regarding your final point about the number of points on the regression line and flat-bottom v-curves: is there any literature that explains why the curve flattens out in some cases? I tend to get this with my system but don't really understand why, or what to do about it. Is the recommendation to increase the step size further? I guess this will result in v-curves with a greater range of HFR and fewer points in the flat bottom part of the curve; however, I think the downside is that the regression will be more heavily weighted by data points that are further away from the focus point. I guess this is OK if the slope of the regression line remains constant as you keep moving away from the focus point.

I also like maxm's idea of having a v-curve model to fall back on.

In any case you have made a huge amount of progress on autofocus for central obstruction users - thanks for all your effort on this.

Cheers,

Peter