Half flux, automatic star recognition with source code


I came across this web site when trying to understand more about HFR.
It has a very in-depth discussion of star detection and measurement, including half flux and FWHM.

It is an extensive series on the subject of star recognition and measurement, with public C++ source code and sample images, by a fellow amateur imager.

His code appears to measure half-flux diameters on out-of-focus stars from a full frame rather nicely.
I believe the scope used has a central obstruction (a reflector).

This web site is definitely worth a look by our developers.
There could be an answer to the problem of obstructed-scope autofocus published here.
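For readers curious what the half-flux metric under discussion actually computes, here is a minimal sketch. This is not the article's C++ code; it is a hypothetical Python version of the common flux-weighted-radius approximation of HFR, assuming a background-subtracted star cutout and a known centroid:

```python
import numpy as np

def half_flux_radius(star_img, cx, cy):
    """Flux-weighted mean distance of pixels from the centroid (cx, cy),
    a common approximation of the half-flux radius (HFR).
    star_img: 2D array with the background already subtracted."""
    ys, xs = np.indices(star_img.shape)
    r = np.sqrt((xs - cx) ** 2 + (ys - cy) ** 2)
    flux = np.clip(star_img, 0, None)  # ignore negative noise pixels
    total = flux.sum()
    return float((flux * r).sum() / total) if total > 0 else 0.0
```

The appeal of this metric for autofocus is that, unlike FWHM fitting, it needs no model of the star profile, so it degrades gracefully on defocused (even donut-shaped) stars.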


Sorry, I now see the images he posts are in focus.

Nevertheless, someone should review the published source code to see if it helps with the current autofocus limitations on scopes with central obstructions (COs).



I thought this would be the hot topic of the day.


Sorry. Didn’t mean to disappoint you. These are known articles.



I thought there could be some goodies in the source code. I am not a programmer, so this is out of my league. It is so rare to see someone post the code like that.


Ya… Definitely some good knowledge in the articles. Our problem has less to do with math and more to do with how to “fake” reasonable results over the whole screen. We don’t have the luxury of waiting 5-10 sec per frame, so we are still experimenting with stuff.


I would hope that there is some way to improve measuring out-of-focus star images.

Btw, my FLI 16803 at 1x1 binning takes about 15-20 seconds to download a frame in high-speed mode.
My STL 11000 takes even longer. So a typical focusing run is very slow already.
I am more concerned with accurate measurements than speed.


Put me in that camp. Auto focus works great for me with Hyperstar when the stars are small and plentiful. But with the camera at the back of the scope, it is useless. I would happily give up an extra half a minute or more of imaging time for each focus run to get reliable focus automatically. It sure beats bundling up, riding up to the observatory (half a mile away), slewing to a bright star and using the Bahtinov mask before recentering on the target.

Maybe a user-selectable “High Precision Focus” option that if selected would invoke the more cycle-intensive routine, and default to the current method if not selected.



Better to take a little longer and get good results than fail. Probably quicker overall.


For me, focusing an SCT, the main cause of a bad focus curve with SGP is the occasional inclusion of extremely small features that aren’t stars and are unrealistically small. I have entered a feature request to measure the HFD in arc-seconds and reject stars smaller than a given minimum, in arc-seconds.
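A sketch of how such a filter might work, assuming the standard plate-scale relation (206.265 × pixel size in µm / focal length in mm, in arc-seconds per pixel). The function names and the minimum-HFD parameter are hypothetical illustrations, not SGP's actual API:

```python
def plate_scale_arcsec(pixel_um, focal_mm, binning=1):
    """Image scale in arc-seconds per (binned) pixel:
    206.265 * pixel size in microns / focal length in mm."""
    return 206.265 * pixel_um * binning / focal_mm

def reject_small_stars(hfd_pixels, pixel_um, focal_mm, min_hfd_arcsec, binning=1):
    """Keep only stars whose HFD, converted to arc-seconds,
    meets a user-selected minimum."""
    scale = plate_scale_arcsec(pixel_um, focal_mm, binning)
    return [h for h in hfd_pixels if h * scale >= min_hfd_arcsec]
```

Expressing the limit in arc-seconds rather than pixels makes it survive binning changes, since the binned plate scale is accounted for in the conversion.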

The second issue is that HFD is not a consistent value for different stars in an image. I prefer to use FWHM for focus curves, but I don’t have Pinpoint. Many people consider HFD to be more robust, but I don’t find that to be true. The inconsistency of HFD affects the focus curve whenever the set of stars used for the calculation changes: different stars have different HFD values in a given image, so changing which stars are used will cause an abrupt jump in the overall HFD.

If I take a focus curve with the SCT and stay close to focus, avoiding donuts, then focus works very well - with smart focus disabled - but only as long as the above two things don’t happen: a tiny feature being counted as a star and pulling the curve down, or a sudden change in the stars used for the calculation, which can pull it up or down.

For me a focus run takes 2-3 minutes approximately due to 2x backlash unwinding and 2s exposures - 9 of them. But the time has little impact when focusing about every hour. Sharp focus is a priority.



Ya… I think that is accepted somewhere. I think I may have put it into 2.5.X? Trying to think through a UI where deriving this limit is not confusing. While the current “sample size” slider is less useful than this, it is also way easier to understand.

I’m afraid purchasing Pinpoint will be your only way at this… we have no intent of adding an option for (SGPro provided) FWHM metric right now (and if we did it would be a long way out).


I got green lines for the first time last night. The trick was to move the star slider down to about 30%.

It eliminated small stars that were throwing the calculations way off. It works pretty well now with both FWHM and HFR methods. The slopes on each side were slightly different, but the curve was crossing at the low point.
By the way, I have never gone out far enough to see doughnuts, even before. So my problems were not due to the CO.


I agree that a method which tosses out the smaller stars is a good idea. Could this be based on the pixel size and focal length?
It could be automated then.


I think everyone should have focal length and pixel size set properly for a session - and if they don’t, then the minimum HFD could default to one pixel or something - in other words, no change from current behavior.

If people change binning, it is important that these limits be set in arc-seconds.

An alternative approach to removing bad outliers would be to use a median/sigma rejection heuristic. I think in most cases the bad big and small stars would be recognized as outliers in a histogram so they could be removed automatically. This would avoid having the user set limits and doing things in arc-seconds - but I would like arc-seconds anyway.

It might need setting a rejection criterion for the n-sigma rejection limit - but a fixed default might also work.
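One way to sketch the median/sigma rejection heuristic described above, using the median and the MAD (median absolute deviation) as a robust scale estimate. The 2-sigma default here is an assumption for illustration, not a tested value:

```python
import statistics

def sigma_clip_hfds(hfds, n_sigma=2.0):
    """Reject HFD values more than n_sigma robust standard deviations
    from the median. MAD * 1.4826 approximates sigma for normally
    distributed data, so a few bad big/small stars barely move it."""
    if len(hfds) < 3:
        return list(hfds)  # too few samples to estimate spread
    med = statistics.median(hfds)
    mad = statistics.median(abs(h - med) for h in hfds)
    sigma = 1.4826 * mad or 1e-9  # guard against a zero MAD
    return [h for h in hfds if abs(h - med) <= n_sigma * sigma]
```

Because both the median and the MAD ignore extreme values, a hot pixel counted as a tiny star or one bloated outlier gets dropped without the user having to set any limits.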




Agreed - if the pixel scale is not available, use statistics to reject outliers. However, as soon as the user successfully solves an image, the pixel scale is accurately known; there is no need for the user to enter anything if a blind solve is performed. Most if not all SGP users rely on plate solvers, and the information provided by solving an image could be used for more than just centering the mount.




Yes - I think there are ways to do this - but I realize the UI implications aren’t trivial, as Ken alluded, and I don’t know what scope is involved. There is a chance it could cause failures due to a setting being wrong or something. But for some of us the autofocus is very close to working well with sct except for some issues - so I’m just providing possible ways around them.

I see talk of fairly involved things like defining autofocus regions for an object - and my feeling is that if the core behavior were changed to be more robust then it could make features like that no longer needed and keep things simpler.



Not saying you’re wrong, just noting that there may be issues where equipment-based ROI focus would be useful… for instance, at least a few folks have noted that they have not-quite-flat fields and wish to average star metrics from one area, but not another. I am guessing both areas would meet the HFR rejection criteria as outlined above.


+1. I too find that SGP seems to focus on a lot of tiny stars which tend to shift a great deal from frame to frame. Perhaps having a low cutoff would improve frame-to-frame star selection.


So based on what you’re seeing, is your focus significantly worse than if you used a Bahtinov mask or another autofocus program?

I think you guys are hunting a non issue.


Actually, I’m having a really hard time getting good autofocus on my short SV60EDS refractor. I’ve often puzzled over why, when I watch autofocus, the stars picked for measurement vary so much between frames; it seems to mostly be the really tiny stars that blink on and off from one frame to the next. That’s why I was wondering, as others have, whether a low star-size cutoff might help. I haven’t tried a Bahtinov mask, though.