SGPro AF with Donuts Update

We have been spending some time on AF lately (when we can… most of our time is actually spent on support). Circle detection, in and of itself, is not a complex process, but the quality of the circles, as you can imagine, has a dramatic effect on how reliably they can be found. Furthermore, donuts appear when an image is out of focus, and the farther from focus you are, the less pronounced the star circles become.

So… we are having some trouble with this. Lots of methods tried… none super great. Circular Hough Transform, generalized blob detection, custom methods. This sample is pretty typical of the results we have been seeing (hit or miss):

You can see that some stars are found and others are passed over completely. The general process (a rough sketch in code follows the list):

  • Convert 16bpp to 8bpp
  • Convert 8bpp grayscale image to a binary image
  • Run blob / hough transform on the binary image
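For reference, here is a minimal sketch of that pipeline in Python with OpenCV (not SGPro’s actual code; the threshold and Hough parameters are placeholders that would need tuning):

```python
import cv2
import numpy as np

def detect_donuts(frame16: np.ndarray):
    """Rough sketch of the 16bpp -> 8bpp -> binary -> Hough pipeline."""
    # Scale the 16-bit frame down to 8 bits.
    frame8 = cv2.convertScaleAbs(frame16, alpha=255.0 / 65535.0)

    # Global threshold to a binary image (this is the troublesome step).
    _, binary = cv2.threshold(frame8, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Circular Hough transform on the binary image.
    return cv2.HoughCircles(binary, cv2.HOUGH_GRADIENT, dp=2, minDist=20,
                            param1=50, param2=20, minRadius=3, maxRadius=40)
```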

Ultimately, it is the process of converting an image to binary that is troubling us. Dark / bias subtraction doesn’t really help much here (it helps a little). The binary images end up looking like this:

This obviously makes it very difficult to detect circles…

So… while we cannot accept code directly from folks, it is certainly permissible to offer suggestions or point to code published elsewhere. We spend a lot of time with hardware and automation, but image processing is not something we have a great deal of expertise with…

So… Any ideas on how to get a solid binary image where most of the noise junk is effectively stripped away, leaving just the star donuts? In terms of intensity, the noise artifacts are pretty close to that of the actual stars.

You might consider trying a simple median filter to pre-process the frame. It is fast, very effective at removing hot pixels and smoothing noise, and also very good at preserving edges.
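For example, a minimal sketch with OpenCV (the 3×3 kernel size is just a starting point):

```python
import cv2
import numpy as np

def prefilter(frame8: np.ndarray, ksize: int = 3) -> np.ndarray:
    """Median-filter the 8bpp grayscale frame before thresholding.

    Removes hot pixels and smooths shot noise while preserving the
    donut edges far better than a Gaussian blur would.
    """
    return cv2.medianBlur(frame8, ksize)
```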

Tim

This is just brainstorming…

We don’t care about where the stars are, just their size, so if the stars could be amalgamated in some way to give a plot of radius against probability it may be possible to extract the star data from the noise.

Maybe it’s possible to rework the circular Hough transform to do this.
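As a rough sketch of that amalgamation idea (assuming you already have a list of candidate radii from the Hough accumulator or any other detector; the bin count is arbitrary):

```python
import numpy as np

def dominant_radius(radii, bins: int = 20) -> float:
    """Histogram candidate radii and return the most common value.

    In a single defocused frame every real star should have roughly the
    same donut radius, so they pile up in one bin, while noise
    detections scatter across many bins.
    """
    radii = np.asarray(radii, dtype=float)
    counts, edges = np.histogram(radii, bins=bins)
    peak = int(np.argmax(counts))
    return 0.5 * (edges[peak] + edges[peak + 1])
```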

There’s this http://staff.itee.uq.edu.au/lovell/aprs/dicta2003/pdf/0879.pdf which uses edge detection to get gradient vectors and then exploits the properties of pairs of vectors across a circle. They say it’s resistant to noise.

A search on “fast circle detection” turns up a lot more papers about this, and not all of them are behind paywalls.

Detecting edges and doing the CHT on the edges also seems useful.
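Something like this sketch with OpenCV (parameter values are guesses):

```python
import cv2
import numpy as np

def edge_then_hough(frame8: np.ndarray):
    """Build a Canny edge map, then run the circular Hough transform on it."""
    edges = cv2.Canny(frame8, threshold1=30, threshold2=90)
    return cv2.HoughCircles(edges, cv2.HOUGH_GRADIENT, dp=2, minDist=15,
                            param1=90, param2=15, minRadius=3, maxRadius=40)
```

Note that OpenCV’s HoughCircles already runs Canny internally (param1 is its upper threshold), so in practice you might feed it the filtered grayscale frame instead of a pre-computed edge map.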

Chris

I have no idea how the math works, but some sort of wavelet processing seems like it could do the job.
The attached image has just had the bottom 3 layers turned off with MultiscaleMedianTransform in PixInsight.

Clear Skies

Mick

If you were to apply aggressive noise reduction / hot pixel removal on the 8bpp grayscale image, it would seem like most of the junk in the image could be removed before the conversion to a binary image. Even if this removes many of the fainter stars, that should not be a problem. You only need a handful of the brightest stars in the image to be used. It would be up to the user to adjust autofocus frame exposures to ensure there were a handful of bright stars in the image.

cm

I was thinking some kind of multiscale process too.

Is the plan to use the Hough transform for the entire sequence and abandon the HFR calculation?

I like the wavelet idea. In most image noise reduction efforts, you’re trying to preserve some relatively large-scale image structure that is close to the noise level. However, the goal you’re after is quite different. All you care about is isolating the stars, which are features that share nearly the exact same structure scale. In fact, making any other larger-scale (galaxies and nebulae) and smaller-scale structures (shot noise) completely go away would be ideal. This can be done with wavelet processing and there are many techniques to choose from. Isolating the scale you want and stripping off all other scales should not be too difficult, and I would think it would be very effective.
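One way to sketch that scale isolation is an à trous (starlet) style decomposition: smooth with progressively wider kernels, take the differences as detail layers, and keep only the layers whose scale matches the donuts. The version below is only an illustration; it uses Gaussians of doubling sigma rather than the usual B3-spline kernel, and the layer indices to keep would need tuning per setup:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def isolate_scales(image: np.ndarray, keep=(2, 3, 4), n_layers: int = 6):
    """Keep only the detail layers whose spatial scale matches the donuts.

    Finer layers (shot noise, hot pixels) and coarser layers (nebulae,
    galaxies, background gradients) are simply discarded.
    """
    img = image.astype(np.float64)
    layers = []
    smoothed = img
    for j in range(n_layers):
        next_smoothed = gaussian_filter(img, sigma=2.0 ** j)
        layers.append(smoothed - next_smoothed)  # detail at scale ~2**j px
        smoothed = next_smoothed
    return sum(layers[j] for j in keep)
```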

Tim

I’m not sure how you are going about this - but computers are so fast these days that you can use some pretty brute force techniques and have plenty of time. I would not use bias or darks or convert to binary. I would just assess the overall image statistics in terms of background and sigma - and find star candidates throughout the image, starting with pixels above a given threshold. Use the high pixels as starting points for an exploration of connected pixels above a lower threshold. This will work for normal stars and donuts - and you don’t care or need to know which, because the HFD and other parameters will still work.
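A sketch of that two-threshold, connected-pixel approach with NumPy/SciPy (the sigma multipliers are arbitrary and would need tuning):

```python
import numpy as np
from scipy import ndimage

def find_star_candidates(frame16: np.ndarray, high_sigma=8.0, low_sigma=3.0):
    """Grow star candidates from bright seed pixels.

    Background and sigma come from simple, robust image statistics; the
    regions are grown at a lower threshold so that both tight stars and
    donuts come out as single connected blobs.
    """
    img = frame16.astype(np.float64)
    background = np.median(img)
    sigma = 1.4826 * np.median(np.abs(img - background))  # robust sigma via MAD

    low_mask = img > background + low_sigma * sigma     # region-growing threshold
    high_mask = img > background + high_sigma * sigma   # seed-pixel threshold

    # Label regions above the low threshold, then keep only the regions
    # that contain at least one seed pixel above the high threshold.
    labels, _ = ndimage.label(low_mask)
    seeds = np.unique(labels[high_mask])
    seeds = seeds[seeds != 0]

    return [np.argwhere(labels == lab) for lab in seeds]  # pixel coords per star
```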

Once you find all “stars” you can rank them and use only the high-ranked ones for autofocus. You don’t need to find all the stars perfectly for autofocus. If you have many stars you can afford some false negatives, but you want to make sure to reject false positives.

When you have the pixels for each star established, still in the original 16-bit form, you can do FWHM fitting and all kinds of things - including recognition of washer shapes vs. Moffat profiles. For autofocus near focus I prefer FWHM rather than HFD because HFD is very sensitive to the background level. HFD only makes sense to me far from focus with a FocusMax-like routine - otherwise it will be very intensity dependent, which is bad for autofocus. The FWHM value should be much more consistent across star intensities. And I’m sure people would like seeing FWHM in arc-seconds.
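For what it’s worth, a quick moment-based FWHM estimate for a star cutout might look like this (a sketch only; it assumes a roughly Gaussian profile, so it is meaningful near focus, and the plate scale is a placeholder):

```python
import numpy as np

def estimate_fwhm(cutout: np.ndarray, background: float,
                  arcsec_per_pixel: float = 1.0) -> float:
    """Estimate FWHM in arc-seconds from intensity-weighted moments.

    For a 2-D Gaussian, <r^2> = 2 * sigma^2, so
    FWHM = 2.3548 * sqrt(<r^2> / 2).
    """
    img = np.clip(cutout.astype(np.float64) - background, 0.0, None)
    total = img.sum()
    ys, xs = np.indices(img.shape)
    cy = (ys * img).sum() / total          # intensity-weighted centroid
    cx = (xs * img).sum() / total
    r2 = (ys - cy) ** 2 + (xs - cx) ** 2
    sigma = np.sqrt((r2 * img).sum() / total / 2.0)
    return 2.3548 * sigma * arcsec_per_pixel
```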

For the ranking of stars you can include things like the expected min/max FWHM - as an additional rejection criterion.

For strong vignetting or bright luminosity you may need to do a form of “star extraction” to remove the low frequency background image info.
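One simple way to sketch that background flattening (the filter size is a guess; it just has to be much larger than any star):

```python
import numpy as np
from scipy.ndimage import median_filter

def flatten_background(frame16: np.ndarray, size: int = 65) -> np.ndarray:
    """Subtract a very wide median filter to remove vignetting and other
    low-frequency background structure before star detection."""
    img = frame16.astype(np.float64)
    return img - median_filter(img, size=size)
```

In practice you would probably estimate the background on a downsampled copy for speed and then upscale it.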

But all this does assume you can get the core routines fast enough to run quickly.

Frank

You might have a look at this

Wavelet or other multi-scale techniques are efficient at separating these kinds of structures.

I have no idea what method will be best to measure the doughnuts.

It may be possible to use additional information about the optical system to estimate the relationship between star size and focus position - the slope of the V curve.

Given this it may be possible to estimate where the focus position should be with a small number of unfocused star size measurements, two or three, and use that to drive to the expected focus position, at least close enough that something like the current method will work. If fewer fields need to be checked then it’s acceptable to take longer.
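As a sketch of that idea: if the V-curve slope (star size per focuser step) is known for the optical system, two defocused measurements are enough to extrapolate where the vertex should be. Everything here is hypothetical, not an SGPro parameter:

```python
def estimate_focus_position(pos1, size1, pos2, size2, slope):
    """Extrapolate the focus position from two out-of-focus measurements.

    Assumes a symmetric V-curve, size = slope * |pos - focus|, so each
    measurement yields two candidate focus positions; the pair that
    (nearly) agrees between the two measurements is the answer.
    """
    candidates1 = (pos1 - size1 / slope, pos1 + size1 / slope)
    candidates2 = (pos2 - size2 / slope, pos2 + size2 / slope)
    best = min(((abs(a - b), 0.5 * (a + b))
                for a in candidates1 for b in candidates2),
               key=lambda pair: pair[0])
    return best[1]
```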

Frank’s suggestion of starting at a pixel that is in a star and finding all the pixels that are connected seems a good one. It removes the need to know the shape of the star image: you continue until you are definitely outside the star, then, from a plot of intensity against distance, get the star size. It should be possible to use the local background, avoiding problems with vignetting. This will have trouble with overlapping stars, but all the same, a plot of star size against frequency should give something useful.

Chris

How about using a morphological opening operator to get rid of the small gunk? Have you tried the circular Hough transform on the actual raw image, or just the binarized version?
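A sketch of the opening idea with OpenCV (the structuring element has to stay smaller than the thinnest part of a donut ring, or the opening will eat the stars too):

```python
import cv2
import numpy as np

def open_binary(binary: np.ndarray, ksize: int = 3) -> np.ndarray:
    """Morphological opening (erosion then dilation) of the binary image.

    Removes isolated specks smaller than the structuring element while
    leaving the larger donut rings mostly intact.
    """
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (ksize, ksize))
    return cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)
```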

This may be worth looking at. Nice online book on image processing. A lot of references here.

Also see the web site

OK… thanks for all the tips and advice. We are using wavelets now and it does indeed do a good job of separating structures from the background. All in all, it’s pretty fast (maybe only a little slower than the current method so far…). I have about a hundred AF packs with donuts in them (from lots of different setups), so there is plenty to test with, and we are seeing very good results right now.

This has the added bonus of rejecting defective columns / pixels as false stars with no additional processing… they simply do not meet the criteria that define a circle.

This refactor of AF will be released in the first 2.5.1 beta (2.5.0 will be released first… soon). Along with this beta, we will also be removing the nebula rejection and sample size sliders (I do not believe these are relevant with the new AF methodology). I am hoping that the new AF method will be resilient enough that we can even avoid setting the minimum star size (in arcsec). Testing and field use will tell if that last statement is true.

These are different things… All we have changed is the way in which we detect star centroids and radii. The measurement of the star happens after it has been located and can be whatever we want… HFR, FWHM, whatever…

Excellent!

Max

Can’t Wait!