Image Autoguiding

  • Posts: 200
  • Thank you received: 57

Replied by Paweł on topic Image Autoguiding

Here is a primitive proof of concept on a real astronomical image (1024x1024, green-filter image of M13, quite dark).
The inside, with a 100 px margin, is taken as the reference and compared with a frame shifted by x=3 px, y=4 px.
Attached is a map of the phase shift in the low-spatial-frequency region. A nice flat ramp of the phase indicates the direction of the frame shift.
This is just a proof-of-concept calculation, but it shows that the shift can be computed very easily and is indeed very sensitive: even for a one-pixel shift we still get a very strong and clear gradient in the picture.
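The effect described above can be reproduced with textbook phase correlation. The sketch below is not Paweł's actual code (his algorithm reads the direction off the low-frequency phase ramp directly): it instead collapses the phase ramp into a single peak with an inverse FFT, using only numpy, and recovers the integer (dy, dx) shift between two frames. The function name `phase_shift` is mine, not from the paper.

```python
import numpy as np

def phase_shift(ref, frame):
    """Estimate the circular (dy, dx) shift of `frame` relative to `ref`
    by phase correlation: a pure translation shows up as a linear phase
    ramp in the cross-power spectrum, whose inverse FFT is a sharp peak
    at the shift."""
    F_ref = np.fft.fft2(ref)
    F_frm = np.fft.fft2(frame)
    cross = F_frm * np.conj(F_ref)
    cross /= np.abs(cross) + 1e-12           # keep only the phase ramp
    corr = np.fft.ifft2(cross).real          # peak lands at the shift
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # map wrap-around indices back to signed shifts
    h, w = corr.shape
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return dy, dx
```

With a frame generated by `np.roll(ref, (3, 4), axis=(0, 1))` this returns `(3, 4)`; the same phase wrap-around mentioned later in the thread is what limits the usable range to half the frame size.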
The following user(s) said Thank You: Jasem Mutlaq, nMAC, Vincent Groenewold
7 years 5 days ago #15588
Attachments:


Replied by Jasem Mutlaq on topic Image Autoguiding

I'm curious to see how _noise_ affects all of this. Turbulence, etc.?
7 years 5 days ago #15593


  • Posts: 200
  • Thank you received: 57

Replied by Paweł on topic Image Autoguiding

I expect it will mess things up at higher spatial frequencies, which we cannot use anyway due to the wrap-around of the phase. But ultimately we will see when we implement the thing with a live feed. I am thinking about implementing some simple test code outside of INDI for the RPi camera, just to try things out. I will probably have a bit of time to work on this later next week. If you manage to cook up some skeleton by then, that would be great. If not, I will start working anyway, just outside the system, with integration in mind.
7 years 5 days ago #15598


  • Posts: 365
  • Thank you received: 32
Just in between: I think this is so awesome to follow! Thanks a lot, guys.
7 years 5 days ago #15601


  • Posts: 1309
  • Thank you received: 226

Replied by Andrew on topic Image Autoguiding

Is there some kind of statistical analysis in place to prevent outliers from triggering excessive corrections? For example, if a corrupted image frame were to come in, as can happen in my experience with the ASI120, how would the algorithm respond?
7 years 4 days ago #15622


Replied by Jasem Mutlaq on topic Image Autoguiding

I believe this is beyond the scope of the phase-shift algorithm and within the scope of the PID controller that consumes its output. For example, the Ekos internal guider would ignore transient spikes for this very reason.
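As an illustration of the kind of guard meant here, a hypothetical median-based spike filter could sit between the shift measurement and the controller. This is a made-up sketch, not Ekos code; the class name, window size, and threshold are all assumptions.

```python
from collections import deque
from statistics import median

class ShiftFilter:
    """Hypothetical transient-spike guard: reject a measured shift that
    jumps far beyond the recent median, as a corrupted frame would."""

    def __init__(self, window=10, max_jump=5.0):
        self.history = deque(maxlen=window)  # recent accepted shifts (px)
        self.max_jump = max_jump             # max plausible jump (px)

    def accept(self, shift):
        """Return True if the shift looks plausible; only then should it
        be fed to the PID controller."""
        if len(self.history) >= 3 and abs(shift - median(self.history)) > self.max_jump:
            return False                     # likely corrupted frame: skip
        self.history.append(shift)
        return True
```

A corrupted frame producing a 40 px "shift" after a run of sub-pixel measurements would be dropped, while normal seeing-scale jitter passes through untouched.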
7 years 4 days ago #15623


  • Posts: 2876
  • Thank you received: 809

Replied by Rob Lancaster on topic Image Autoguiding

I was just looking over his paper and code. I definitely like it; I think it could make guiding more accurate and easier to do. One concern I have, though, is the number of calculations that have to take place for each guiding image shift. I saw that he used floats, not doubles, which helps. But wow, there are a lot of steps to get the shift, involving 3D arrays and of course FFT calculations. Could this cause problems for less powerful systems, in terms of both memory and processing speed?
Last edit: 7 years 4 days ago by Rob Lancaster.
7 years 4 days ago #15626


  • Posts: 200
  • Thank you received: 57

Replied by Paweł on topic Image Autoguiding

There is a substantial amount of calculation involved, but I think it is manageable. Implemented in Python (an interpreted language) and working on a 1-megapixel image, it took 2 s per frame on a single 2.4 GHz core without any optimisations. The real algorithm uses 5-6 smaller transforms (256x256), and we will write it as tight and optimized as possible. Furthermore, it is going to run on the main control computer, not the embedded platform (e.g. RPi). If we need to, we can use the graphics card's FFT to speed things up quite a bit (approx. 10x). There is a GPU_FFT library for the RPi which runs at 7 ms per 256x256 single-precision transform (see www.raspberrypi.org/blog/accelerating-fo...forms-using-the-gpu/), so we can probably run the algorithm at 10-30 fps even on the RPi2.
Even using FFTW on the RPi (first gen! 700 MHz) we can get to 100 ms per transform, thus a 1-2 fps frame rate. I think this is quite enough for our purpose.
I am not very concerned with the speed of the FFT - the name is *Fast* Fourier Transform ... ;)
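As a rough sanity check of those figures, anyone can time a 256x256 single-precision transform with numpy's built-in FFT. This is a stand-in for FFTW or GPU_FFT, not the planned implementation, and the absolute numbers will differ by platform and library.

```python
import timeit

import numpy as np

# One 256x256 single-precision block, as in the figures quoted above.
block = (np.random.rand(256, 256) + 1j * np.random.rand(256, 256)).astype(np.complex64)

# Average over repeated runs to smooth out timer noise.
t = timeit.timeit(lambda: np.fft.fft2(block), number=100) / 100

# The algorithm needs roughly 6 such transforms per guide frame.
per_frame = 6 * t
print(f"{t * 1e3:.2f} ms per 256x256 FFT, ~{per_frame * 1e3:.1f} ms per frame")
```

On a desktop-class CPU this lands well under the 7 ms/transform GPU_FFT figure, which supports the claim that the FFT stage is not the bottleneck.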
The following user(s) said Thank You: Jasem Mutlaq, Vincent Groenewold, Bill
Last edit: 7 years 4 days ago by Paweł.
7 years 4 days ago #15629


  • Posts: 2876
  • Thank you received: 809

Replied by Rob Lancaster on topic Image Autoguiding

2 s per frame on a good computer running a test is a long time, considering all the other stuff the computer must be doing for imaging, and considering you want to guide at a rate faster than once per second. Also remember that many will be using a powerful computer, but one of the things that makes KStars/Ekos so powerful is that you can now run the entire thing off just a Raspberry Pi 3 with no external computer.

That being said, you are very correct in what you said about optimization and about using a compiled rather than an interpreted language, etc. I bet those times will improve. I particularly like what you said about using the graphics card and the FFT library for the calculations to speed it up. If we can get the algorithm to solve at 30 frames per second on a Raspberry Pi, I think there is no problem at all.

Also, as long as we keep the option to guide with a star, I don't think we need to worry too much about it not working fast enough on all computers right away.
7 years 4 days ago #15633


  • Posts: 365
  • Thank you received: 32
I'm guessing it's not going to use 100% of the CPU for 2 s, so it wouldn't affect the rest too much. This method, as far as I read, is also not meant for the Pi per se; if it works there it would be nice, but that's not the goal, I believe. And why would you want to guide much faster than every 2 seconds? I always guide at 3 s; any lower and I'm just chasing the seeing.
7 years 3 days ago #15635


  • Posts: 200
  • Thank you received: 57

Replied by Paweł on topic Image Autoguiding

I just tested the 2D GPU FFT on my RPi2 and it ran at 6 ms per 256x256 block. We need 6 such blocks plus some additions/divisions, which comes to below 50 ms per frame, or 20 fps. That is close to live-video frame rate, and we do not need anything like it: the performance is limited by the INDI protocol, not by the processing stage. INDI can hardly do a few FPS just displaying the frames, so this will not be the bottleneck of the pipeline.
7 years 3 days ago #15636


  • Posts: 200
  • Thank you received: 57

Replied by Paweł on topic Image Autoguiding

If we used my quick-and-dirty test procedure in a single-core interpreted language, it would use 100% CPU and run like molasses ;) But we are not going to. This test was rather to check my understanding of the algorithm and to approximate the *upper* bound of the processing time. I think the lower bound is around 50 ms/frame (or 20 fps) on the RPi2 using GPU_FFT. That would be a desperate measure, though, since it needs root access and is not very portable. My plan is to use FFTW3, which is a well-optimized, portable FFT library. With this approach I hope for 2-5 fps on the RPi2 and much better on a regular PC.
Last edit: 7 years 3 days ago by Paweł.
7 years 3 days ago #15638
