When given an RGB image, it will be possible to select which color channel to use for star extraction or solving, for all of the extraction and solving methods. For SEP, I was able to just send it the portion of the buffer I want it to examine. For external solvers, this is actually helpful because we can export just the channel we want to solve instead of the whole RGB image, which reduces the size of the files being written. It also allowed me to simplify the code in several places: since SEP and the solvers ignore every channel except the first one you give them, there is no need to include the other channels when downsampling and exporting, just the one we want them to use.
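To illustrate the idea of handing the extractor only one channel, here is a minimal sketch (my own illustration, not the actual StellarSolver code; the function name and planar layout are assumptions) that copies a single plane out of a planar RGB float buffer, so the extractor or solver only ever sees a monochrome buffer:

```cpp
#include <cstddef>
#include <vector>

// Hypothetical sketch: given a planar RGB buffer (all R pixels, then all G,
// then all B, as in a 3-axis FITS image), copy out just the requested
// channel so downstream code works on a monochrome buffer.
enum ColorChannel { RED = 0, GREEN = 1, BLUE = 2 };

std::vector<float> extractChannel(const std::vector<float> &rgb,
                                  std::size_t width, std::size_t height,
                                  ColorChannel channel)
{
    const std::size_t plane = width * height;
    const float *start = rgb.data() + static_cast<std::size_t>(channel) * plane;
    // Copy exactly one plane; the other two channels are never touched.
    return std::vector<float>(start, start + plane);
}
```

Because the result is just one plane, anything written to disk for an external solver is a third of the size of the full RGB buffer.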
As requested, when given an RGB image, the default channel will be Green instead of Red, which seems to be the default everywhere else. I agree that most telescopes will be best corrected in green light; for reflectors it doesn't matter, but for refractors it really does.
One benefit of this is that when I build the photometry tool I want to make with StellarSolver, I will be able to get the magnitude separately in each color channel, which is very useful.
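As a sketch of what per-channel photometry enables (this is my own illustration, not StellarSolver API; the function names are invented), an instrumental magnitude can be computed from each channel's extracted flux, and the difference between two channels gives a color index:

```cpp
#include <cmath>

// Standard instrumental magnitude: m = -2.5 * log10(flux).
// flux must be positive (total counts within the star's aperture).
double instrumentalMagnitude(double flux)
{
    return -2.5 * std::log10(flux);
}

// With extraction run separately on two channels, a color index is just
// the difference of the two instrumental magnitudes for the same star.
double colorIndex(double fluxA, double fluxB)
{
    return instrumentalMagnitude(fluxA) - instrumentalMagnitude(fluxB);
}
```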
Good to know about your progress. Very encouraging indeed. I've been looking at the sources, and now I have more questions:
* I like the approach of using only monochromatic images. I've seen that you have limited the code, for now, to the individual channels (the ColorChannel enum and the new m_ColorChannel property), but with the current implementation it should also be possible to change InternalExtractorSolver::getFloatBuffer and ExternalExtractorSolver::saveAsFITS to easily handle an "average" mode, in which the three color channels are averaged to generate a "Luminosity" channel. It's still not clear to me which approach will be best: use the green channel, or compute a synthetic luminosity channel and work with that. It's encouraging to see that the current implementation will allow experimenting with this too.
* How is StellarSolver linked with KStars? Dynamically or statically? As I'm an Astroberry user, I've set up a Raspberry Pi to build KStars and StellarSolver, but it's a real pain to build KStars each time (it takes multiple hours). Do you have any tricks to accelerate build times?
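The "average" mode described in the first point above could be sketched roughly like this (a hypothetical illustration under the same planar-RGB assumption, not the shipped implementation): average the three planes into one synthetic luminosity buffer that getFloatBuffer or saveAsFITS could then hand to the extractor instead of a single channel.

```cpp
#include <cstddef>
#include <vector>

// Hypothetical "average" mode: collapse planar R, G, B planes into a single
// synthetic luminosity plane by a simple per-pixel mean.
std::vector<float> averageChannels(const std::vector<float> &rgb,
                                   std::size_t width, std::size_t height)
{
    const std::size_t plane = width * height;
    std::vector<float> lum(plane);
    for (std::size_t i = 0; i < plane; ++i)
        lum[i] = (rgb[i] + rgb[plane + i] + rgb[2 * plane + i]) / 3.0f;
    return lum;
}
```

Because the mean never exceeds the largest input value, this variant cannot saturate pixels the way a straight sum of the channels can.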
So I considered having an integration option as a last possibility, and I may still add it, but it does require more memory and CPU, since it means allocating a whole new buffer and iterating over every pixel in the image to compute new values. That sounds like no big deal until you consider the size of the images, the number of pixels, and the number of times the calculation would need to run. Also, integrating all three channels can easily saturate the stars in an otherwise good image. But if it is an option you can choose, rather than the default, that might be fine.
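The saturation concern can be made concrete with a small sketch (again a hypothetical illustration, not project code): naively summing the three planes can push bright stars past the sensor's maximum value, so the result has to be clipped, which flattens exactly the star profiles that HFR measurement depends on.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Hypothetical "integration" mode: per-pixel sum of the planar R, G, B
// planes, clipped at the sensor's saturation level. A star that is bright
// but unsaturated in each individual channel can still clip in the sum,
// losing its peak shape.
std::vector<float> integrateChannels(const std::vector<float> &rgb,
                                     std::size_t width, std::size_t height,
                                     float saturation)
{
    const std::size_t plane = width * height;
    std::vector<float> out(plane);
    for (std::size_t i = 0; i < plane; ++i)
        out[i] = std::min(rgb[i] + rgb[plane + i] + rgb[2 * plane + i],
                          saturation);
    return out;
}
```

For example, with a 16-bit saturation level of 65535, a pixel reading 40000/30000/20000 across the three channels is fine in each channel alone but clips in the sum.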
I've been able to compile and run the new version on a Raspberry Pi, but after source extraction or plate solving, I can't find any reading of the HFR or FWHM. I assumed that StellarSolver was the one computing those quantities. I know I must be missing something.
So, KStars won't be able to use the new features yet, except that it will pick up StellarSolver's default, which is now the green channel for RGB images. Also, note that these changes are in a branch; did you build from the branch? If you are using the StellarSolverTester as I suggested, you need to select the HFR option in the drop-down menu on the left side before doing star extraction. Then, when you run the star extraction, you will see the HFR next to each star in the star list.
Yes, don’t worry, I understand the process. I’m currently using your test program: I checked out your colourChannel branch, built the test program, and was playing with it. The detail I was missing was the HFR option in the combo box. Thanks for the clue.
I've been playing with some images in the StellarSolver test program, and there is something I can't understand. I've used the attached image. As you can see, the red and blue channels are completely different (red in focus, blue out of focus), but when I use SEP w/HFR or External SExtractor w/HFR, I get the same HFR value for each star on both channels. I expected very different results when selecting the red or blue channel, and this is not the case. What am I doing wrong? Because if StellarSolver can't see any difference in HFR, the focuser algorithm will be unable to use those channels to improve the focus point.
So the image you attached is not an RGB FITS image. It is monochromatic: it has only two data axes. Did you attach the wrong image, or maybe merge the channels into one before attaching it here? Or perhaps it is a bayered image that has not been debayered?
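The check being described here can be sketched as a small classification helper (my own illustration, not the actual StellarSolver code): in FITS terms, NAXIS = 2 means a monochrome image (which may still carry an undebayered Bayer mosaic), while a true RGB image has NAXIS = 3 with a third axis of length 3, and only then is there a channel to select.

```cpp
#include <string>

// Hypothetical sketch: classify a FITS image from its axis count (NAXIS)
// and the length of the third axis, if present. Only a 3-axis image whose
// third axis has length 3 offers selectable R/G/B channels.
std::string classifyFitsImage(int naxis, long naxis3 = 1)
{
    if (naxis == 2)
        return "monochrome (possibly bayered)";
    if (naxis == 3 && naxis3 == 3)
        return "RGB";
    return "unsupported";
}
```

With a real file, the two inputs would come from the FITS header keywords NAXIS and NAXIS3 (e.g. via cfitsio); note that a bayered frame still reports NAXIS = 2, so debayering has to happen before any per-channel comparison is meaningful.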