rlancaste wrote: Jo,

This is certainly a good idea for the future, but right now it would unfortunately be rather hard to implement. As I mentioned, it is difficult to say exactly which index file will solve any particular image. You can say that certain index files tend to work well with a certain setup, but you never know with certainty which exact one will solve it. So the question becomes: how would we determine which indexes to load? Do we load just the ones closest to the scale we think the image has? How much bigger or smaller do we go? Should that be somehow user configurable?

Also, I would want to make sure that whichever way we do this for the internal solver, it also works for the local (external) solver. Right now, both the internal and external solvers are configured to load all the indexes in the given folder list. It is certainly possible to load only specific files with either method, but that would require a significant change to the code. At the moment the only options are "load all the indexes into memory" or skip that and load them sequentially. I have also considered a third option that automatically determines which of those two to do; what you describe would then be a fourth option.

Another problem is that I'm not 100% sure the RAM calculation I have in there is correct yet. It is supposed to disable the "load all indexes in memory" option when there is not enough RAM. Before we play with loading only certain indexes, I would want to be certain that calculation works.



Rob,

Making it user configurable would be one option. What I did was simply move all the index files that the astrometry page in the solver options marked as optional (i.e. those with an asterisk, presumably based on the calculated FOV of my telescope?) into a subfolder so they would not be loaded. That made a TREMENDOUS difference in speed.
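For what it's worth, that asterisk logic could in principle be replicated in code. Here is a rough Python sketch, NOT the actual StellarSolver implementation: the quad-size formula for the 4200-series index files and the "10% to 100% of the FOV" rule of thumb are my reading of the astrometry.net documentation, and the function names are my own.

```python
import math

def index_scales(series=4200, count=20):
    # Approximate skymark (quad) diameter ranges for the 4200-series:
    # index-42NN covers roughly 2*sqrt(2)**NN to 2*sqrt(2)**(NN+1) arcmin,
    # a geometric progression that doubles every two series numbers.
    scales = {}
    for n in range(count):
        lo = 2.0 * math.sqrt(2) ** n
        scales[f"index-{series + n}"] = (lo, lo * math.sqrt(2))
    return scales

def suggested_indexes(fov_arcmin):
    # Common rule of thumb: keep the indexes whose quad-size range
    # overlaps the window from ~10% of the FOV up to the full FOV.
    lo_w, hi_w = 0.1 * fov_arcmin, fov_arcmin
    return [name for name, (lo, hi) in index_scales().items()
            if hi >= lo_w and lo <= hi_w]
```

For a 60-arcminute FOV this picks a handful of mid-scale indexes and drops both the huge small-scale files and the all-sky large-scale ones, which is roughly what the asterisks on the options page seem to express.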

I was hoping this would be easy to implement, perhaps by adding another check box like "Do not load optional index files for parallel solving", or something to that effect.

Although my Pi4 has 8 GB of RAM plus another 8 GB of swap space on an SSD, and the total size of the index files I have downloaded is 5.6 GB, the solver consistently determines that there is not enough RAM to load them all into memory. However, just moving the largest sets (those over 1 GB) into a subfolder where the solver can't see them makes the solver happy, and I am rewarded with a fivefold speed increase.
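Regarding the RAM calculation: a minimal sketch of the kind of check presumably involved (my guess, not the actual StellarSolver code; the 0.8 safety factor is invented) would be comparing the total on-disk size of the index files against available memory:

```python
import os

def total_index_bytes(folder):
    # Sum the on-disk size of all .fits index files in the folder.
    return sum(os.path.getsize(os.path.join(folder, f))
               for f in os.listdir(folder) if f.endswith(".fits"))

def can_load_all(index_bytes, available_ram_bytes, safety=0.8):
    # Only allow "load all indexes into memory" if they fit comfortably
    # in physical RAM. Swap is deliberately ignored here: paging the
    # indexes through swap is exactly what in-memory loading is meant
    # to avoid.
    return index_bytes <= safety * available_ram_bytes
```

On the numbers above, 5.6 GB of indexes against 8 GB of RAM would pass a check like this (5.6 ≤ 0.8 × 8), so the real calculation is presumably either stricter or, more likely, measuring currently available rather than total RAM.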

The advantage of an option to initially load only the non-optional index files is that it would eliminate the need to move the optional files into a subfolder, keeping them accessible to the solver in case parallel solving fails. The solver could then fall back to sequential solving, making use of the entire data set.
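Until something like that is built in, the manual workaround can at least be scripted. A hypothetical helper (the "parked" subfolder name, the 1 GB threshold, and both function names are my own invention, not anything the solver provides) that parks the large index files and can restore them for a sequential retry:

```python
import shutil
from pathlib import Path

def park_large_indexes(index_dir, threshold_bytes=1_000_000_000,
                       subdir="parked"):
    # Move index files above the size threshold into a subfolder the
    # solver does not scan; return their names so they can be restored
    # later (e.g. when falling back to sequential solving).
    index_dir = Path(index_dir)
    parked_dir = index_dir / subdir
    parked_dir.mkdir(exist_ok=True)
    moved = []
    for f in sorted(index_dir.glob("index-*.fits")):
        if f.stat().st_size > threshold_bytes:
            shutil.move(str(f), str(parked_dir / f.name))
            moved.append(f.name)
    return moved

def restore_indexes(index_dir, subdir="parked"):
    # Move everything back from the subfolder into the main index folder.
    index_dir = Path(index_dir)
    for f in (index_dir / subdir).glob("index-*.fits"):
        shutil.move(str(f), str(index_dir / f.name))
```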

I thought that was how the code was structured anyway, but I understand that if that is not the case, significant changes might be necessary that do not appear justified at the moment.

But please keep this in mind nevertheless. The difference in speed is indeed enormous, and many of us run our rigs on small SBCs with limited RAM. As things stand, parallel solving may always require manually removing optional index files, and moving them back in those special cases when the solver fails. That just feels a bit klutzy.

But, by all means, thanks again for building this fantastic solver!

Jo
