
Re: Post-run performance monitoring vision for logging

  • Posts: 398
  • Thank you received: 117
Tried to solve this in a smaller, private email group without success. Maybe the larger group needs to weigh in and help decide. We need consensus on how to approach performance monitoring and analysis when parsing log data is the only method available. A perfect example of this is post-run focus analysis (although guiding, pointing, and plate solving are other examples that might apply).

Using a focus example, an issue recently exposed is that the linear focus algorithm logs its intermediate products inconsistently compared with the two pre-existing algorithms. In the iterative and polynomial algorithms, each separate image fetch reports an HFR and focuser position at INFO log level. This is nice for real-time display, but less ideal for post-run analysis. In contrast, the linear algorithm reports no intermediate HFR/position results, but rather a final summary array of the combined intermediate results. That is not quite as concise for real-time display, but very convenient for post-run analysis (plotting). Awkwardly, the linear algorithm logs this information at DEBUG level instead of INFO level, so at INFO level there are no intermediate results visible for linear at all.

So, as a user trying to analyze focus performance post-run, I'm stuck with bad choices: write scripts to parse the intermediate results of the older algorithms and assemble a post-processing result myself, or switch to the linear algorithm, turn on DEBUG level, and wade through it to get the full result ready for post-processing.

I suggest we should strive for consistent data products in cases like this. More importantly, I think developers should be thinking about how the data they log might be used. If there's a possibility that data might be post-processed, I suggest it should be logged at INFO level. This saves users from having to turn on, and wade through, DEBUG to get analysis products; DEBUG should be reserved for developers to use for development debugging. In an ideal world, simple intermediate logging (for displays) should be additive to a summary log entry whenever post-run analysis might apply. In any case, users should not have to turn on DEBUG to see data that is obvious post-processing fodder (like focus). Now, we need some comments please... Thanks!
The following user(s) said Thank You: Eric
4 years 2 days ago #53376


  • Posts: 1029
  • Thank you received: 301
That's a very good point. However, I'll disagree on one thing: you can't ask a developer to write log output that satisfies a requirement when no such requirement was ever given. Logs are currently a way for the developers of a feature to debug it post-publication. That might seem a shame, but that's what it is: logs are just enough for the original developers to understand how their feature is behaving.

Now, I'm not bringing added value here, obviously :) So let's imagine how we could improve the situation.

Recently, we restructured the star detection mechanism so that the focus module would use a generic detection interface. I suggest we don't produce usable logs from the specializations. I'm not saying those specializations should not output logs, just that the logs usable for statistics should not come from the specializations.

I believe we should restructure the focus algorithm in the same way, with a generic interface and a specialization for each algorithm. On the subject of logs, that would push the usable logs even further toward the generic side since, as I mentioned, specializations should not have to care about outputting usable logs in that context.
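
To make that concrete, here is a rough sketch of what I mean. The class and method names below are invented for illustration and are not the actual Ekos classes: the specializations only decide where to move next, while the generic driver owns both the per-step DEBUG detail and the final INFO summary.

```cpp
// Illustrative sketch only -- names are hypothetical, not the real Ekos API.
#include <QDebug>
#include <QString>
#include <QStringList>
#include <QVector>

struct FocusSample
{
    int position;   // focuser position for this capture
    double hfr;     // measured HFR for this capture
};

// Generic interface: each specialization (iterative, polynomial, linear, ...)
// only proposes the next focuser position; it does not log analysis data.
class FocusAlgorithm
{
public:
    virtual ~FocusAlgorithm() = default;
    virtual int nextPosition(const QVector<FocusSample> &samples) = 0;
    virtual bool isDone(const QVector<FocusSample> &samples) const = 0;
};

// Generic driver: the one place that produces logs usable for statistics.
class FocusRunner
{
public:
    void recordSample(const FocusSample &s)
    {
        m_samples.append(s);
        // Per-step detail stays at DEBUG, for developers.
        qDebug() << "Focus sample" << m_samples.size()
                 << "position" << s.position << "HFR" << s.hfr;
    }

    void finish(bool success)
    {
        // One consistent INFO-level summary regardless of the algorithm used;
        // this is the line a post-run analysis tool would parse.
        QStringList pairs;
        for (const FocusSample &s : m_samples)
            pairs << QString("%1:%2").arg(s.position).arg(s.hfr);
        qInfo() << "Autofocus" << (success ? "complete" : "failed")
                << "samples" << pairs.join(", ");
    }

private:
    QVector<FocusSample> m_samples;
};
```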

In order to measure speed, we need information on the time it took for the algorithm to achieve focus.
In order to measure accuracy, we need information on the quality the algorithm achieved.
In order to measure stability, we need information on the variance of the two previous indicators.
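
For illustration, those three groups of indicators could be collected into a small per-run record like the hypothetical one below; the field names are mine, not an existing structure.

```cpp
// Hypothetical per-run summary record; field names are illustrative only.
struct AutofocusRunSummary
{
    // Speed: how long the run took to achieve focus.
    double elapsedSeconds = 0.0;
    int iterations = 0;

    // Accuracy: the quality the algorithm achieved.
    double finalHFR = 0.0;
    int finalPosition = 0;

    // Stability: variance of the indicators above, which has to be
    // computed across several runs rather than within a single one.
    double hfrVariance = 0.0;
    double elapsedVariance = 0.0;
};
```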

About the verbosity level, you mention you saw two types of logs: one that enumerates each step of the procedure, and one that reports only a summary. I think we might need the first to debug, so because I just said "debug", those should be DEBUG. I believe we will need the summaries for performance monitoring, so because I just said "monitoring", those should be INFO.

But we still need to define the actual data that it would be appropriate to output, based on a tool that would analyze the results (requirement-driven development). Additionally, we need to define a structured format for those logs. These points are open for discussion.
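
As a starting point for that discussion, one possibility would be a single key=value line per run, which is trivial to grep and to parse. The tag and field names below are only a suggestion, not something the code currently emits.

```cpp
// Hypothetical summary line format: key=value pairs on a single line.
#include <QString>

QString makeFocusSummaryLine(const QString &algorithm, double elapsedSeconds,
                             int iterations, int finalPosition, double finalHFR)
{
    return QString("FOCUS_SUMMARY algorithm=%1 elapsed=%2 iterations=%3 "
                   "position=%4 hfr=%5")
        .arg(algorithm)
        .arg(elapsedSeconds)
        .arg(iterations)
        .arg(finalPosition)
        .arg(finalHFR);
}

// Example output (made-up numbers):
// FOCUS_SUMMARY algorithm=linear elapsed=72.4 iterations=9 position=36120 hfr=1.83
```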

-Eric
The following user(s) said Thank You: Jasem Mutlaq, Wouter van Reeven, Doug S
4 years 1 day ago #53400


I've discussed this topic with Hy as well, and suggested that a separate file should be used for such algorithm-analysis purposes instead of dumping it in the logs. The separate file would be clean, and you would have complete freedom over what data to add there and in what format; you're not restricted by the logging rules. This is already done for guiding, by the way, and it could be done for focusing as well.
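
For illustration, appending rows to a dedicated analysis file could look roughly like the sketch below; the file name, location, and columns are invented for the example and are not an existing Ekos file.

```cpp
// Hypothetical sketch: append one CSV row per focus capture to a dedicated
// analysis file, independent of the normal logging rules.
#include <QDateTime>
#include <QDir>
#include <QFile>
#include <QStandardPaths>
#include <QString>
#include <QTextStream>

void appendFocusAnalysisRow(const QString &algorithm, int position, double hfr)
{
    const QString dir = QStandardPaths::writableLocation(
                            QStandardPaths::AppLocalDataLocation) + "/focuslogs";
    QDir().mkpath(dir);

    QFile file(dir + "/autofocus-" +
               QDate::currentDate().toString("yyyy-MM-dd") + ".csv");
    const bool writeHeader = !file.exists();

    if (!file.open(QIODevice::Append | QIODevice::Text))
        return;

    QTextStream out(&file);
    if (writeHeader)
        out << "timestamp,algorithm,position,hfr\n";   // header written once
    out << QDateTime::currentDateTime().toString(Qt::ISODate) << ','
        << algorithm << ',' << position << ',' << hfr << '\n';
}
```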
4 years 1 day ago #53414
