Interesting idea. Some thoughts:
I would not want to assume that having OAG1 in OT1 implies that it is the guider to use, or even that that OAG port has a guide camera attached. What if OT2 also has a guide camera somewhere? Which one should be used then? I'd like to see the guide camera explicitly listed in the optical train as well. The choice of which camera to use should still rest with the user, as we cannot deduce which one it is; maybe the user wants to guide with the main camera of OT2. Also, what if the guide camera is physically there but not to be used by INDI because PHD2 accesses it natively? (This is what I actually use.) The same goes for my SX-AO unit: it's there in the optical train but not accessed by INDI, as PHD2 controls it natively.
Then, on the idea of two imaging cameras: I like it, and I see challenges, such as when to dither. Both cameras need to wait for that to happen, and if one camera is waiting for the other to complete its sub, it might have enough time to make another complete sub itself. Extrapolating to N cameras is cool; I agree we should design for N>=1 immediately once we leave N==1, where we are today.
I wonder what the purpose of something like a reducer in the optical train is to INDI/EKOS. It could be used to calculate the new focal length of course, but then all the spacing rings etc. need to be added too! This would be awesome to have, of course.
OT-N support is very interesting, and it will be difficult to implement right without impacting N==1 stability, which is already quite a challenge today. In the end I think it will improve stability, so I'm in.
Great feedback folks! How do we break this into milestones? Perhaps:
1. Better equipment manager that includes DSLR lens and focal reducers (I'm working on this).
2. Optical Train Editor + Backend database.
3. Module settings manager?
4. Decouple state from GUI for each module?
Good idea, but similar to what Eric and Wolfgang pointed out, I believe it can only be a first step. Take e.g. FlipFlats/FlipDark, a flat position (light panel at a fixed Alt/Az position), a roll-on-roll-off roof, or a dome. Then add constraints like certain positions that need to be avoided, a position the scope must be in to safely close the observation hut, an observatory horizon with a few trees, or a weather station, and now imagine scheduling an observation list over a few nights while reacting to changing weather conditions.
It seems that hardcoding the collaboration between modules, as is done now, will not be a good solution in the long run, as we can neither predict nor code every automation/interaction between modules that our users will want to have.
The first question therefore is: how far does Ekos/INDI want to go? Does it want to be a solution that is capable (in the long run) of running a remote observatory, or not? If not, what exactly will be out of scope, and how will it interface with software capable of the out-of-scope stuff?
Let me assume in the following that Ekos/INDI will want to go quite far on this journey. How could we achieve this?
The first step, as Wolfgang said, will be refactoring the existing code to separate UI and state more clearly. I can only speak about the plate-solving module right now, but this one really orchestrates many of the other modules (take picture, solve, slew to position) in order to execute, say, a polar align. All of this is done through code that reacts to events and tries to keep track of the current state in variables. Many event-processing routines consist of lots and lots of if/then/else deciding how to react to an event given the current state. For someone new to the code base, it is hard to reason about and hard to change. It is also completely unclear which events to expect when, and in what order, without reading the other module's code. Here a Strategy and/or Visitor pattern should be applied (different ones for Load&Slew, Take&Sync, PolarAlign, AimAfterMeridianFlip, etc.), making the state, and how to react to events, explicit. This yields a set of orchestrator classes for each module and will hopefully also make it easier to attract contributors to the codebase.
As a parallel step, I believe a different way of collaboration between modules will be necessary. While currently this is some form of choreography (looking at others, then doing the right thing), I'd introduce an orchestrator class (or set of classes) for the whole system (including the optical train) that is responsible for executing a single job. This should allow us to reason about the whole system and avoid regressions during the refactoring of step one. Then, as more optical trains come in, relax this and have independent but collaborating orchestrator classes per train. This will hopefully also get rid of some more recent bugs where this collaboration fails.
If we want to keep the choreography, a blackboard could be used as an alternative, where all modules publish state so that other modules can be aware of what's happening and can veto, vote, or otherwise collaborate using some global state. (I don't have experience with such an architecture, so I don't know whether it really makes things easier or not.) I believe this would need to be thread safe.
Then the next step could be to move that collaboration, sooner or later, into some DSL (domain-specific language) or a rule engine, so that it becomes configuration. The task is then to increase its coverage, avoid hardcoding, open up for more customization, and support less likely combinations of products more easily.
One thing that I'd not change is using INDI as an abstraction layer for instruments and as a means of separating compute.
I played with the new functions, only with sims because of a cloudy night.
Here's what I have found:
The fields Pix, FOV, FL and F/ in the Alignment tab seem wrong:
- Considering the focal length of 2800 with a 0.71 reducer, the displayed FL should be 1988.
- With a 3.8 µm pixel size, the Pix field should be 0.39.
- F/ should be 7.1.
- The FOV should be 30.3' x 22.7'.
At least, if I understand the concept correctly.
Once you have added an OT (with the "+" button), you can't remove it with the "-" button.
Could the problem with the 0.71 reducer be related to my French keyboard, which refuses the period and only accepts the comma?
(In French, the comma is used instead of the period and the period instead of the comma... Don't ask.)
Finally, for this fast first little test, one suggestion:
In the planetarium, instead of the INDI driver of the mount could you display the profile? C11 or C11@7.1 in my case ...
Now, let me dream a little bit.
One day, EKOS will be able to deal with several mounts and several OTs.
You just change the OT in the mount tab, a new crosshair is created, and depending on which mount is selected, you send the ad hoc mount where you want. Changing the profile in the Align tab or in the Capture tab, you drive your scopes with one single computer, watching them all in the same KStars: eventually with one RasPi per mount addressed by its IP, the INDI servers being chained or addressed through a hub.
Let's take advantage of this server/client architecture, which is SO superior to those things you can see on lesser OSs.
After restarting KStars, I can't get any optical trains to appear in any dropdowns. Only if I go into the Optical Trains editor and change something do the dropdowns populate. But if I exit KStars and restart it, the dropdowns are empty again.
An expanded view of the optical trains. Ignore the "New Train"; it was added by mistake and I can't get rid of it.
Please have the system remember autofocus settings with each train, for those of us with two or more motorized focusers that require totally different settings:
iOptron CEM120 EC2 and CEM25P
Celestron C11 EdgeHD and William Optics Star71
ASI 1600MM Pro, ASI 462MC
Moonlight Litecrawler (C11) and Motorfocus (WOStar71)
LodeStar X2 and ZWO OAG
Nextdome, AAG Cloudwatcher