Thank you for taking this into consideration.
1 - this calculation is about making sure the mean background ADU is high enough to swamp the camera's read noise, so that each subframe's exposure length is as close as possible to optimal for a given camera/telescope/sky combination. From the various formulas I have come across, the swamping factor can be anywhere from 5*RN or 10*RN to 5*RN² or 10*RN², where RN is the camera's read noise at the ISO/gain you intend to use for the imaging session. 900 ADU was just an arbitrary number I chose as an example; it would have to be a value entered by the user, since every camera has a different read noise and every user will probably prefer a different swamping factor, according to their own opinions, theories, studies on the matter, etc.
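As a rough sketch of what I mean by this calculation (the function name and the numbers are made up for illustration, not my camera's actual specs), the target background could be derived from the read noise, the sensor gain, and the chosen swamping factor:

```python
# Sketch: background level (in ADU) needed to swamp read noise by a
# user-chosen factor. All names and example numbers are illustrative.

def target_background_adu(read_noise_e, gain_e_per_adu, k=10, squared=True):
    """Mean sky background, in ADU, at which the background signal (in
    electrons) equals k*RN^2 (or k*RN if squared=False)."""
    signal_e = k * read_noise_e ** 2 if squared else k * read_noise_e
    return signal_e / gain_e_per_adu

# Example with made-up values: RN = 3 e- at the chosen ISO, gain = 0.39 e-/ADU
print(round(target_background_adu(3.0, 0.39)))  # -> 231
```

The point is just that the target ADU is camera- and user-dependent, which is why it should be a user input rather than a fixed number like 900.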
I image with a D5300, so in my case the optimal swamping factor corresponds to >800 ADU. This is before bias/offset calibration, so it matches exactly the number given by FIT Viewer when it loads the picture you just took with the Capture tool.
My workflow usually goes like this: I take some test exposures, starting with an exposure time I know typically gives me >800 ADU, read the mean given by FIT Viewer, adjust the exposure as necessary, and start a sequence of, say, 10-20 images. I am at my rig - no automation yet - and I keep watching the images in FIT Viewer, my guiding graph, etc. Sometimes I notice, as the object rises into darker parts of the sky, that the mean ADU falls below my optimal value, since the sky is darker. So I stop the sequence, adjust the exposure, and start another 10-20 images. I keep adjusting as necessary until I am ready to call it a night.

For the sake of future automation, it would be nice if the Capture / Sequencing tools could take the mean ADU the user has chosen as optimal, compare it to the one reported by FIT Viewer, and adjust the exposure if it falls outside a tolerance of, say, +/- 50 ADU from the given value. To keep things simple, only round numbers of seconds should be used (for example, increments of 5-10 seconds).

Also, I don't take darks, since I use a DSLR and have no control over the temperature; I find that calibrating with a master bias is good enough. People who use darks and keep dark libraries probably won't benefit from or even use this feature, since they would have to make suitable darks for each group of subexposures within a given session and calibrate them accordingly. So I understand if this feature doesn't get implemented.
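The adjustment rule I have in mind could be sketched roughly like this (function names, the 5-second step, and the linear scaling are all my assumptions, not anything from Ekos):

```python
# Sketch of the proposed auto-adjust rule: compare the measured mean ADU
# of the last sub to the user's target, and step the exposure in round
# increments only when it drifts outside a tolerance band.

def adjust_exposure(current_exp_s, measured_adu, target_adu,
                    tolerance_adu=50, step_s=5):
    """Return a new exposure length in whole seconds, or the current one
    if the measured mean is already within tolerance of the target."""
    if abs(measured_adu - target_adu) <= tolerance_adu:
        return current_exp_s
    # Assume sky signal scales roughly linearly with exposure time,
    # then snap to the nearest round step.
    scaled = current_exp_s * target_adu / measured_adu
    return max(step_s, step_s * round(scaled / step_s))

print(adjust_exposure(120, 700, 800))  # sky darker than target -> 135
print(adjust_exposure(120, 780, 800))  # within +/- 50 ADU -> stays 120
```

In practice the bias pedestal would have to be accounted for before scaling, but this captures the idea: stay hands-off inside the tolerance, and only change exposure in round increments.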
2 - this one, however, I would really love to have. I know that right now KStars displays at least two fields of view on the sky map: the solver field of view and the field of view centered on the cross-hair as you move the mouse around to check other targets. But it would be really nice if it overlaid the first field of view from Load and Slew and kept it there as a reference, without overwriting it with the following solver ones. Then, as the images coming from the camera are solved and the mount gets closer (within tolerance) to the RA/DEC coordinates of the Load and Slew picture, I could also see how far the new camera field of view (at the current, most likely wrong, rotation) is from being parallel to the old field of view from the previous session. Once the rotation is close enough (by repeatedly hitting Sync in Astrometry and having it display a new field of view as I rotate the camera), the user could turn off the original field of view to reduce clutter on the screen. A fantastic bonus would be if the program calculated by how many degrees the new rotation is off from the original one. That way the user would have a number by which to judge the new rotation, instead of just trusting their eyes on the parallelism of the sensor edges.
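The bonus number I'm asking for is just the signed difference between the two solved position angles, wrapped to the shortest rotation. A sketch (the function name and example angles are mine, not from KStars):

```python
# Sketch: signed rotation offset between the reference position angle
# (from the Load & Slew solve of last session's image) and the current
# solve's position angle, wrapped into the range [-180, 180) degrees.

def rotation_offset_deg(reference_pa, current_pa):
    """Smallest signed rotation (degrees) from reference to current."""
    return (current_pa - reference_pa + 180.0) % 360.0 - 180.0

print(rotation_offset_deg(10.0, 355.0))   # -> -15.0 (shorter way around)
print(rotation_offset_deg(175.0, -179.0)) # -> 6.0 (handles the wrap)
```

Displaying that one number next to the overlay would tell the user exactly how close the new rotation is.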
Thanks again. I will understand if the first one doesn't get implemented, but I would really, really appreciate the second one. I change camera rotation often, and it's almost impossible to get it right between sessions. For example, a few nights ago I tried capturing more data on the Heart Nebula and thought the new rotation was close enough to the old one, but once I checked the images on the computer they were quite far off (4-5°), which will mean cropping the edges where the images don't completely overlap. If the target is small with a lot of empty sky around it, that's not a problem, but if the nebula is quite wide, a few degrees can mean cropping the object itself.
Thanks for all the hard work on this suite, I absolutely love it so far!
Matteo