Ronald Scotti created a new topic ' Slow mount response' in the forum. 3 days ago

I am running Kstars 3.6.8 on an Rpi4 with Ubuntu 22.04. I have a CGEM HC mount; PA and GoTos are fine. However, I usually need to tweak the mount position (based on a plate solve) to center the target exactly on my sensor. I do this by running the focus routine while I have the "Mount Control" window open. I find that the "Mount Control" module lags significantly when I try to make adjustments. There is a delay of a couple of seconds between when I indicate a motion in a particular direction and when that direction arrow turns red (which indicates the motion has been initiated or completed). I don't know if there is a setting (polling or something) that I am missing, or if this is an issue. I understand there is a delay in taking the image and presenting the result, but this seems more to be a delay between the Mount Control window and the mount. Also, it would be "wonderful" if the arrows could be indicated (by numbers or something) on the Kstars display, so you knew which direction a particular arrow would actually move the mount in the global picture.

If anyone has any suggestions or comments about this I would appreciate their response.



Thank you for the file for the SBIG; I will load it up the next time I start up the equipment.


Thank you, this is all very helpful. I do use a gain of 105 for the ASI533, to be just above the drop as you indicate (I could bump it to 110). I would not want to lower it.

On a good night of seeing I can guide for 120 seconds and achieve round stars, so those numbers are all very doable. Of course, the problem with the longer exposures is that you blow out the stars, so I also shoot shorter exposures. I am using Siril as my processing tool (for now), and it allows for star extraction and then recombination after the stretch. I am trying to process the dim objects with the longer exposures and then recombine with stars from a shorter exposure, to see how that works.

I think this tool does give a very good starting point for setting your exposure time. I will have to take some data for my SBIG 8300MM camera to be able to use it with this tool, as I don't see it in the list. I will read back through this thread to see what data you need for this camera.

It still seems to me that the exposures, as you are taking them, contain the information about the target levels and the sky background noise levels (as best as you can extract them from the data). And the current processing programs, like Siril (I don't use PixInsight), or even a photometry program like FITS Liberator, give you access to those values (again, if you can interpret them correctly). With this information you could make a more informed estimate of what exposure to use, and then your calculations would tell you how many subs you need to take to achieve a certain SNR. Even something simple like knowing how many stars are saturated in an image would help, as you could say, "I would like less than 5% of the stars saturated (only the very brightest ones)." That might be something you can get from the FITS Viewer in Kstars; I am not sure.
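For what it's worth, the background level and saturated-pixel fraction described above can be pulled out of an image with a few lines of numpy. This is only a sketch: the frame here is synthesized, and the saturation value and ADU levels are made-up stand-ins for whatever your camera actually produces, not real data from any specific sensor.

```python
import numpy as np

# Hypothetical 16-bit sensor full-well value (adjust for your camera).
SATURATION = 65535

rng = np.random.default_rng(0)
# Synthetic frame: sky background around 1200 ADU plus a couple of
# blown-out star cores pinned at the saturation value.
img = rng.normal(1200, 50, size=(400, 400)).clip(0, SATURATION)
img[100:103, 100:103] = SATURATION  # a saturated star
img[250:252, 300:302] = SATURATION  # another one

# Robust background level and noise (median / MAD, resistant to stars).
background = np.median(img)
mad = np.median(np.abs(img - background))
sky_sigma = 1.4826 * mad  # MAD -> Gaussian sigma equivalent

# Fraction of pixels at full well: a crude proxy for blown-out stars.
sat_fraction = np.mean(img >= SATURATION)

print(f"background ~ {background:.0f} ADU, noise ~ {sky_sigma:.0f} ADU")
print(f"saturated pixels: {sat_fraction:.4%}")
```

Pointing the same statistics at a real FITS frame (e.g. loaded with `astropy.io.fits`) would give the per-session numbers the post is asking for.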

Thanks again for your contribution.


Sorry, I got off on a bit of a rant on the last post and I really did not mean to, I apologize.

When I look up my location on a dark sky map it says that I am at 21.77 SQM, looking vertically with no moon, etc., so I am fortunate to be in a Bortle 3 location. I use your calculator for my ASI533 camera and it gives me very reasonable exposure times (it's dark, so I can go long) and very achievable exposure counts, only tens of subs at the long exposure. But my equipment does not guide well enough for a 300-second exposure. So how should I use your numbers? Should I calculate the total exposure time (exposure x number) and use that as a goal with my shorter exposures? I do not see any way to put in my own exposure time and have the tool give me how many images I should capture. In my actual experience I can see a definite difference in background level between a 120-second and a 180-second exposure (the latter being brighter). So I am inclined to stay with the shorter exposure and just take more images.
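One way to translate the calculator's output into shorter subs is the standard stacked-SNR equation (the same relationship behind Dr. Glover's analysis). The rates below for target signal, sky, and read noise are entirely hypothetical placeholders, so only the trend matters, not the absolute counts:

```python
import math

def subs_needed(target_snr, signal_rate, sky_rate, read_noise, sub_exposure):
    """Number of subs of length t to reach a target stacked SNR, using
    SNR_stack = S*n*t / sqrt(n * ((S + B)*t + RN^2)),
    where S and B are target and sky rates in e-/s/pixel and RN is the
    read noise in e- (dark current ignored for simplicity)."""
    t = sub_exposure
    per_sub_signal = signal_rate * t
    per_sub_var = (signal_rate + sky_rate) * t + read_noise**2
    # Solving the SNR equation for n gives n = SNR^2 * var / (S*t)^2.
    return math.ceil(target_snr**2 * per_sub_var / per_sub_signal**2)

# Hypothetical numbers: faint target 0.2 e-/s, dark sky 0.8 e-/s, RN 1.5 e-.
for t in (120, 180, 300):
    n = subs_needed(25, 0.2, 0.8, 1.5, t)
    print(f"{t:>3}s subs: {n} frames ({n * t / 3600:.1f} h total)")
```

With a low-read-noise CMOS camera, once the subs are sky-limited the required total integration time barely changes between 120 s and 300 s subs, which supports staying at the shorter exposure and simply taking more frames.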

Another simple question: what filter bandwidth should I use for an OSC camera? Is it the full bandwidth covered by all of RGB, or that of each individual color (that is what I would assume)?

I need to watch your video again.



Thank you for your response.

I have to admit that I am an impatient astrophotographer. If the night is clear and the equipment is all working well, I will start with short exposures of 10 seconds (depending on the target), then 20, 30, 60, 120, and even 180 seconds. Usually my guiding starts to break down somewhere between 1 and 3 minutes, so I am limited there by my equipment. What I find is that with longer exposures I can begin to 'see' the object (nebula or galaxy), and so I start to take more of those exposures. But what I sometimes discover later is that the sky noise was too high or the stars are blown out, and so I have a difficult time processing the data, as I have to blend multiple exposures.

So optimizing the exposure time is important, as is knowing how many subs you will need to take to bring the object out of the noise clearly. But I am not inclined to spend many multiple hours on a target. Usually what I can get in one evening is going to have to be good enough. I think more of us fall into that category than those who have permanent observatories and can spend multiple nights on a target.

I am trying to do the best I can in a limited time, and I am not sure that fits very well with Dr. Glover's approach. But I believe his math. The capability of image processing software is in a constant state of flux. I have only just used the StarNet star-subtraction capability in Siril, so that I can stretch a starless image.

I guess what I am saying is that I understand it is complicated by all these factors. I am looking for something that guides me in the moment, with the object in view, that says this may be a way to proceed. A dogmatic approach that says 1,800 ten-second images on target will get you an image with an SNR that exceeds a certain value just does not seem to help. First, that is 5 hours of image collection, and second, if I cannot get those 5 hours, have I just wasted my time?

I am not trying to take pictures for APOD; I am just trying to enjoy the hobby, with results that I am proud of, amazed that I can produce an image of something that far away.

Sorry for the digression. I appreciate what your exposure calculator provides us with, and I will have to spend more time with it.

I had a good night and collected a bunch of data. This is the result of simple processing of a half hour of 60-second exposures (I have more images at longer and shorter exposures, but I had the most at this exposure time).

I will spend some more time with the data to see if I can clean it up better. But I am struggling to decide where the most effort should be applied.

Thanks again


My installation of Kstars 3.6.8 continues to exhibit the same behavior when I try to create a new equipment profile. All the equipment loads and contains all the correct information; however, Ekos never requests that I set up an Optical Train, and if I proceed to the focus module there are no optical trains to select. If I try to edit the optical train there are none, so I select to add one, and then Kstars crashes.

I am attaching a log, but it does not show much; it just stops after the cameras are shown to be online.

File Attachment:

File Name: log_21-18-36crash.txt
File Size: 59 KB

As I mentioned earlier, the workaround is just to modify an old existing equipment profile, and then everything works, but I am concerned that this indicates a bug somewhere in this version (3.6.8).

Has anyone else seen this behavior? I do not want to upgrade to 3.6.9 until what is happening in my instance is understood.


Sorry if this seems a bit naïve, as I think I am trying to make too much of this 'new' capability in Ekos. I have not used the sub-exposure calculation tool yet, as I just upgraded to Kstars 3.6.8. But I have watched Dr. Glover's presentation (in the past) and I understand pretty well how you are going about implementing the tool.

Might there be a more empirical way to establish a good sub-exposure time? (Many suggest just picking 20-30 seconds and being done with it for the usual situations.) But I am thinking along a different approach. The current Siril release uses a connection to "StarNet" to remove stars from an image. Could you take a series of short exposures (1, 5, 10, 20 seconds, so most guided results are reasonable), subtract the stars, calculate the background noise level (subtracting the other sources of noise in the specific camera), and plot a graph of exposure time vs. noise (there are probably many ways to do this)? Would this information help to ground your calculations with some actual data (could the two approaches be made to agree through an adjustable parameter), so that you would not have to rely on what you think the SQM value is for the current session?

I know in the past I have used a method of taking broad sky images with a digital camera pointing straight up and measuring the background noise from the dark areas in the images. I think this was for coming up with sky brightness values to give an estimate of the Bortle value of your site. But I don't remember the specifics anymore.

Anyway, just a suggestion, and a question of whether this would be practical and useful.


One possible addition to Kstars/Ekos would be a process for determining the atmospheric 'seeing' for a night. I know that actually measuring the seeing is a complex process and requires special equipment. But I am wondering, between the focus module and the guide module, whether there isn't some way to determine the limit of the current seeing and its stability. For example, would repeated measurements of the HFRs of stars in the focus module allow you to plot a distribution in time, and from that make some statement about the current state of the seeing (given you have all the information about the optical train)?
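As a rough illustration of the idea, even a handful of HFR samples over time can be summarized into a scatter and a trend. The values and the 0.5 px threshold below are invented for the example, not calibrated to any real seeing metric:

```python
import statistics

# Hypothetical HFR samples (pixels) logged once a minute from the focus module.
hfr = [2.1, 2.3, 2.0, 2.2, 2.8, 3.1, 2.9, 3.3, 3.0, 3.4]

mean_hfr = statistics.mean(hfr)
stdev_hfr = statistics.stdev(hfr)

# Simple stability check: compare the later samples to the earlier ones.
early = statistics.mean(hfr[:5])
late = statistics.mean(hfr[5:])
trend = late - early

print(f"mean HFR {mean_hfr:.2f} px, scatter {stdev_hfr:.2f} px")
if trend > 0.5:  # arbitrary threshold for this sketch
    print("HFR is climbing; seeing may be degrading")
```

A real implementation would need to separate focus drift and wind shake from true seeing changes, but the bookkeeping itself is this simple.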

We are always at the mercy of 'seeing,' and something that helps decide whether it is worth continuing to image or just calling it a night might be useful. Most of the time we just continue on and do not know how bad the night was until we see the final results. The other piece of information showing up in discussions as relevant is the wind data, at high altitude, in the direction of a particular target. The information that is currently available includes our location and the target direction (RA and DEC). If the program could access a weather information site (like Windy) and provide the high-altitude winds in that direction, that might give us some indication of the kind of 'seeing' we might expect for the night.

I am just thinking outside the box here for what might be added to the program that could be useful for the users.


No worries. I am not familiar with the Sky Adventure GTI mount, but that is only because I have never used one. If there are appropriate drivers in Indi (or ASCOM), then it should work. In principle it is an equatorial mount, and it should behave much the same as my Celestron CGEM. The camera specs are what they are, and I think the dedicated camera is still the way to go. I have an Olympus 4/3rds camera that I love for daytime images, but I did not find it satisfactory for astrophotography. I never did ask (or you never said) what it is you are trying to image, but a dedicated camera with a lens and the light mount should be a good mobile setup.

I have an old Tamron telephoto lens for which I bought an adapter for my ZWO camera, intending to use it for wide field. But in the end I bought an AP65 EDQ refractor that performs much better than the Tamron.

I have also spent many hours getting my system working the way I want it to, but then I have plenty of cloudy nights where I live here in eastern NC to do so. I first make sure I can get the system to perform in my garage (where it is warm) and work out all the issues, so I don't get frustrated trying to do that under the night sky. If you have not already, you should check out the Cloudy Nights forum, where there are many discussions on all kinds of astrophotography, from mobile setups to full domed observatories.

Best of luck in your adventure.


Yes, I do occasionally get hangs, but usually not until I have completed the Polar Alignment (PA), as I roll my scope out every night. Once I have completed that, I can just shut down and reboot the Rpi (if Kstars hangs) and not lose my place in imaging; Kstars will come back up in the last configuration. I have had an issue with the latest version (3.6.8) where I cannot create a NEW profile with an optical train, but I worked around this by using an older equipment profile and then modifying the optical train once it loaded. This seems to work, so I have not researched what the problem is.

My CCD camera (SBIG) is much slower at downloading images; it may take up to 2 seconds to download its 6 Mpx image. That is just the nature of a CCD chip, so I have never tried video with it. My ZWO ASI cameras (the ASI120MC and ASI533MC) are CMOS chips and download much faster (a fraction of a second). But they too have a frames-per-second (fps) limit based on the bit depth you are collecting (10-bit or 12-bit) and how large an image you are collecting; the full frame size for your ASI183 is 5496x3672, and at 12-bit its maximum frame rate is listed as 19 fps. If you reduce the bit depth (this is settable in the Indi control panel or in other software such as Firecapture) to 10 bits and you reduce the size of the image by collecting only a Region Of Interest (ROI), such as 640x480, then the frame rate can go up to 180 fps.
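A back-of-the-envelope way to see why bit depth and ROI matter: each frame's payload is roughly width x height x bits, and the USB link caps how many frames can move per second. This sketch assumes a nominal 5 Gbit/s USB 3.0 link (optimistic; real throughput is lower), and note that for small ROIs the sensor readout (e.g. the listed 180 fps), not the link, becomes the binding limit:

```python
# Per-frame payload and a USB 3.0-style bandwidth ceiling on frame rate.
USB3_GBPS = 5.0  # nominal link rate; sustained throughput is lower

def frame_bits(width, height, bit_depth):
    """Raw bits per frame, ignoring packing/padding overhead."""
    return width * height * bit_depth

def fps_bandwidth_cap(width, height, bit_depth, link_gbps=USB3_GBPS):
    """Upper bound on fps from link bandwidth alone."""
    return link_gbps * 1e9 / frame_bits(width, height, bit_depth)

# Full frame at 12-bit (ASI183-like 5496x3672 geometry): roughly 20 fps,
# close to the listed 19 fps, so here the link is the bottleneck.
print(f"full frame cap ~ {fps_bandwidth_cap(5496, 3672, 12):.0f} fps")

# 640x480 ROI at 10-bit: the link would allow far more than 180 fps,
# so the sensor readout is what limits the ROI case.
print(f"ROI cap ~ {fps_bandwidth_cap(640, 480, 10):.0f} fps")
```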

When I am imaging Deep Sky Objects (DSOs) that have low brightness, my exposure times are in the tens of seconds (typically anywhere from 5 sec up to 180 sec, depending on how well the mount is guiding), so the download time becomes negligible compared to the exposure time. Three hours of data collection (at 10-sec exposures) will only net me around 1000 images; this is normal. The images are multiple megabytes in size, and 1000 of them will start to take up disk space, so I make sure my 256 GB SSD has enough free space (usually not a problem for this type of session). I download the images, after the session, to a tower PC for processing.

If I am imaging planets or the Moon, the objects are brighter and the exposures become much shorter (maybe a few milliseconds). The way to capture this data is to use 'live-streaming' from the CMOS camera (I cannot do this with the CCD chip very effectively). In the Indi Control Panel for the ASI camera, you can set up the required ROI for the object and the exposure time, and then tell it to 'live-stream' to the screen. That live stream can then be captured to a file that contains a sequence of all the frames (the SER format). I capture at a frame rate of 60 fps, so it will collect about 10,000 frames in 3 minutes (which is the longest capture run for Jupiter, as the planet rotates fast enough to cause blurring in the final stacked image, unless you are able to use de-rotation in the post-processing software). You can also use the Firecapture software to collect these 'videos'; it has a complex but more accessible interface to the camera for adjusting and optimizing the collection process. These SER video files will be multiple gigabytes in size, and I have, on occasion, run out of disk space on the 256 GB SSD. That will definitely cause issues.
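To estimate whether a capture will fit on disk before starting it: SER stores raw frames, so the file size is roughly frames x width x height x bytes per pixel. The geometries below are illustrative examples (a small ROI and a 3008x3008 ASI533-sized full frame), not recommendations:

```python
def ser_size_gb(width, height, bit_depth, fps, minutes):
    """Approximate SER file size in decimal GB; SER stores raw frames,
    one or two bytes per pixel depending on bit depth, header ignored."""
    bytes_per_px = 2 if bit_depth > 8 else 1
    frames = fps * minutes * 60
    return frames * width * height * bytes_per_px / 1e9

# Small 640x480 ROI, 8-bit, 60 fps for 3 minutes: a few GB.
print(f"{ser_size_gb(640, 480, 8, 60, 3):.1f} GB")

# Full 3008x3008 frame, 16-bit container, 20 fps for 3 minutes:
# tens of GB, which is how a 256 GB SSD fills up fast.
print(f"{ser_size_gb(3008, 3008, 16, 20, 3):.1f} GB")
```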

I have set up the Ubuntu server version 22.04.3 with a lite desktop, but I added a few extensions so I can monitor both CPU and disk activity (and temperature) along the upper task bar. With this I can see whether what I am doing is stressing the RPi4. Usually the CPU will peak as a program starts, but then immediately settle down to low-to-mid-range activity, as most of the time it is doing nothing while I enter the collection parameters. I make sure I have downloaded enough index files to be able to plate-solve with my smaller camera. On the rare occasion that the scope gets lost, I have to do a "blind" solve (not using the current position information), and that has always worked to get the scope synced to its correct pointing location.

There is the rare night (maybe 2 or 3 times out of 100) when nothing seems to work; everything crashes or hangs no matter how many times I reboot. I chalk this up to a power issue. I have a regulated 30-amp, 13.8-volt power supply that runs off our AC. We live out in the country and occasionally have brown-outs or flickering lights, so these things happen.

There is no doubt that Kstars/Ekos has evolved into a complex collection of programs and algorithms as it tries to do more and more. I tend to stick to the next-to-last 'stable' version that has been in use for at least several months, so that there has been some time to work the bugs out. I have three SSDs; they contain Ubuntu 20.04 LTS with Kstars 3.5.3, Ubuntu 20.04.3 LTS with Kstars 3.6.2, and now the latest, Ubuntu 22.04.3 with Kstars 3.6.8. So I always have a fallback to an earlier version that I have successfully used many times, if something comes up that I cannot resolve and it prevents me from having a successful imaging session. Of course, the latest version has many improvements that are helpful, but not always necessary. So if there is something I want to capture (like during the current Jupiter opposition), I make sure I have plenty of options.

Hope this helps in explaining another user's experience. I have been using Kstars for probably the last 5 years, and I am too familiar with its interface and capabilities to want to change.


I can only relate my experience. I am using an Rpi4 (8 GB), also with a USB 3.0 SSD drive to boot from. I am running a server version of Ubuntu 22.04 with a minimal desktop, Kstars 3.6.8, Phd2, and Firecapture v2.8 (the latest ARM64 version). I have an Anker 7-port powered USB 3.0 hub that connects to my CGEM, a ZWO EAF, an ASI120MC finder/guide camera (on a small refractor), and either an ASI533MC or an SBIG 8300M (with an SX filter wheel). I have worked through all the bugs (it does take some time), and I have been successfully using all this equipment for a couple of years. I can only reach 60 fps, using Firecapture, with the ASI533MC, as that ZWO camera does not have a high-speed mode for 8-bit video.

I have the RPi at the scope (I roll my system out of my garage) and connect to it over an ethernet cable using VNC (X11VNC on the RPi). I also attach an Android tablet (using USB tethering) to the RPi so I have a screen at the scope when I do PA. It is certainly not the fastest or most powerful setup to control the equipment, but I have not found it limiting for any form of astrophotography, from long exposure to planetary video. I am using a 256 GB SSD and created a 4 GB swap space to help with memory issues, but I don't know if that is required.

I do have a dedicated power supply (12 volts from AC) that powers all the equipment. I have a 12 V to 5 V buck converter to provide power to the Rpi, while everything else is powered directly from the 12 V source or from the powered USB hub. If I can be of any more help in figuring out why your system is not functioning up to expectations, just let me know. But I can probably only tell you what I do, as I am not a pro at this (I simply Google my problems a lot!!).


Ronald Scotti replied to the topic 'Ekos Planetary Imaging' in the forum. 3 weeks ago

The latest Firecapture comes in an ARM64 version, which I have been able to load on my Rpi4 (I boot over the USB 3.0 port from an SSD, and I run a server version of Ubuntu 22.04, Kstars 3.6.8, along with Firecapture v2.8). I use Kstars to control all the equipment up until I am ready to capture the planetary videos; I then disconnect the main imaging camera, start up Firecapture, and proceed to use it to capture videos. Firecapture does have a complex interface, but it gives you direct access to all the ROI and other functions that make it convenient for this process. (Although I have used the 'streaming' options in the Indi Control Panel for the camera to do the same.) I find that the fps is limited by the camera (ASI533MC), which does not have a high-speed mode for 8-bit capture, so I can only reach around 60 fps. But this has been sufficient to produce very decent images of Jupiter and its moons (using my C9.25).

My feeling is that while incorporating all the functionality of Firecapture into Kstars would be amazing (though it would add even more complexity to the code), it is really only necessary that the two programs work hand in hand seamlessly. If I did not have to disconnect the main camera from Kstars (I have not tried not doing this), or if Kstars could just make a call to open a Firecapture page, that would really be a great step forward in adding planetary imaging to Kstars.

That is just my opinion.


What are you running Ubuntu 22.04.3 on? Is there a reason you are trying to 'build' Kstars rather than just installing it from the PPA? I have recently upgraded to Ubuntu 22.04.3 on my Rpi4, and I have loaded Kstars 3.6.8 and Firecapture (which is now available for the ARM processor of the RPi) directly, and all seem to be working, with the usual startup hiccups.


My experience under the stars tonight was very satisfying. Using a modified older profile, the equipment connected and behaved as expected. I performed a PA, then moved to a star field to exercise the focus routine. The focus routine worked well (to the best of my novice ability with it). I was able to connect to Phd2, perform a calibration, and enable guiding (a total of 1.12" until the clouds and wind picked up). I am attaching a log of this session. I monitored CPU activity and temperature, and both were well within acceptable ranges.

File Attachment:

File Name: log_18-51-08good.txt
File Size: 755 KB