Mirror lockup has to be supported by the camera to function. You may be able to get around that by installing Magic Lantern in the camera to add that feature and many other useful ones.
That being said, it is debatable whether there is much benefit in using mirror lockup for long exposures: any shutter vibration is usually overshadowed by seeing conditions and tracking accuracy, since a telescope rig should be stable and have enough inertial mass to resist the vibration.
If you were overextended on a lightweight tripod, that might be a different story. Otherwise, don't sweat it too much and use the camera you have.
Thanks for the answer, but my question is not how to configure mirror lockup within INDI (I already set it and everything works fine), but whether mirror lockup is a must in order to take a photo.
As written before, the Canon 1100D does not have the mirror lockup function, so I'm not able to take photos with this camera...
I briefly experimented with the code. Serial stacking works. Running on my standard desktop, the cycle time using existing files is about 1.4 seconds per 2328x1760 monochrome image, including applying a master dark and flat each time.
I also had a brief look at DSS Live. That looks nice, but somehow people only seem to use it in combination with AstroToaster. What's wrong with DSS Live on its own?
I assume the main benefit is that this solution can be used on Linux. The other programs look pretty established on Windows.
What to do with the input files? Maybe delete them? Renaming is possible, but it will produce a huge amount of data, since the setup is most likely operated with short exposure times. If I work this concept out further, I don't want to add more options.
Flowchart for serial stacking of images A, B, C, D, ...:

A + B → plot result 2
result 2 + C → plot result 3
result 3 + D → plot result 4
result 4 + E → plot result 5

The images in the above process are corrected with the master dark and flat (the flats themselves dark-corrected).
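The serial-stacking flow above can be sketched in a few lines. This is only an illustrative sketch, assuming frames arrive as NumPy arrays; the function names (`calibrate`, `serial_stack`) are my own, not from any existing program:

```python
# Sketch of serial stacking: each new frame is dark/flat corrected and
# averaged into the running stack, which is then shown as an intermediate result.
import numpy as np

def calibrate(frame, master_dark, master_flat):
    """Apply the master dark and a normalised master flat to a raw frame."""
    flat = master_flat / np.mean(master_flat)   # normalise the flat to mean 1
    return (frame - master_dark) / flat

def serial_stack(frames, master_dark, master_flat):
    """Yield the running average after each calibrated frame is added."""
    total = None
    for n, frame in enumerate(frames, start=1):
        cal = calibrate(frame.astype(np.float64), master_dark, master_flat)
        total = cal if total is None else total + cal
        yield total / n     # intermediate result 2, 3, 4, ... to plot

# Tiny synthetic example with 2x2 frames:
dark = np.zeros((2, 2))
flat = np.ones((2, 2))
frames = [np.full((2, 2), v) for v in (10.0, 20.0, 30.0)]
results = list(serial_stack(frames, dark, flat))
# results[-1] is the average of all three frames: every pixel equals 20.0
```

Averaging (rather than summing) keeps the displayed intensity range stable as frames accumulate, which is convenient for live viewing.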
The viewer maintains the current view position and zoom factor.
I can confirm this. I have the same lines on my Atik 428EX Mono. I've tested it connected directly to my Windows 10 machine with Artemis and Nebulosity and stretched the images; no horizontal lines are visible there.
To use mirror lockup, enable it in the camera's custom functions menu AND then set the mirror lockup delay in the INDI GPhoto driver: enter the value in seconds > Set > Save configuration.
I noticed that if I do not set mirror lockup on my Canon EOS 50D, after the first shot all other shots are black frames.
So my question is whether I must always set mirror lockup on Canon cameras, or whether there is another option to set in Ekos.
Some days ago I tried to use a Canon 1100D (with a Baader filter), but this camera does not have the mirror lockup function and I'm not able to take shots, for the previously mentioned reason (after the first shot, all others are black frames).
Thanks in advance.
No problem with live stacking; that is certainly a benefit. But making the INDI FITS viewer more reliable should have a much higher priority. It still crashes too much, taking the whole observation session down with it. Just hitting a button can already make it happen
(last time it was the Apply button after selecting the log histogram). Please detach it from KStars before adding new functionality.
... or if the telescope slews to a new object?
Looking at some "live stacking" videos, I see the following requirements for a live stacking program for deep sky:
1) Alignment of unguided images with an exposure time of a few seconds. (Nobody is guiding)
2) Showing intermediate results
This means you have to stack differently. Assuming the images are A, B, C, D, then ......
Simple serial stacking:
The only question is when to restart serial stacking. After alignment fails a few times?
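One possible restart rule, offered only as a suggestion (the threshold and names are my own, not from the thread): restart the stack after a run of consecutive alignment failures, which is what a slew to a new object would look like to the stacker.

```python
# Sketch: restart serial stacking after N consecutive alignment failures.
MAX_FAILURES = 3  # assumed threshold, tune to taste

def stack_controller(align_results, max_failures=MAX_FAILURES):
    """Given per-frame alignment outcomes (True = aligned OK),
    return the frame indices at which the stack was restarted."""
    restarts = []
    failures = 0
    for i, ok in enumerate(align_results):
        if ok:
            failures = 0                 # any success resets the counter
        else:
            failures += 1
            if failures >= max_failures:
                restarts.append(i)       # restart the stack at this frame
                failures = 0
    return restarts

# Frames 3-5 fail to align (e.g. during a slew), so the stack restarts once:
print(stack_controller([True, True, True, False, False, False, True]))  # [5]
```

A single failed alignment (a gust of wind, a passing cloud) would then just skip that frame, while a sustained failure triggers a fresh stack.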
han.k wrote: What is the benefit of live stacking like DSS, Deep Sky Stacker, is doing? Is it to save some time? My images are often stacks of more than one night, to get a very good signal-to-noise level and to show the faintest details.
IMO the benefit and main purpose of live stacking is making deep sky objects accessible to visitors. What used to be the view of M42 in an eyepiece (with people standing in line waiting for their turn) now becomes an immediately visible M42 on a screen, readily visible to everybody. For the astrophotographer it does not offer any advantages over normal stacking, IMO.
Get a backtrace from gdb by typing 'bt' at the prompt after the crash; that should give more detail.
On my standard desktop, ASTAP can stack 10 monochrome images of 2328 x 1760 pixels in about 11 seconds using star alignment. Without alignment it will be faster. Since there is interest, I will do some testing with live unaligned stacking using the most recent images from disk. The only parameter would be the number of images, or maybe better a time limit. If the performance is satisfactory, executables could be provided for all major operating systems.
What live stacking speed would be acceptable?
A lot of EAA users, of which I am one, also use very fast OTAs, sensitive CMOS cameras (but a DSLR works as well!) or add a Hyperstar, which enables very short (<10 s) exposures that are stacked and produce good detail even after 3 or 4 images, and not just on bright objects like M42.
It does not need to be fully integrated into Ekos; it could be an application that runs alongside Ekos, e.g. one simply set up to monitor a folder of exposures, AstroToaster style (with Deep Sky Stacker). Perhaps, as PC's CCDCIEL works well with INDI, you could persuade PC to produce a stand-alone application (i.e. not part of CCDCIEL) using just his "stacking" code. NOTE: AstroToaster allows real-time colour (color) adjustment etc.
Bottom line: would an RPi 3/4 cope with the stacking without hindering other processes?
Short exposure stacked images examples (especially see HiloDon from Hawaii) stargazerslounge.com/topic/276321-first-...ith-hyperstar-on-c6/
Currently, INDI camera drivers transfer their images in FITS format. The only exception is the DSLR driver, which can also send images in their native captured format.
I was thinking of expanding this into a standard property so other drivers can make use of it. For this, I believe we need two standard properties.
CCD_CAPTURE_FORMAT selects which file format the image should be captured in natively on the device. For DSLR cameras, this can be JPG/CR2/...etc. (whatever format the camera natively supports). For CCD/CMOS cameras, this can include RAW-8/RAW-16/RGB/etc.
CCD_TRANSFER_FORMAT simply designates the format in which INDI clients receive the captured blob. These can be:
1. RAW (as is, native)
2. FITS (default)
3. XISF (future implementation perhaps)
This is already realized in the DSLR driver. That is, you can capture a JPEG image and transfer it as FITS or as RAW, or capture a RAW CR2 and transfer it as FITS, etc. So what is suggested here is to standardize this further and make it available for other drivers to implement in a standard, consistent way.
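To illustrate how the two properties are independent choices, here is a small sketch. This is not the INDI API, just a model of the proposed semantics; the property values follow the post, while the `blob_format` helper is my own invention:

```python
# Illustrative model of the proposed CCD_CAPTURE_FORMAT / CCD_TRANSFER_FORMAT
# split: capture format is what the device records natively, transfer format
# is what the client receives.
CAPTURE_FORMATS = {"JPG", "CR2", "RAW-8", "RAW-16", "RGB"}
TRANSFER_FORMATS = {"RAW", "FITS", "XISF"}

def blob_format(capture, transfer):
    """Return the format label a client would see on the received blob."""
    if capture not in CAPTURE_FORMATS or transfer not in TRANSFER_FORMATS:
        raise ValueError("unknown format")
    if transfer == "RAW":        # pass the native capture through unchanged
        return capture
    return transfer              # converted to FITS (default) or XISF

# A DSLR capturing CR2 but transferring as FITS (the current DSLR behaviour):
print(blob_format("CR2", "FITS"))  # FITS
# The same capture transferred natively, as-is:
print(blob_format("CR2", "RAW"))   # CR2
```

The point of the split is that any capture format can be combined with any transfer format, so conversion happens in one place in the driver rather than per-camera.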
Yes, you are correct, the driver works perfectly for sx cameras and I'm using it too, but I think it should work for his cam too.
I wrote a quick-start manual (if it can be called that) for you, so you can do some testing yourself:
sudo apt install git build-essential cmake libindi-dev libnova-dev libgsl-dev libusb-1.0-0-dev libcfitsio-dev
git clone --depth=1 https://github.com/indilib/indi-3rdparty.git   (directory indi-3rdparty will be created)
cd indi-3rdparty
cd indi-sx
mkdir build
cd build
cmake ..   (if some libs are missing, it tells you what's missing)
make   (if no errors are reported, go on to the next command)
indiserver ./indi_sx_ccd   (have your cam connected and run this to test the new driver)
I hope I didn't miss anything..
If the purpose is live viewing and preventing saturation, there is no need for alignment. If you could expose a single image of 200 seconds (with good guiding), the equivalent is simply adding 40 x 5-second exposures together without alignment. If done in memory, this is simple, with low CPU load and memory requirements. The only penalty is more readout noise.
Theoretically, for live viewing it would be nice to apply FIFO, first in first out, but that would require adding 40 image arrays together for every update. That is probably too much, but maybe a stack of the last 10 exposures is feasible with the available processing capacity. Applying a dark would then be the 11th action. If you work backwards, you could try to add as many historical images as possible until a watchdog says enough, but that would cost a lot of CPU load and memory.
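As a sketch of the FIFO idea (class and parameter names are my own, and it assumes frames arrive as NumPy arrays): keeping a running sum alongside a ring buffer of the last N frames makes each update one addition and one subtraction, rather than re-adding all N arrays.

```python
# FIFO live stack: running sum plus a ring buffer of the last N frames.
from collections import deque
import numpy as np

class RollingStack:
    def __init__(self, depth=10, master_dark=None):
        self.depth = depth                # e.g. last 10 exposures
        self.master_dark = master_dark
        self.buffer = deque()             # frames currently in the stack
        self.total = None                 # running sum of buffered frames

    def add(self, frame):
        """Add the newest frame, drop the oldest, return the current stack."""
        frame = frame.astype(np.float64)
        self.total = frame.copy() if self.total is None else self.total + frame
        self.buffer.append(frame)
        if len(self.buffer) > self.depth:
            self.total -= self.buffer.popleft()   # first in, first out
        result = self.total / len(self.buffer)
        if self.master_dark is not None:          # the "11th action"
            result = result - self.master_dark
        return result

stack = RollingStack(depth=2)
for v in (10.0, 20.0, 40.0):
    latest = stack.add(np.full((2, 2), v))
# With depth 2, the last result averages only the last two frames: 30.0
```

One caveat of the running-sum trick: floating-point error accumulates slowly over many add/subtract cycles, so a long-running session might periodically rebuild the sum from the buffer.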
In my opinion, Object Pascal is an excellent tool for doing complicated stuff fast.
I merged the PR, can you check now?
The ASCOM driver is for Windows and works on a Windows system, not an RPi. It's just another way to check whether using the HC port affects the internal communication.
I can't do any more until November.