
Recommendations for Speckle CCD Driver?

  • Posts: 13
  • Thank you received: 0
Hi team, I have a need for a CCD driver which can capture a bunch of short exposures quickly -- about 1000 exposures of 10-60ms each. All of them should be put into a single rather large FITS file (often called a "cube") stored on the machine running the client. The application is speckle interferometry, in which we'll be processing the thousand or so images together to mitigate the effects of seeing. We'll probably need drivers for several types of cameras eventually, but getting one running will be a great first step. Also, the long-term plan is to run the process in a fully automated observatory, so interactive planetary imaging programs are great for testing but we need to move beyond them in the long run.
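For reference, the "cube" here is just a FITS primary HDU whose third axis indexes the frames. Below is a minimal sketch of how such a header is laid out, assuming 8-bit data; in practice a library like CFITSIO would write this, and all names here are illustrative:

```cpp
#include <string>

// Build one 80-character FITS header card from a keyword and value.
std::string fitsCard(const std::string& key, const std::string& value) {
    std::string card = key;
    card.resize(8, ' ');                         // keyword field is 8 bytes
    if (!value.empty()) {
        std::string v = value;
        if (v.size() < 20)
            v.insert(0, 20 - v.size(), ' ');     // right-justify in 20 bytes
        card += "= " + v;
    }
    card.resize(80, ' ');                        // every card is exactly 80 bytes
    return card;
}

// Primary header for an 8-bit cube of `nframes` frames of width x height,
// padded to a multiple of 2880 bytes as the FITS standard requires.
std::string cubeHeader(long width, long height, long nframes) {
    std::string h;
    h += fitsCard("SIMPLE", "T");
    h += fitsCard("BITPIX", "8");
    h += fitsCard("NAXIS", "3");
    h += fitsCard("NAXIS1", std::to_string(width));
    h += fitsCard("NAXIS2", std::to_string(height));
    h += fitsCard("NAXIS3", std::to_string(nframes)); // the "cube" axis
    h += fitsCard("END", "");
    h.resize(((h.size() + 2879) / 2880) * 2880, ' ');
    return h;
}
```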

I'm just learning to work with INDI driver code. I believe the following to be true but am not entirely sure:
  • The driver should capture multiple images in a sequence, rather than having the client capture many individual images, because the time required to initiate each image capture is too long (having a delay between images on the order of one second would be a serious problem when we're capturing a thousand or so).
  • It should be possible to write a subclass of a camera driver which adds rapid multiple image capture and puts the resulting image data into a large memory buffer for transmission as a single BLOB.
  • This will probably require the creation of a custom client, but that's OK because we'll want one anyway.
  • Adding contextual data to the FITS header (time, GPS, celestial coordinates) is most easily done in the client, which will be communicating with a GPS receiver, the telescope mount, and probably a plate solver.

Can anyone correct these impressions or offer suggestions about the best way to proceed?

Thanks in advance
8 years 4 months ago #6172

Nice project! In INDI, the driver assumes ALL authority, so you can do whatever you like with it.

1. True
2. If you're talking about INDI::CCD, I don't recommend you use that, since it is tailored to CCDs in the amateur astronomy market. You'd probably want to start a driver based on INDI::DefaultDevice and add properties as needed. Regarding the BLOB, it all depends on what you are going to do with it. Is the information in the BLOB going to be used to take some time-sensitive action, or is it just for storage? Since all BLOBs are base64-encoded when sent and then decoded by the client, it is not the most efficient transport.
3. You can always use _any_ generic client to control your driver. However, you can develop a custom client to process certain properties or react to events in a particular way.
4. It can be done in either the driver or the client. If it is done in the driver, you need to "snoop" other drivers, like the mount, to get RA/DEC. But if you are planning to write a custom client anyway, I'd do it all there, since that is easier.
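To make point 2 concrete, here is a small sketch (the names are illustrative, not INDI API) of packing the whole sequence into one contiguous buffer for a single BLOB, plus the base64 size arithmetic behind the efficiency warning:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Append one frame's pixels onto a single contiguous buffer, so the whole
// sequence can be sent to the client as one BLOB instead of ~1000 small ones.
void appendFrame(std::vector<uint8_t>& cube, const uint8_t* pixels, size_t n) {
    cube.insert(cube.end(), pixels, pixels + n);
}

// Size of a payload after base64 encoding: 4 output bytes per 3 input bytes,
// i.e. the ~33% transport inflation mentioned above.
size_t base64Size(size_t rawBytes) {
    return 4 * ((rawBytes + 2) / 3);
}
```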
The following user(s) said Thank You: Adinghi
8 years 4 months ago #6174

  • Posts: 193
  • Thank you received: 46
One of the issues you will run into with the process as it's typically done in an INDI driver is that these drivers are set up for typical long-exposure astro work. With most CCDs there are a few steps done to 'prepare' the chip for an exposure. It's normal to flush the CCD right before starting an exposure to ensure a 'clean start'. With some cameras there is an RBI pre-flash: the infrared LED is turned on and the chip exposed to infrared long enough to saturate, then it is read out twice to 'flush' the CCD before the shutter opens and the actual exposure starts.

The other thing is, typical CCDs are read out very slowly to reduce noise. If you have a full-frame CCD, i.e. not an interline, how short your exposure can be is limited by the speed of the shutter. With an interline CCD you can go as short as you want, depending on the in-camera firmware, but frame readout time will be substantial if you are reading out the whole frame. To use my SBIG ST-10 as an example, readout times are on the order of 10 seconds. With my SXV-H9, readout time is much shorter, but still on the order of seconds, not microseconds. The H9 is an interline CCD, so the extremely short exposure is possible; it's just not possible to stack exposures instantly back to back, because the CCD is not read out that quickly. I can get very short exposures with the H9, but only at a very low cadence. Cadence could be increased dramatically by reading out only a small portion of the frame.

I played with this concept a bit a few years ago. In my case I was trying to quantify seeing, using an SXV-H9 mounted on a C8. I had it pointed at the Alcor/Mizar pair, took very short exposures, on the order of 10 ms, then saved them all away. I used them later to fuss with some software to make measurements, and to simulate how a tip/tilt could be used to improve images by 'chasing the seeing' with the tip/tilt at its maximum deflection rates. My conclusion, though, was that it was a pointless exercise, for a number of reasons.

The deal breaker: seeing was affecting the two stars quite differently at any given instant, so if I got a perfect match on Alcor, the Mizar image would balloon up in FWHM. I tested this by stacking all the frames using one star as the reference point and looking at what happened to the other. My images were not back to back -- they were short exposures at a little over 1-second intervals due to camera readout speeds -- but the data from that run was conclusive and answered my question. Seeing is NOT consistent across the frame on short intervals, so bumping a tip/tilt can dramatically improve things in the near vicinity of the guide star, but at the expense of resolution on the rest of the frame.

The other deal breaker: no camera I could find at the time was capable of delivering high-cadence frames at the sensitivity required. Unless there is a magnitude 2 or 3 star to use as the guide star, you aren't going to get enough data in millisecond exposures to reliably drive a tip/tilt with cameras available to us today; there just isn't enough data available.

To do what you are considering, you will have to quantify exactly what you want to accomplish in terms of frame duration and frame cadence. Once you have a threshold set for the minimum cadence on successive frames, the trick is to find a camera that can deliver data at that cadence. Most astro CCDs will not be able to do it. Once you have selected a camera that can achieve the cadence you want, a driver has to be written for it. I doubt any of the current camera drivers can achieve a high cadence, and many of them are for cameras that are physically not capable of it.
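The cadence arithmetic here is worth writing down. A tiny sketch with illustrative numbers: at a 10-second readout, a 1000-frame run takes hours, while a camera with ~90 ms readout sustains roughly 10 fps:

```cpp
// Frames per second the camera can sustain: a new frame cannot start until
// the previous exposure AND readout have both completed.
double cadenceHz(double exposureSec, double readoutSec) {
    return 1.0 / (exposureSec + readoutSec);
}

// Wall-clock time to collect a full run of frames at that cadence.
double runSeconds(int frames, double exposureSec, double readoutSec) {
    return frames * (exposureSec + readoutSec);
}
```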

Off the top of my head, I'm not sure where to look for a suitable camera for this project. I wonder if the gear used these days for high-resolution Jupiter shots would be appropriate -- i.e., is it sensitive enough for your application? That really depends on how bright the stars are that you are targeting with this approach. If they are in the magnitude range of 0 to 5, then I think it's all doable, but if you are talking about going after stars in the mag 10+ range, that's a whole different kettle of fish. I know in my case, using an 8-inch f/10 instrument, those dim stars don't even start to resolve well until exposure time reaches into the multiple-seconds range. I haven't had a chance to do this kind of testing with our newer/larger telescope (12 inch f/5.6 with reducer in).
The following user(s) said Thank You: Adinghi
8 years 4 months ago #6196

  • Posts: 13
  • Thank you received: 0
Many thanks, guys! You've given me some good direction as I get going.

I'm currently working with a ZWO ASI224MC which captures images from its CMOS chip quite quickly when running oaCapture (at least 10 frames per second, sufficient for us). The speckle interferometry guys have been using pricey EMCCD cameras for years, but these new CMOS cams are getting good enough that we can get usable data with only a magnitude or two more light, and tests done by others have shown that it's possible to use consumer cams for at least part of the science mission. It helps that each of the frames in our ~1000-frame sets doesn't need to be a particularly good image (they get averaged in the frequency domain), so typical concerns about sensor flushing, dark noise, etc. are mostly not significant to us. I don't have the numbers on the exact magnitudes which are being reached by the folks doing the testing (I'm just an engineer making a system work... :unsure: ) but I think they're getting close to magnitude 10 with CMOS cameras and 10-14" scopes. Often individual images are just grainy splatters of photons.

Time for a couple of weeks learning about drivers...or maybe a bit longer? There's a lotta stuff in there.
8 years 4 months ago #6212

  • Posts: 193
  • Thank you received: 46

There _is_ a lot of stuff in there, BUT, I tried to really simplify that process for folks a few years ago when I wrote the abstraction layers for various devices.

If you want to experiment a bit, and already have one of your many-frame FITS files handy to work with, there is a very easy path to follow. Start by deriving a new camera from the generic INDI CCD class, which fleshes out all of the INDI overhead for you. Next, plug in a few pieces that take data from your pre-existing large file with lots of frames and feed them back into the framework as if they were coming from a real camera. This exercise will teach you all about the INDI framework and how it moves data between drivers and clients.

When you have that part working, and understood, it becomes an almost trivial exercise to substitute function calls for real hardware in place of the function calls that grab from your existing data files.
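The file-playback pattern can be sketched without libindi at all (in the real thing these would be methods on an INDI::CCD subclass; all names here are illustrative): write the capture loop against a frame-source interface, feed it from an existing file first, and swap in hardware later.

```cpp
#include <cstdint>
#include <istream>
#include <sstream>  // std::istringstream can stand in for a real file in tests
#include <vector>

// Generic source of fixed-size frames. Write the capture loop against this
// interface; later, substitute a subclass that talks to real hardware.
struct FrameSource {
    virtual ~FrameSource() = default;
    virtual bool nextFrame(std::vector<uint8_t>& out) = 0;
};

// Replays raw 8-bit frames of known size from a stream (e.g. a multi-frame
// data file with its header already skipped). Stands in for a real camera.
struct FileFrameSource : FrameSource {
    std::istream& in;
    size_t frameBytes;
    FileFrameSource(std::istream& s, size_t n) : in(s), frameBytes(n) {}
    bool nextFrame(std::vector<uint8_t>& out) override {
        out.resize(frameBytes);
        in.read(reinterpret_cast<char*>(out.data()),
                static_cast<std::streamsize>(frameBytes));
        return static_cast<size_t>(in.gcount()) == frameBytes;
    }
};
```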

Are you working with 8-bit or 16-bit data when grabbing the raw data for the speckle exercise? If it's 8-bit data, it may fit better in the video streaming code than in the CCD code, which is all based on the assumption of one-shot cameras.
8 years 4 months ago #6230

  • Posts: 13
  • Thank you received: 0
Whoops, if I gave the impression of indicating excessive or inadequately organized complexity, I apologize; that wasn't my intent. I just meant to indicate an awareness of the magnitude of the task at hand. Your roadmap for development sounds like a great idea, and I'm stoked to try it out.

We'll probably be working with 8 bit monochrome data to begin with, but it would be nice to have the capability of using 16 bit mono data or maybe even color in the future. The color might be helpful in getting very rough spectral types for imaged stars, to be refined by the spectroscopy folks later. I'll be quite happy to get the most basic images transferred for now, but of course designing in flexibility would be best for the long run.

Thanks again for the help.
8 years 4 months ago #6242

  • Posts: 193
  • Thank you received: 46
When you are working with color on 'intelligent' hardware, it's exactly the same as working with mono: applying whatever filter matrix they glued onto the chip is an after-the-fact bit of math. For your application, though, if you are dealing with a camera that mangles the color information into RGB triplets before it hands the data back to you, it has already mangled the data too far to be of any value for the type of analysis you look to be doing.

But now I'm getting rather curious about your project. I've got a 12-inch RC which will soon (February/March timeframe) be housed in a roll-off, and I'm kind of on the lookout for a real 'meaty' project to tackle with it.
8 years 4 months ago #6245

  • Posts: 13
  • Thank you received: 0
Heeeey, a 12-inch RC in a roll-off and running INDI? Sounds great. I'd be looking into something more than an 8" SCT, except I live really close to a foggy ocean...

As for the project, I'm doing hardware/software development with a group of astronomers who span the range between serious amateurs, college students at various levels, and professionals. We're trying to get a pipeline going for measurements of double stars -- lots of them, hopefully dozens and eventually hundreds per night, with numerous 'scopes participating. The idea is to determine orbits astrometrically through speckle interferometry and work with the spectroscopy folks who can measure radial velocities. Combining this with distance data from Hipparcos, Gaia, etc. we can hopefully get a large number (the goal is thousands; we'll see) of stellar mass measurements which should enable the astrophysicists to considerably refine models of stellar evolution. Useful contributing data can come from scopes as small as about 10" to the largest whose boards are willing to allocate some time to us. It's not easy getting time on 4 meter plus scopes, but we've had more luck with smaller ones.

In the short term, I'm hoping to put together a self-contained speckle imaging rig which can be taken to various observatories where we can get guest time on 1-2.5m scopes. This gizmo will produce FITS cubes with lots of short-exposure images and the previously mentioned context data. Since it needs an acquisition camera (there seem to be lots of larger 'scopes which need a little help with pointing) we'll probably rely on plate solutions for acquisition and fine pointing, then switch to the science camera with a Barlow to get us to around f/30 - f/50. Simple, right? :whistle: Anyway, the group is always open to new participants and ideas. We haven't even got a secret handshake.
8 years 4 months ago #6268

  • Posts: 66
  • Thank you received: 2
All that sounds soooo interesting to me!! For at least a couple of good reasons:
First, I'm part of the Gaia data processing consortium (DPAC), with a specific responsibility (Solar System data processing), within an international collaboration of astronomers that also includes double-star processing! So, no surprise that I'm attuned to Gaia-related science exploitation... ;-)
Then, getting closer to the technical issues: all that is described here would be very well suited for automated observations of stellar occultations by asteroids, another field that will undergo a revolution due to Gaia! The approach that I was thinking about is very, very similar: plate-solve a CCD image of the field, fine point on the target source, then use the "science" camera (typically a CMOS or CCD based sensitive/rapid camera) to perform the science acquisition (typ. speed 10 frames/s).
The only difference in my case is that we need accurate time tagging, so time information should in general be managed at the level of the server (for example, I have a ~100€ Arduino-based GPS time-box that takes a shutter signal as input and then sends the corresponding timing over serial USB, providing 1 ms accuracy).
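The client side of such a time-box boils down to parsing whatever line the box emits per shutter edge. A sketch, with an entirely invented line format (a real box defines its own protocol):

```cpp
#include <cstdint>
#include <cstdio>
#include <string>

// Parse one line from a hypothetical GPS time-box, e.g. "T,1461110400,042"
// meaning: shutter edge at UNIX second 1461110400 plus 42 ms. The format is
// invented for illustration only.
bool parseTimeTag(const std::string& line, int64_t& epochSec, int& millis) {
    long long s = 0;
    int ms = 0;
    if (std::sscanf(line.c_str(), "T,%lld,%d", &s, &ms) != 2)
        return false;
    if (ms < 0 || ms > 999)   // millisecond field must be 0..999
        return false;
    epochSec = s;
    millis = ms;
    return true;
}
```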

The problem: I am also relatively lost when it comes to driver programming in INDI, unfortunately -- not much time to devote to learning. But we could team up; I'm available to help when I can, for testing at least (I have an INDI-driven C14)!!

Best regards
Paolo
8 years 4 months ago #6282
