This shows one of the problems with having a conversation by typing into web browsers: sometimes a point that feels clear because 'it's obvious to me' doesn't come across on the other end the way it was intended. The result is misunderstanding, often aggravated by folks typing in a hurry.
This is the point I was originally trying to get across. With the drivers that include vendor-provided binaries, it's very difficult, and beyond the scope of most folks involved in this project, to create a test jig that includes the vendor-provided binary code and introduces the test points _under_ that layer. For the most part, in those cases it's done by hooking up the hardware and seeing what it does. But all of the layers above that are prime candidates for testing setups. This was why I originally started writing device simulators when I first tackled updates to the indi project. At least with the simulators, we have what should be a 'known good' piece of code pretending to react like a physical hardware device, which allows for end-to-end testing of the system from the client through the indi server and down through the base classes, excluding only the various hardware-specific layers. For example, using a camera simulator with a correctly installed gsc catalog, one can test code in a client application, and that code will receive data that includes correct star fields, including mount periodic error if so configured, etc. In essence, the simulator devices _are_ the unit test jigs for all of the layers above them. Maybe not yet fleshed out as far as you are suggesting, but they are the basic framework in this case.
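Just to make the end-to-end idea concrete, here's a rough sketch of the simplest possible client-side check, assuming a local indiserver is already running something like 'indiserver indi_simulator_ccd' on the standard port 7624: connect, send getProperties, and dump whatever property definitions come back. A real harness would parse the XML and assert on specific properties instead of just printing them.

```cpp
// Minimal end-to-end smoke test sketch: talk to a running indiserver over
// the standard INDI port and request all device properties.  Assumes an
// indiserver with a simulator driver is already running locally.
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdio>
#include <cstring>

int main()
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_port   = htons(7624);              // default indiserver port
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);

    if (connect(fd, (sockaddr *)&addr, sizeof(addr)) != 0)
    {
        perror("connect");
        return 1;
    }

    // Standard INDI client greeting: ask for all device properties.
    const char *req = "<getProperties version=\"1.7\"/>\n";
    write(fd, req, strlen(req));

    // Read for a few seconds and print the XML the server sends back;
    // a real test would parse this and check for the expected properties.
    char buf[4096];
    ssize_t n;
    alarm(5);   // crude timeout so the example terminates
    while ((n = read(fd, buf, sizeof(buf) - 1)) > 0)
    {
        buf[n] = '\0';
        fputs(buf, stdout);
    }

    close(fd);
    return 0;
}
```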
To do a proper job of setting up a test environment for indi, yes, it's feasible to introduce unit testing concepts for _most_ of the code, beginning with the client connection and following the data paths through to the devices. But some of the devices themselves are only suitable for black-box testing, where inputs are provided and one looks at the outputs to see what happens, because we don't have access to the internals of those components. Using the cameras above as examples, I guess I _could_ spend an inordinate amount of time on a libusb hook set to test inputs and outputs for one of those cameras, but I have neither the time nor the inclination to take on a project of that magnitude; I'd rather spend what limited time I have for astro software on client analysis stuff.
As mentioned earlier in the thread, I have another new driver set on the go here, this one for a commercially produced dome that is new to the market, manufactured by a company not far from here. I have a unique opportunity with this one in that not only do I control the code in the driver, I also control the firmware in the dome, and it WAS written in the manner you describe: test jig on the bench, code written specifically to test each and every interaction with the hardware, and a final all-up integration for the firmware that includes individual functional tests that can be triggered from the host. But I'll be the first to admit the test set is not as rigorous as the ones I used to write when I was working in a DO-178B style of development environment; there is no need for that level of rigor in this case. After we had the hardware all working correctly, the next step was to build environment-specific drivers, one of which is an ascom driver for windows, the other an indi driver. The other unique aspect of this project is that all of the drivers AND the firmware for the device are destined to sit on a github account, so it'll all be open source. This is a vendor that 'gets it' in this respect. They are very clear on one aspect of this project: the company expertise is in plastics and mechanicals, they don't view the software as 'secret sauce', and they see tremendous value in having an implementation that hobby folks can tinker with. I ended up with this project because I wanted a couple of domes, and when I contacted them regarding the state of development for automation back in January, I was asked if I'd be interested in doing a fully open solution for them.
In the process of doing this project, one thing I came across is the ascom conform test suite. Far from perfect, it at least attempts some form of systematic testing. It appears to be a client program that exercises a device, providing inputs and checking for expected outputs; essentially a black-box form of testing. I think indi could benefit greatly from something along those lines as a starting point for automated tests. What I would envision for that is something along this line:
a- Client program starts, then spawns an indi server with a pipe for driver startup.
b- Walk through the entire list of drivers, telling the server to load, then unload, each one. This will catch any number of errors, even without physical hardware in place.
c- Test the indi server components by handing the server a list of simulator devices to load, then connect to each of them.
d- Walk the simulators through a full set of tests on the 'required' entry points.
That would be a starting point that gives a full reference set, with the side effect of stressing the indi layers in the process. Once that starting point is in place, the environment can be expanded to start testing individual drivers for function. Some of the drivers can only be fully tested with hardware in place, but others could be outfitted with test points. Again using the dome example, I'll pick on Maxdome in this case. It's a proprietary binary protocol, partially reverse engineered. Yes, it's possible to write an underlying test jig for it, but there are a very limited number of folks with that dome anyway, and since the person who wrote the driver has one to test against physical hardware, and is today probably the _only_ person using that driver, I'm thinking the time and effort spent on a test jig for it is better spent on properly outfitting the basic indilib environment with better testing functionality. It's simply a case of allocating limited resources to the spots where they give the best bang for the effort.
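To make steps a and b above concrete, here's a rough sketch of what the driver load/unload walk might look like, using the indiserver fifo startup mode. The driver names and sleep timings are just placeholders; a real harness would read the full driver list and watch the server output for errors rather than sleeping blindly.

```cpp
// Rough sketch of steps (a) and (b): spawn indiserver in FIFO mode and
// walk a list of drivers, asking the server to start and stop each one.
// Assumes indiserver is on the PATH and supports the -f <fifo> option for
// dynamic driver startup; the driver names below are examples only.
#include <cstdio>
#include <cstdlib>
#include <string>
#include <vector>
#include <unistd.h>
#include <sys/types.h>
#include <sys/stat.h>

int main()
{
    const char *fifo = "/tmp/indiTestFifo";
    mkfifo(fifo, 0600);

    // Launch the server reading driver start/stop commands from the FIFO.
    std::string cmd = std::string("indiserver -f ") + fifo + " &";
    std::system(cmd.c_str());
    sleep(1);   // crude: give the server a moment to come up

    // A real harness would walk the full driver list; these are examples.
    std::vector<std::string> drivers = { "indi_simulator_ccd",
                                         "indi_simulator_telescope",
                                         "indi_simulator_dome" };

    FILE *fp = std::fopen(fifo, "w");
    if (!fp)
        return 1;

    for (const auto &d : drivers)
    {
        // Ask the server to load, then unload, each driver.  A failure to
        // load shows up in the server log even with no hardware attached.
        std::fprintf(fp, "start %s\n", d.c_str());
        std::fflush(fp);
        sleep(2);   // leave time for the driver to initialize (or fail)
        std::fprintf(fp, "stop %s\n", d.c_str());
        std::fflush(fp);
        sleep(1);
    }

    std::fclose(fp);
    return 0;
}
```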
The way the ascom folks do it, when I run the conform tests on the dome driver, the final output is a test sheet showing pass/fail on each of the tests, along with a hash of the driver that was tested. When I submit the driver for inclusion in the downloads, it must contain the conform report, and the hash in that report must match the driver submitted. That's a workable approach for folks dealing with binary-only distributions, but it doesn't fit well with a source distribution.
BUT, there is a way we could solve some of the issues that originally triggered this thread, and with a little more thought, we could fully automate the process. I'm envisioning a 3-tree setup similar to the way debian does it:
Unstable - code builds in the test environment and drivers load, but no functional tests have been completed (no hardware present for testing); i.e. it has passed the first set of automated tests in terms of building and loading.
Testing - code builds in the test environment, and drivers load and function correctly against test jigs, for those drivers that have test jigs.
Stable - code builds in the test environment and has been tested against physical hardware at a location that has the hardware.
We have recently moved to git for hosting the indi project, so my first thought is that each of these trees gets packages tagged with a specific git revision. So, using the example of a driver where I am the maintainer and I have the physical hardware:
1 - The automated build initially populates the unstable tree with a package that successfully built and loaded. This happens on every run of the automated build system.
2 - A driver with test jig points included bumps up to the testing tree after an automated run passes the test jig setup. This probably doesn't happen as often, and each driver bumping up to this level is tagged with the git revision used to build and test it.
3 - A driver bumps up to the stable tree when it has been built and tested against physical hardware, and a test report is generated that includes the git revision tag at which it was built.
For this setup, somebody with hardware in the field would keep their repositories pointed at 'stable' to ensure an apt-get update doesn't bring in broken or untested stuff.
Within git, we can run 3 separate branches: trunk, testing, and stable. Then, as various pieces pass various parts of testing, that code can be merged from trunk to testing, and later from testing into stable.
And yes, this is a very ambitious addition to the project as a whole, but it would vault us up to a much higher level of professionalism in the development process. The real question then becomes: does anybody have the time and inclination to follow through? It's a big task, but there is a fairly straightforward way to accomplish it incrementally. First we get core indilib set up for automated testing with just the simulator drivers under test. The next phase would be to start tackling drivers on an individual basis, define how each could be rigged for automated tests, then implement.
I think one major outcome from all of this is that, with an automated set of client tests generating reports from start to finish, we make the task for folks writing client software much easier, because we introduce a consistent set of expected responses. As an example, today different drivers of the same class can and do present significantly different responses at times, mostly related to timing, order of events, and some of the state conditions. A lot of that would start to 'go away' if we have a definitive set of tests that get run and that trigger all of these conditions.
I offer one example from my experience doing the ascom dome driver. There is a status for 'slewing', and my original interpretation was that it meant the dome was turning. But when running the conform tests, I found an error, both in my interpretation of the state and in the way conform tests it. It turns out the test really only considers the dome 'open or closed', but this dome has the ability to control shutter altitude as well. To make the timing and order-of-events issues go away, I had to include either dome rotation or shutter movement in the 'slewing' state. Without doing that, the test program would issue a movement command to the shutter, immediately treat it as 'completed' even though the shutter was still in motion, and then issue the next shutter movement command. Conform would check that the shutter was open, check slewing, and expect the shutter to be in its final state when open was true and slewing was false. It never once queried shutter altitude to see if the shutter was indeed at the requested altitude.
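Purely as an illustration (this is not the actual driver code, and the names are made up), the fix boils down to something like this:

```cpp
// Hypothetical illustration only -- not the real driver.
// The point: 'slewing' must stay true while *either* the dome is rotating
// or the shutter is still moving, otherwise a client that polls slewing
// will issue its next shutter command while the previous move is still
// in progress.
struct DomeStatus
{
    bool rotating;        // dome is turning in azimuth
    bool shutterMoving;   // shutter (or shutter altitude) is in motion
};

// Original, too-narrow interpretation: slewing == rotating.
// bool isSlewing(const DomeStatus &s) { return s.rotating; }

// What the conform-style test actually requires:
bool isSlewing(const DomeStatus &s)
{
    return s.rotating || s.shutterMoving;
}
```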
My guess is that if we start doing that kind of automated testing against indi drivers, we will be amazed at how much inconsistency there is from driver to driver in terms of how some of these states are returned during different phases of operation.