Andy Kirkham replied to the topic 'INDI Testing Framework' in the forum. 8 years ago

> it's very difficult, and beyond the scope of most folks involved in this project, to create a test jig that includes the vendor provided binary code

The point about unit testing is that you exclude the vendor-provided binary. Once again, we are testing your code, not the binary.

For example, say a binary has an exported function MoveTo(double RA, double DEC). Now, your driver at some point in its life (in some function you write) will call that vendor function. What we are testing is that your driver makes that call with the RA and DEC arguments we expect, given the inputs and the state. We do not call the binary at all; we don't even load it.
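To make that concrete, here is a minimal sketch of such a test using Google Test and Google Mock. Every name in it (IMount, MockMount, MyDriver, Goto) is hypothetical and simply stands in for your driver code and the vendor's exported MoveTo():

```cpp
#include <gmock/gmock.h>
#include <gtest/gtest.h>

// Thin, mockable abstraction over the vendor-provided MoveTo(RA, DEC).
class IMount
{
  public:
    virtual ~IMount() = default;
    virtual void MoveTo(double ra, double dec) = 0;
};

// Test double: records calls; the vendor binary is never linked or loaded.
class MockMount : public IMount
{
  public:
    MOCK_METHOD(void, MoveTo, (double ra, double dec), (override));
};

// Hypothetical driver code under test: it decides what gets sent to the mount.
class MyDriver
{
  public:
    explicit MyDriver(IMount &mount) : mount_(mount) {}
    void Goto(double ra, double dec)
    {
        // ... real driver logic (range checks, state handling) would live here ...
        mount_.MoveTo(ra, dec);
    }

  private:
    IMount &mount_;
};

TEST(MyDriverTest, GotoForwardsCorrectCoordinates)
{
    MockMount mount;
    MyDriver driver(mount);

    // The contract under test: for these inputs the driver must call
    // MoveTo() exactly once with exactly these arguments.
    EXPECT_CALL(mount, MoveTo(testing::DoubleEq(5.5), testing::DoubleEq(-30.25)));

    driver.Goto(5.5, -30.25);
}
```

If the driver ever sends the wrong coordinates, the expectation fails with a message showing the actual call, and the vendor binary plays no part in the test build at all.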

It's not impossible for most people to create what I am talking about. It's actually much simpler than you might think. Yes, if you have never seen this before it looks alien the first time you see it, but that's just learning; we all learn as we move along.
> This was why I originally started writing device simulators when I first tackled updates to the indi project. At least with the simulators, we have what should be a 'known good' piece of code pretending to react like a physical hardware device, which allows for end-to-end testing of the system from client through to indi server and down through the base classes, but excluding the various hardware-specific layers

Simulators can test the clients, and they can test that the base class and the derived INDI driver class make the correct callbacks, but they cannot test your driver. For example:-

indi/libindi/drivers/dome
  • baader_dome.cpp
  • dome_simulator.cpp
  • roll_off.cpp
In your model we test the INDI core, the derived INDI driver subclass, and finally dome_simulator.cpp.
If we were to run code coverage analysis, which two files from that list would pop out as untested? That's right: the two we actually want to test. The one we do cover we don't care about, because it's only a simulator that will never drive anything other than a client integration test.

That's the difference here: you describe system-wide integration tests (which don't actually test the driver unless you own the hardware). So in your world this type of testing falls short, because the code you wrote in those drivers is not exercised by your test suite.

In order to test those missing "real" drivers you need to introduce two things: dependency injection, which in turn pulls in the second thing, a mockable abstraction. You stated earlier that these "test jigs" are "very hard and beyond most folk". They are not; once you understand the concept and see it in action, it's actually pretty simple.
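Continuing the hypothetical IMount/MyDriver sketch above, the production side of that seam could look something like this (again, illustrative names only, not existing libindi API). The adapter is the only code that ever references the vendor binary, so the unit-test build simply leaves it out and links the mock instead:

```cpp
// Exported by the vendor-provided binary; linked only in the production build.
extern "C" void MoveTo(double RA, double DEC);

// The same small interface the driver depends on (see the test sketch above).
class IMount
{
  public:
    virtual ~IMount() = default;
    virtual void MoveTo(double ra, double dec) = 0;
};

// Production adapter: a one-line bridge from the abstraction to the vendor call.
class VendorMount : public IMount
{
  public:
    void MoveTo(double ra, double dec) override
    {
        ::MoveTo(ra, dec); // the only place the vendor symbol is used
    }
};

// Production wiring (e.g. wherever the driver is constructed):
//     static VendorMount mount;
//     static MyDriver   driver(mount);
// Test wiring: construct MyDriver with a MockMount instead.
```

The driver itself never knows which implementation it is talking to; that is the whole trick.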

Simulators are only useful for those without the hardware, to "get going" until the hardware they ordered arrives, or for client developers who don't have the hardware. For driver writers they are useless for automated testing, because we don't want to test the code in the simulator; we want to test your real code in your real driver.

You may believe that that code is really complex and hard to test. I disagree. Yes, the driver logic itself may well be complex, but when it calls IDSet*() in core, that just spits out "something" to stdout. That's all you need: the ability to capture that output so you can analyse it against an expectation of what should be sent to stdout for a given test.
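As a rough illustration of that idea (not a real driver test), Google Test can capture stdout for you; CaptureStdout()/GetCapturedStdout() live in the testing::internal namespace, so treat them as a convenience rather than a guaranteed API. The ReportRA() function below is a hypothetical stand-in for driver code whose IDSetNumber() call ends up on stdout:

```cpp
#include <cstdio>
#include <string>

#include <gtest/gtest.h>

// Hypothetical stand-in for driver code that publishes a value via IDSetNumber(),
// which in a real driver ends up as an INDI XML message on stdout.
static void ReportRA(double ra)
{
    std::printf("<setNumberVector device='MyScope' name='EQUATORIAL_EOD_COORD'>"
                "<oneNumber name='RA'>%.3f</oneNumber></setNumberVector>\n",
                ra);
}

TEST(DriverOutputTest, RAIsPublishedOnStdout)
{
    testing::internal::CaptureStdout();
    ReportRA(5.5);
    const std::string out = testing::internal::GetCapturedStdout();

    // Analyse what was (or would have been) sent towards indiserver.
    EXPECT_NE(out.find("name='RA'"), std::string::npos);
    EXPECT_NE(out.find("5.500"), std::string::npos);
}
```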
> I guess I _could_ spend an inordinate amount of time on a libusb hook set to test inputs and outputs for one of those cameras

Not so. I know for a fact I can mock both INDI's hid_device AND libusb in an evening. Job done, and done once; reused MANY times for all tests. What takes the time is refactoring your drivers to use the dependency injection and then retrospectively designing suitable tests for the legacy code. Those tests for the existing legacy code will certainly take time. But what's the rush? Introducing the DI won't break anything; the driver will still function perfectly without a single test. And it doesn't have to be done in one go. Do it one driver at a time and one driver API function at a time.
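For the USB case, a reusable mock could look roughly like the sketch below. IUsbTransport, MockUsbTransport and CameraDriver are all hypothetical names; the interface merely mirrors the shape of a bulk transfer rather than wrapping the actual libusb or hid_device calls, and it is the part you would write once and reuse across driver tests:

```cpp
#include <cstdint>
#include <vector>

#include <gmock/gmock.h>
#include <gtest/gtest.h>

// Hypothetical, mockable transport the driver talks to instead of libusb directly.
class IUsbTransport
{
  public:
    virtual ~IUsbTransport() = default;
    // Both return the number of bytes transferred, or a negative value on error.
    virtual int BulkWrite(uint8_t endpoint, const std::vector<uint8_t> &data) = 0;
    virtual int BulkRead(uint8_t endpoint, std::vector<uint8_t> &data)        = 0;
};

class MockUsbTransport : public IUsbTransport
{
  public:
    MOCK_METHOD(int, BulkWrite, (uint8_t endpoint, const std::vector<uint8_t> &data), (override));
    MOCK_METHOD(int, BulkRead, (uint8_t endpoint, std::vector<uint8_t> &data), (override));
};

// Hypothetical driver logic under test: send a command, expect an ACK byte back.
class CameraDriver
{
  public:
    explicit CameraDriver(IUsbTransport &usb) : usb_(usb) {}
    bool Handshake()
    {
        if (usb_.BulkWrite(0x01, {0xAA, 0x55}) != 2)
            return false;
        std::vector<uint8_t> reply;
        return usb_.BulkRead(0x81, reply) == 1 && !reply.empty() && reply[0] == 0x06;
    }

  private:
    IUsbTransport &usb_;
};

TEST(CameraDriverTest, HandshakeSendsCommandAndAcceptsAck)
{
    using ::testing::_;
    using ::testing::DoAll;
    using ::testing::Return;
    using ::testing::SetArgReferee;

    MockUsbTransport usb;
    // Script the conversation: the driver must write this command, then the
    // mock plays the camera and hands back a single ACK byte (0x06).
    EXPECT_CALL(usb, BulkWrite(0x01, std::vector<uint8_t>{0xAA, 0x55})).WillOnce(Return(2));
    EXPECT_CALL(usb, BulkRead(0x81, _))
        .WillOnce(DoAll(SetArgReferee<1>(std::vector<uint8_t>{0x06}), Return(1)));

    CameraDriver driver(usb);
    EXPECT_TRUE(driver.Handshake());
}
```

No hardware, no libusb, no kernel involvement; the whole exchange is scripted inside the test.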
> In the process of doing this project, one thing I came across is the ascom conform test suite. Far from perfect, it attempts to reach some form of testing. It appears to be a client program that will exercise a device, testing for inputs and expected outputs. It is essentially a black-box form of testing. I think indi could benefit greatly from something along those lines as a starting point for automated tests. What I would envision for that is something along this line.

This is what I am advocating. The difference is that rather than one big, huge client that tests the entire world, you break it up into unit tests that exercise API functions. So here is my question. You say "testing for inputs and outputs"; that's what I am saying too. You control the inputs, that's easy. Exactly how are you going to get those outputs? In my world that's where the DI and mocks come into play, and they make it simple. Essentially we are talking about the same functionality here; we are just now discussing how to actually do it. Trying to redirect stdout will only take you so far. Mocks provide the ability to return values that you would otherwise have to try to feed in via stdin, and stdin/stdout redirection is way too messy. Mocks are perfect, they were designed for exactly this, specifically Google Mock in Google's GoogleTest suite.
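A tiny illustration of that "return values" point, again with made-up names (ISerialPort, MockSerialPort) rather than libindi's real tty/connection API: the mock plays the mount and hands the reply straight back, with no stdin plumbing at all:

```cpp
#include <string>

#include <gmock/gmock.h>
#include <gtest/gtest.h>

// Hypothetical query/response abstraction over a serial connection.
class ISerialPort
{
  public:
    virtual ~ISerialPort() = default;
    virtual std::string Query(const std::string &command) = 0;
};

class MockSerialPort : public ISerialPort
{
  public:
    MOCK_METHOD(std::string, Query, (const std::string &command), (override));
};

TEST(MountProtocolTest, ScriptedReplyNeedsNoStdin)
{
    using ::testing::Return;

    MockSerialPort port;
    // When the code under test asks for RA, the mock answers like the mount would.
    EXPECT_CALL(port, Query(":GR#")).WillOnce(Return("05:30:00#"));

    // A real test would inject `port` into the driver and assert on the parsed
    // coordinate; here we only show the scripted reply being delivered.
    EXPECT_EQ(port.Query(":GR#"), "05:30:00#");
}
```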
> The way the ascom folks do it, when I run the conform tests on the dome driver, final output is a test sheet showing pass/fail on each of the tests

That's exactly what Google Test will give you for a pass. For a failure, however, you get a ton of extra information about the nature of the failure, not just a "fail, game over, you lose", which is pretty much all an integration test can tell you.
> Unstable - code builds in the test environment, drivers load - no functional tests completed (no hardware present for testing), i.e. it has passed the first set of automated tests in terms of building and loading.

> Testing - code builds in the test environment, drivers load and function correctly against test jigs for those that have test jigs.

> Stable - code builds in the test environment, and has been tested against physical hardware at a location that has the hardware.

This is pretty much standard in the open-source world. In git development it's normally called: "release tags" (stable), the master branch (pre-stable), and develop (unstable); other branches are down to the developers, and they should merge to develop when ready.
> And yes, this is a very ambitious addition to the project as a whole, but it would vault us up to a much higher level of professionalism in the development process. The real question then becomes, does anybody have the time / inclination to follow through, it's a big task. But there is a fairly straightforward direction to accomplish it incrementally: first we get core indilib set up for automated testing with just simulator drivers under test. The next phase would be to start tackling drivers on an individual basis, define how each could be rigged for automated tests, then implement.

It is ambitious, yes. But we have already made a start with this discussion. There are two types of testing:-

1. Unit tests. These run on every commit of code; they are automated and they hold your API contracts to account.
2. Regression tests. These normally happen when you branch from master into a release branch. Once you have a release branch, you regression test it and run any automated integration tests you have (this usually requires some human input). You may get bug fixes here, but they have to be very minor; major bugs abort the release process, because you are not ready. Once you release, you tag, merge back to master, and the branch can be deleted. A new cycle starts.

I hope this is understandable. The key point is to test as much code as possible. I think simulators are only useful for client and integration tests; they don't reach the real, complex driver code base at all. That requires DI and mocking.
