
INDI Testing Framework


Replied by Gerry Rozema on topic INDI Testing Framework

INDI already provides wrappers for the tty functions.

www.indilib.org/api/group__ttyFunctions.html

Guess I'm not clear on what you are trying to do by introducing a second layer of indirection there, short of building an emulator to respond like each and every different kind of hardware device. This was the reason I originally wrote the camera and mount simulators: to allow for client development without actually hooking up to physical hardware. I believe the simulators get used a lot for testing and debugging the layers of the server as well these days.

Testing whether or not things build correctly is easily done, but including hardware-specific test functions is not so easy if you don't have the hardware, which is the bane of a project like INDI. Testing a driver often involves having very specialized and often expensive equipment, so it's difficult to test it all.
7 years 10 months ago #8689

Replied by Andy Kirkham on topic INDI Testing Framework

Your wrapper functions directly call OS system functions.

Do you know how Google Mock and DI (dependency injection) work? The idea is to replace the direct calls with mockable interfaces. If you are familiar with Google Mock you will understand what I am trying to achieve here.

Likewise, the ID* functions directly use printf() to shove stuff down the stdout pipe. Yes, we could redirect that, but I think you are missing the point. If we cannot introduce something like Google Mock then your automated unit testing will be seriously restricted. And by the time you refactor the code to use a home-brew solution no one will want to look at the code anymore.
7 years 10 months ago #8690

Replied by Derek on topic INDI Testing Framework


The use of this second layer of indirection (via interfaces) is absolutely crucial if you want to write testable software in the Java world I work in.
It's what makes DI frameworks such as Guice and Spring so popular in terms of testability (again, coming from a Java background).
When I say tests and testing I mean automated, not manual.

Sorry if I'm a bit blunt :-) The C++ world is a bit alien to me and I'm a testing fanatic when it comes to unit testing and code coverage :-)
The following user(s) said Thank You: Andy Kirkham
7 years 10 months ago #8691

Replied by Andy Kirkham on topic INDI Testing Framework

Gerry,

It's not a second layer. The first layer you pointed to is "helper functions", not a replaceable interface. All calls to, for example, tty_connect() will do exactly what that function does. There's no opportunity to swap out tty_connect() for a different function (a mock).

The mock is placed around your helper functions. Then, using Google Mock, it's possible to swap out one interface for another. Google Mock is an advanced mocking system that lets you set expectations and canned behaviour with constructs like EXPECT_CALL(...).Times(...) and Return(...).

I will say if you have never seen or used it then yeah, it's a bit alien. But once you see it in action you'll understand more.

[edit] adding ref github.com/google/googletest/blob/master...k/docs/ForDummies.md
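To make the idea concrete, here is a minimal sketch of the kind of mock-plus-DI setup being described, written against the current Google Mock API. The ITTYPort interface, MockTTYPort class and connectWithRetry() helper are all hypothetical names invented for this example (they are not part of INDI); the point is only that the driver talks to an injected interface instead of calling tty_connect() directly, so a test can program the responses and assert how the driver behaves without any hardware attached:

```cpp
#include <gmock/gmock.h>
#include <gtest/gtest.h>

#include <string>

// A thin, swappable interface around the existing tty helper functions.
class ITTYPort
{
  public:
    virtual ~ITTYPort() = default;
    virtual int connect(const std::string &device, int baud) = 0;
    virtual int write(const std::string &data) = 0;
};

// Google Mock generates a fake implementation the test can program.
class MockTTYPort : public ITTYPort
{
  public:
    MOCK_METHOD(int, connect, (const std::string &device, int baud), (override));
    MOCK_METHOD(int, write, (const std::string &data), (override));
};

// Hypothetical driver helper under test: retry once if the first attempt fails.
int connectWithRetry(ITTYPort &port, const std::string &device, int baud)
{
    int rc = port.connect(device, baud);
    if (rc != 0)
        rc = port.connect(device, baud);
    return rc;
}

TEST(ConnectWithRetry, RetriesOnceAfterFailure)
{
    using ::testing::Return;

    MockTTYPort port;
    // The expectation is the specification: connect() is called exactly twice,
    // failing the first time and succeeding the second.
    EXPECT_CALL(port, connect("/dev/ttyUSB0", 9600))
        .Times(2)
        .WillOnce(Return(-1))
        .WillOnce(Return(0));

    EXPECT_EQ(0, connectWithRetry(port, "/dev/ttyUSB0", 9600));
}
```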
Last edit: 7 years 10 months ago by Andy Kirkham.
7 years 10 months ago #8692

Replied by Gerry Rozema on topic INDI Testing Framework


In fact, I know nothing about either of them.

But enlighten me. Use the example of one of the drivers I recently submitted to the repositories: it works with a Temma2 mount, which I have in the corner of my office right now. How will introducing a new layer help in testing that driver? It's not possible to test it without the hardware, so at present I believe I'm the only one in the developer group who can test it. It's not possible to watch the mount slew correctly if the serial I/O is sent off to some other destination.

So what I'm missing here is this: how does an extra layer of abstraction help in testing device drivers? With real hardware in place, all it does is add extra overhead to every communication cycle with the hardware. Without the hardware hooked up, you can't test the device anyway. So what am I missing?

I have another device I'm working with here that's not yet released: it's a dome driver, but I am writing the firmware for the dome hardware as well. I know all the basic dome functionality is working correctly because it's inherited from the base dome class, and the INDI driver layer just provides I/O between the physical hardware and the underlying base dome code. The base dome can be subjected to automated testing easily enough using the dome simulator, which doesn't talk to real hardware at all. The driver layer can't be tested without real hardware.

So what exactly am I missing ?
7 years 10 months ago #8693

Replied by Andy Kirkham on topic INDI Testing Framework

ok, I will go look at your code and have a ponder.

But, here is the really important bit I think you are missing. You are NOT testing the hardware in the corner of the room. That's someone else's job (the manufacturer's).

You are testing YOUR CODE. So, as long as your code emits the correct data that would have gone to the mount, your code passes the test. No need to watch a mount actually slewing at all. You just have to ensure YOUR CODE (under test) emits the correct command with the expected data.

That's where the abstraction comes in. We need to capture that command as it leaves and check it's correct.

If you use tty_write() then it's gone. No chance to check it. But if we wrap around tty_write() we can capture WHAT YOUR CODE sent and check it was what we expected should be sent under that condition. That's what the swappable interface gives you.

Remember, we are testing code, not hardware. Travis CI doesn't have your mount, a dome, a CCD, or any of the other possible hardware. So we need to monitor the I/O of what the code is doing against an expected set of preprogrammed outcomes.
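As a sketch of the "capture what the code sent" point: the ITTYWriter interface, the abortSlew() helper and the "PS\r\n" payload below are all invented for illustration (they are not the real Temma protocol or an INDI API), but they show how a Google Mock expectation pins down the exact bytes the code emits, with no mount attached:

```cpp
#include <gmock/gmock.h>
#include <gtest/gtest.h>

#include <string>

// Minimal write-only interface the driver uses instead of calling tty_write() directly.
class ITTYWriter
{
  public:
    virtual ~ITTYWriter() = default;
    virtual int write(const std::string &data) = 0;
};

class MockTTYWriter : public ITTYWriter
{
  public:
    MOCK_METHOD(int, write, (const std::string &data), (override));
};

// Hypothetical driver routine under test.
int abortSlew(ITTYWriter &port)
{
    // Placeholder command string, not the real mount protocol.
    return port.write("PS\r\n");
}

TEST(MountDriver, AbortSlewEmitsExpectedCommand)
{
    using ::testing::Return;

    MockTTYWriter port;
    // Exactly one write, with exactly this payload. If a later change alters
    // the bytes the driver sends, this test fails and the build breaks.
    EXPECT_CALL(port, write("PS\r\n")).Times(1).WillOnce(Return(0));

    EXPECT_EQ(0, abortSlew(port));
}
```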

For the most part it can be somewhat tiresome, I agree. However, think of it like this. Imagine you had a full unit test suite in place from the start of your project. Then, in 2 months someone reports a bug. In your world, only you can fix it because only you have the mount. In the CI world, what you do is write a unit test that recreates the bug actually happening. The unit test will fail. Fix the bug and the unit test will now pass. And for all time to come that unit test will be there to prevent a regression from coming back, because it runs on every build.

I will go look at your code as my first UT target (I was going to do something simple like a dome, but hey, let's get real-world and demonstrate this).
The following user(s) said Thank You: Derek
7 years 10 months ago #8694

Replied by Derek on topic INDI Testing Framework


Yes!!! This is the ideal.
7 years 10 months ago #8696

Replied by Gerry Rozema on topic INDI Testing Framework


In the virtual feel-good world, that's how it should work. But I live in the real world, and have been doing open source stuff for decades, and in most cases historically it has involved starting out by reverse engineering the protocols, often to an incomplete description, but 'complete enough' that we can write a driver that works. I'm not sure there are many astro manufacturers that care at all about whether or not this stuff works; the vast majority will tell you 'download the ASCOM driver', with a few exceptions.

The day may come when manufacturers are testing the equipment against INDI subsystems, but we aren't there yet. Heck, over half of them won't even release a spec on how to talk to the hardware. Without specs, you are hamstrung into the world of reverse engineering protocols, and trying to figure out why the hardware does what it does at various times. This is not possible without hooking up to real hardware.

The Temma driver in this case was developed from a rather incomplete description of the Temma protocols, and I don't think Tak has EVER published a spec on how it works, at least not one I've ever been able to find. My experience with various drivers to date:

Synscan - I got an incomplete spec in the manual of the EQ6 when we bought it. That spec has since had an update from the SkyWatcher folks, but that update too is incomplete. I figured a few things out by looking at the Celestron specs, then poking the Synscan controller to see 'what else' it responds to. It makes a big difference for some things which firmware version is in the hand controller. For example, with earlier firmware versions, ask the hand controller for alt/az and you get back HA/DEC co-ordinates. Later versions, ask for ALT/AZ and you get back ALT/AZ for the current location and time. BUT, send it an alt/az goto and it treats it like HA/DEC co-ordinate space in some cases, and like 'who knows what' in other cases. This depends entirely on the hand controller firmware version; some of these things change drastically between versions. But one thing I can say with absolute certainty: it does NOT behave as the SkyWatcher doc suggests in a lot of the obscure cases.

Temma2 - I found a hack spec online, which was used to write the original (long since abandoned) INDI Temma driver. I started with a blank piece of paper, the old code, a mount and a terminal program. Took a few evenings, got it all working. There are still some things not well understood in its serial protocol, but the driver works in spite of it.

Starlight Xpress - Got a copy of a document from SX quite some years back, and found a DOS hack online. Between the two, I was able to write a driver that worked with our SXV-H9 cameras correctly. This was one of the better projects, because it didn't involve reverse engineering; I actually had documentation from the manufacturer about how it _should_ work. Peter later took that code base and expanded it to cover more cameras etc.

MaxDome - I didn't do this one, but I did make extensive use of the information in the MaxDome INDI driver, thanks to somebody else's efforts on reverse engineering. Like above, the reverse engineering is 'good enough' to make it work, but there are points even in the INDI driver where the comment is 'do this because the ASCOM driver does, not sure what it is doing'. I wrote a dome simulator which pretends to be a MaxDome based on the INDI driver, and it worked great with the INDI stuff. When I hooked it up to the MaxDome ASCOM driver, that one gets sick, and I can't figure out why; it's not making any requests unsupported in the simulator, but it must be expecting some other idiosyncrasy we aren't aware of.

NexDome - This one is not released yet, and in fact just had its first real live test hooked up to a real dome only half an hour ago over in the vendor's shop. This project is DRAMATICALLY different in that I wrote the drivers (ASCOM and INDI), as well as the firmware for the controller. This is a vendor that 'gets it' with respect to open source, and when I took on the project it was on the understanding that all the code, firmware and drivers, would be covered by the GPL and released via a GitHub repository. The type of testing you describe is possible with this one because all of the protocols will be 'well known' by being published (soon).


But these kinds of hurdles are precisely the reason I wrote the original incarnation of the base classes in the manner I did: to fully isolate the INDI layers from the hardware layers. Using the dome example, it's possible to fully test all of the dome functionality with the dome simulator; then the hardware layer need only be concerned with correct communication between the driver and the device. The whole works above the communication layer can be tested by simply using the simulator drivers.

Don't get me wrong, I'm all for some form of automated testing setup, but I think it can realistically be done using the simulators to fully exercise the entirety of the INDI layer. For most of the hardware layer drivers, building the test jig for the underside of a test system will be a lot more work than hooking up to real hardware, bordering on impossible for some due to the timing idiosyncrasies and unknowns of the hardware. Again, using an example of one I'm working on right now, the dome: to unit test anything dome related that doesn't involve hardware idiosyncrasies, that can be done with the dome simulator. But to test the driver itself, it has to respond correctly to various requests, and it has to include appropriate timing. Having written both sides of the communication already, I think building a test jig to simulate the hardware would be as much work, if not more, than the original firmware development, and it'll always still end up being 'just a little different'. The test jig will have to keep track of motor positions, motor speeds, stepper counts, figure out where the dome is in proper real time, respond to sensor activations (home, shutter open, shutter closed, battery levels) etc. You can't send it a 'close shutter' command and have the test jig instantly respond as 'closed' if you are trying to do a proper test, because in the real world it takes a varying amount of time for that command to complete, dependent entirely on where the shutter was positioned at the time of the request. If an instant response to a close is 'good enough' for testing, then that's exactly what the dome simulator is for.

In an ideal world, yes, we can all work from a manufacturer spec and make drivers that are bulletproof against that spec, but that's not the real world. In the real world, writing a driver for INDI often begins by figuring out how things work, often _in spite_ of manufacturer attempts to prevent that. Equally often, it involves trying to work with binary blobs that are horribly broken, and most of the effort is trying to work around manufacturer shortcomings. QHY and SBIG are two glaring examples of this: go to the website and they claim to have support, but when you start writing stuff to link with the manufacturer blobs, much to your dismay, things don't work as documented and the majority of the effort is trying to figure out ways to make the darn thing actually work.
The following user(s) said Thank You: Jasem Mutlaq
7 years 10 months ago #8697

Replied by Andy Kirkham on topic INDI Testing Framework

It doesn't matter if you forward or reverse engineer anything. The engineering aspect is always the same: you need to figure stuff out. Whether it's solving new problems of your own or solving other people's problems through reverse engineering, it's all the same; your eventual output is code.

And yes, I live and write code for the real world too. I've written code for automated trains, for aeroplanes, etc. They all move slowly around the world, just like a dome or scope. The point is that automated testing is about testing the code you end up writing as a result of your engineering efforts, whether that effort is based on forward or reverse engineering.

Your simulators are manual. You, a real person in the real world, have to click stuff. And if you want automated tests then you will need to automate those simulators. Congrats, you just reinvented a wheel that's already been solved elsewhere.

And as for timing, like a dome or scope moving in real time: you miss the point. You mock the clocks too, so your software believes it took 10 seconds for something to happen when in fact it was a couple of microseconds. If automated tests had to wait real-world time, nothing would ever build, because new commits would land in the repo before the last build completes! Mocks are the key.
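A minimal sketch of what "mock the clocks" can look like in practice; the IClock and FakeClock names are made up for this example and are not part of INDI:

```cpp
#include <cstdint>

// Injectable clock: production code asks this for the time instead of calling
// gettimeofday() or std::chrono directly.
class IClock
{
  public:
    virtual ~IClock() = default;
    virtual int64_t nowMilliseconds() = 0;
};

// Test double: time advances only when the test says so.
class FakeClock : public IClock
{
  public:
    int64_t nowMilliseconds() override { return now_; }
    void advance(int64_t ms) { now_ += ms; }

  private:
    int64_t now_ = 0;
};

// A driver waiting for a shutter to close would poll clock.nowMilliseconds()
// in its timeout loop. The test injects a FakeClock and calls advance(10000),
// so the driver "experiences" 10 seconds of waiting in a few microseconds of
// real wall-clock time.
```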

But I'm not going to battle with you on this. If you don't want automated tests then fine, I'll pass on this and not waste my time.
The following user(s) said Thank You: Jasem Mutlaq
7 years 10 months ago #8698

Replied by Gerry Rozema on topic INDI Testing Framework


I'm not trying to battle, I'm trying to understand how it'll work for automated testing on some of these drivers. But I think we disagree on one fundamental detail, your implication that the 'eventual output is code'. In fact, that's not the case as I see it here with our observatory. I write code to make our hardware function the way I need it to function, but the eventual output is a data analysis. Before we moved, my wife and I were heavily involved in doing exoplanet transit confirmations, and once our domes are in place, we'll be going back to that project. Our output is not code, it's transit observations, be they positive identifications or null results.

Temma was probably a poor example, because it is fully fleshed out in the sources. But use the SBIG camera as an example: there is a binary blob that is produced by the vendor, and it's supposed to behave in a specific fashion. It would be great if it did, but it doesn't. With some of the cameras apparently it does, i.e. with the newer 8300 types. But use that same blob with my ST10-XME and things change; some of the functions work as expected, some don't. Requests to the vendor for updates fall on deaf ears for the most part. The simple example: if you have an 8300-based unit and tell it to turn the fan on, it turns on. But with the ST-10, tell it to turn on the fan and the fan turns off, and nothing seems to trigger it to turn back on.

So the question I've been trying to get answered, but obviously not clearly, is this: how are we supposed to build something around the vendor binary to test an all-up driver? Is the expectation that we start writing test jigs to simulate each of the various hardware vendors' equipment that somehow wrap _under_ that vendor-provided binary layer, or do we introduce a new thing that has the same interface but pretends to talk to the USB-connected camera? Is a test considered a 'pass' if the result meets the spec of what the vendor says it _should_ do, even if the real hardware does something different? Or are you suggesting we hook blobs like that with a replacement for libusb, and write something that responds the way the hardware does via that set of hooks?

I'm not at all against automated testing, I'm all for it, but I'm trying to understand _how_ to test some of these drivers. Serial-port-driven stuff is probably a poor example; most of the complex hardware uses a proprietary USB protocol that is totally undocumented, serviced by a binary that _supposedly_ understands all the different variations of hardware that vendor produces. SBIG and QHY are two examples that provide endless trouble, and there are more.
7 years 10 months ago #8699

Replied by Jasem Mutlaq on topic INDI Testing Framework


I don't think we should pass up the opportunity for automated testing. However, we should approach this carefully and progressively. How about a middle-ground approach? A lot of issues, like the issue that brought up this very subject, were due to a bug in the core library which automated tests would certainly detect. How can we implement this without a complete rewrite of several of the core components?
7 years 10 months ago #8701

Replied by Andy Kirkham on topic INDI Testing Framework


Progressively for sure. It's not going to happen overnight to the entire codebase. It'll have to come bit by bit. And I don't expect all INDI developers to jump on it right from the start. I will agree that getting UTs in place can be something of a bind to start doing. It has to be a gradual process, most likely led and championed by someone. But that someone (or team) has to promote it and educate where required.

In my conversations with Gerry I offered to look at his driver. Then he swapped from that to another. So, let's not dance on this. What I suggest is this: introduce the base framework, and then write one UT aimed specifically at the bug that raised this issue in the first place and put that in place. Once we have that, we can come back around the table and decide exactly how you want to proceed. Maybe UTs for the core but not the drivers? Or drivers get UTs if their maintainer wants them? There are so many routes to take, but that can be decided as we progress.

So this evening I will look for that bug you fixed (from the original issue) and UT it. Then we take it from there.

However, I would like to respond to Gerry's last post, where he raised some legitimate concerns:

The vendor binary is a shipped driver with an exported API. What happens when you call that API is of no concern whatsoever to you. We are talking about testing your code, not their binary. However, this doesn't answer your question at all, because your expectation is that something in the real world happens when your code does something. This is where you are missing the point.

Let's explain this by example, which I hope is clear.

First, an ideal situation that doesn't happen very often but we all aspire to. Let's imagine for a second a vendor ships a binary driver and a PDF document that describes the function calls it exports that you consume. Let's assume this documentation is perfect and the driver is bug-free. Let's also assume it's simple: it opens and closes a dome roof. One function, "roof(arg)", with one arg, "open" or "closed".

The natural instinct here is to write a program that opens and closes the roof. It's simple, right? But will that work at my house? Don't know, I don't own one to try it on. This type of testing only works for you and others who own one. But who cares, why would I want to know? I don't own one. The point here is that testing is limited to only those who own one. No possibility of automated tests here, except in a severely limited environment. But like I said, who cares?

So, now let's move on to a real-world case, one we all experience every day. Same as above, but for whatever reason the vendor now believes the building moves and the roof stays still. They think "open" is now "closed" and vice versa. So, your expectation of this specification is easy, but when you try to implement it, it fails. Is this a failure in your expectation? Or is it a bug from the vendor? Who knows; all we do know is the outcome is unexpected behaviour. Something needs to be fixed. So, after some investigation you change the way you call their API and you get the expected outcome. You may even leave a comment in the code lambasting the vendor. But who cares, it's fixed.

Six months later Johnny buys a dome, installs it and fires up your code. And it goes the wrong way! Little does Johnny know he's wired it back to front. But hey! Look, the software is backwards; he fixes it and commits. His dome works. Two weeks later you upgrade and..... Now, clearly it would have been better if Johnny hadn't done that. That's where unit tests come in. Swapping the functionality would have "broken the build" and his commits would be pulled pending further investigation. Now yes, Johnny could have smudged the unit tests to pass as well. But the important point here is that the unit tests are the specification. Change them at your peril! That's TWO mistakes if you do that. When the build breaks, stop! "What have I done?!" is what it screams at you, because a failed unit test is a specification/API breakage.

The above is very simplistic. It's more likely Johnny will find an "almost suitable" function and bend it slightly to meet his needs, breaking yours in the process. Unit tests are your contract that, in future, the underlying code is sound and meets the specification.
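A sketch of how a unit test turns that hard-won workaround into a contract. The roof() vendor call, the IVendorDome interface and openRoof() are all invented for this example, following the hypothetical dome scenario above:

```cpp
#include <gmock/gmock.h>
#include <gtest/gtest.h>

#include <string>

// Stand-in for the vendor's exported API, behind an interface so it can be mocked.
class IVendorDome
{
  public:
    virtual ~IVendorDome() = default;
    virtual void roof(const std::string &arg) = 0;
};

class MockVendorDome : public IVendorDome
{
  public:
    MOCK_METHOD(void, roof, (const std::string &arg), (override));
};

// Driver code containing the workaround for the vendor's inverted argument.
void openRoof(IVendorDome &dome)
{
    // Vendor quirk: "closed" actually opens the roof on this controller.
    dome.roof("closed");
}

TEST(RoofController, OpenSendsTheWorkaroundArgument)
{
    MockVendorDome dome;
    // If Johnny "fixes" openRoof() to send "open", this expectation fails and
    // the build breaks before his change reaches anyone's observatory.
    EXPECT_CALL(dome, roof("closed")).Times(1);

    openRoof(dome);
}
```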

Also, it's worth noting how you build with unit tests. There are those out there that preach TDD (writing tests BEFORE you write code). I actually suck at this myself. I'm like "I don't know what I want to test yet, I need to write some code first!". That is not true TDD. But my take on it is that if you commit UTs along with the code together, you did TDD. It's tested.

I tend to achieve this by being in one of two states during development.

State 1, "the clean designer". Here, the spec is clear and the vendor API is fault-free. Implement something, test it, glue it in place with a unit test, repeat (until done).

State 2, "the oil-stained engineer". Here, State 1 was going great, but then suddenly the vendor's API behaved in an unexpected manner. STOP! As an oily engineer I swap hats and start an investigation (which often involves writing spike code which you throw away), the outcome of which clarifies the specification (whether that be talking with the vendor or, more often than not, reverse engineering). Once you have a clear understanding of the spec again, swap hats back to State 1: implement, test, glue it in place with a unit test.

That tends to be my workflow. The unit tests that "glue it in place" are a true reflection of your final specification (after fixes if needed). Your working code is the implementation of that specification.

Let's come back to one more point. Imagine Johnny changed your code and committed it. Johnny doesn't like TDD: not only did he not fix the UT to match his change, he didn't even bother to run the test suite at all. Now, if that change lands back with you in a future upgrade and you don't notice, then that swap could actually damage your dome (and maybe a telescope?). You are going to have to pay for that! And the only way anyone will catch this bug is when your roof hits your telescope. Ouch. It doesn't get more real-world than that. Unit tests would prevent that.

It all works OK if it's only you, your code, your dome and your telescope. But the moment you make it public, it's going to end up one day running on someone else's system. Community code needs a level of quality that people trust.

You are already critical of SBIG's and QHY's code quality. If you want to be publicly critical of others then it's best your own house is in order first. Lead by example.

Maybe the INDI website should actually start publicly listing vendors who depart from their specifications or are plain deaf to support requests. We have an opportunity with a project like INDI to not only provide some level of QA for INDI but to lead by example and shame some vendors into a more positive light. Lifting our own coding standards lifts those around us as well :)

Everyone wins (except, of course, that unit-tested code takes longer to do because it's simply more work, but the long-run benefits outweigh the short-term "feel good, it works now" factor).
The following user(s) said Thank You: Jasem Mutlaq, Derek
Last edit: 7 years 10 months ago by Andy Kirkham.
7 years 10 months ago #8702