Basically, we rely on the following overall architecture:

  • every machine under test has an appropriate probe installed
  • a driver generates random operations on the installed system; the probe reports any failures (assertions, crashes, warnings, … = events) to a central server
  • the central server application aggregates the results and produces reports used by developers
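As a rough illustration of the probe's role, here is a minimal sketch of how one failure could be packaged and shipped to the central server. All names here (`build_event`, `report_event`, `SERVER_URL`, the field names) are assumptions for the example, not the actual protocol of the system described:

```python
import json
import socket
import urllib.request

# Hypothetical endpoint; the real server address is deployment-specific.
SERVER_URL = "http://central-server.example/events"

def build_event(kind, message, software_version):
    """Package one failure observed by the probe as an event record."""
    return {
        "host": socket.gethostname(),   # which machine under test reported it
        "kind": kind,                   # e.g. "assertion", "crash", "warning"
        "message": message,
        "software_version": software_version,  # version under test, for reports
    }

def report_event(event):
    """POST the event to the central server as JSON."""
    data = json.dumps(event).encode("utf-8")
    req = urllib.request.Request(
        SERVER_URL, data=data, headers={"Content-Type": "application/json"}
    )
    return urllib.request.urlopen(req)

if __name__ == "__main__":
    print(build_event("crash", "segfault in module X", "1.4.2-rc1"))
```

Tagging every event with the software version is what later lets the reports answer "does this bug still occur in the latest build?".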

[figure: architecture-general]

We assume new software versions are installed automatically by the Continuous Integration system, so test results stay up to date with current development (with less than one hour of delay). Reports show the software version under test, so you can check whether a bug occurred in the latest version or not.

We intend to close the loop much faster than a QA team does:

[figure: architecture-flow]

In a recent implementation of this method (embedded device development) we were able to close the loop in under one hour: the build itself, deployment, test restart, and first results. This allows regression analysis with very fine granularity. A typical QA team checks only selected software versions, deployed a few times a week, so their loop is longer than one day.
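The payoff of the sub-hour loop is version granularity: with events tagged by build, the first failing build pinpoints the regression. A small sketch of that lookup, with data shapes and names invented for illustration:

```python
# Hypothetical regression-analysis step: events carry the software version
# under test, so we can find the first build in which a failure appeared.

def first_failing_version(events, versions):
    """Return the earliest version (in build order) that produced an event."""
    failing = {e["software_version"] for e in events}
    for v in versions:  # versions listed oldest to newest
        if v in failing:
            return v
    return None

# With hourly builds the culprit is one build, not a day's worth of commits:
builds = ["1.4.0", "1.4.1", "1.4.2", "1.4.3"]
events = [{"software_version": "1.4.2", "kind": "crash"},
          {"software_version": "1.4.3", "kind": "crash"}]
print(first_failing_version(events, builds))  # → 1.4.2
```

With a QA cycle that only samples a few builds per week, the same lookup would only narrow the regression to a window of many commits.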
