We rely on the following overall architecture (crucial keywords bolded):
- every machine under test has an appropriate **probe** installed
- a **driver** generates random operations on the installed system; the probe reports any failures (assertions, crashes, warnings, … = **events**) to a central server
- the **central server** application aggregates results and presents reports used by developers
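The driver/probe interaction above can be sketched in a few lines. This is a minimal illustration only: the operation set, event field names, and the endpoint are assumptions, not the actual protocol; in production the `send` callback would POST each event to the central server.

```python
import random

# Assumed endpoint of the central server (illustrative only).
SERVER_URL = "http://central-server.example/events"

def random_operation(rng):
    """Driver: pick a random operation to run on the system under test."""
    return rng.choice(["open_file", "write_block", "reboot", "query_status"])

def run_operation(op):
    """Stand-in for executing the operation; raises on failure."""
    if op == "reboot":  # simulated failure, purely for illustration
        raise RuntimeError("watchdog timeout during reboot")

def make_event(op, error, version):
    """Probe: package a failure as an event the central server aggregates."""
    return {
        "version": version,   # software version under test, shown in reports
        "operation": op,
        "kind": type(error).__name__,
        "message": str(error),
    }

def drive(n_ops, version, send, seed=0):
    """Run n_ops random operations; report each failure via send()."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n_ops):
        op = random_operation(rng)
        try:
            run_operation(op)
        except Exception as exc:
            failures += 1
            send(make_event(op, exc, version))
    return failures

# Here we collect events locally instead of POSTing them to SERVER_URL.
events = []
failures = drive(20, version="1.4.2-rc1", send=events.append)
```

Because every event carries the software version, the server-side reports can attribute each failure to the exact build that produced it.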
We assume new software versions are installed automatically by the Continuous Integration system, so test results stay up to date with current development (with less than one hour of delay). Reports show the software version under test, so you can check whether a bug still occurs in the latest version.
We intend to close the loop much faster than the QA team does:
In a recent implementation of this method (embedded device development) we were able to close the loop in under one hour: the build itself, deployment, test restart, and the first outcomes. This allows regression analysis with very fine granularity. A typical QA team checks only selected software versions, deployed a few times a week, so their loop is longer than a day.