D45 Model-based testing
Overview
Model-based testing (MBT) is an approach from software engineering that uses formal models as the basis for the automatic generation of test cases. A test case is defined as a sequence of actions (or events, or triggers) together with corresponding test data that can be executed against a System Under Test (SUT). In DEPLOY, we investigated a version of MBT that uses Event-B models as the test models from which test cases are generated. The main purpose was to provide the deployment partners with an MBT method together with a Rodin plug-in that allows the generation of test cases satisfying different coverage criteria (e.g. covering all events in a model, or covering paths to a set of target global states). This includes the generation of appropriate test data satisfying the guards of the individual test steps.
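As an illustration, the following is a minimal sketch (in Python, with hypothetical event and parameter names; the actual plug-in works on Event-B models inside Rodin) of what a generated test case amounts to: an ordered sequence of events, each paired with test data that makes its guard true when the sequence is replayed against the SUT.

    from dataclasses import dataclass, field

    @dataclass
    class TestStep:
        event: str                                 # Event-B event to trigger on the SUT
        data: dict = field(default_factory=dict)   # parameter values satisfying the event's guard

    @dataclass
    class TestCase:
        steps: list                                # ordered sequence of TestStep objects

    # e.g. a two-step test case for a hypothetical buffer machine
    tc = TestCase(steps=[TestStep("put", {"n": 3}), TestStep("get", {"n": 2})])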
The partners involved in MBT for Event-B were the University of Pitesti and the University of Düsseldorf on the academic side, and SAP on the deployment partner side.
In the last year of DEPLOY, the work on MBT proceeded according to plan (summarized in the previous deliverable D32), i.e. generating test cases and test data using constraint-solving and evolutionary techniques. In addition, new ideas based on a technique called "automata learning" were investigated. The proposed methods were implemented, a new Rodin plug-in (the MBT plug-in) was publicly released, and experiments were performed on the Event-B models available in the DEPLOY model repositories.
Motivations
MBT is a very active research area using several types of test models and delivering promising results. Before our investigations, the Event-B approach and framework did not support MBT, even though the prerequisites for MBT were satisfied. Moreover, deployment partners such as SAP showed interest in MBT as a good candidate for the adoption of formal methods in practice. A first attempt at MBT was based on the model-checking and state-exploration capabilities of the ProB plug-in: the explicit state space of an Event-B model was explored and event sequences were output as test cases. However, this attempt worked only for small variable domains and ran into the classical state space explosion problem for larger ones. This motivated the search for the alternative solutions presented below.
Choices / Decisions
Over the reporting period of the last year, the following three different solutions were defined, implemented, and tested:
- Using abstraction and constraint-solving: To avoid the state space explosion caused by the large bounds of the variables, this approach ignores the variable values in a first step and uses the ProB model-checker only to generate abstract test cases satisfying the given coverage criteria. These abstract paths may, however, be infeasible in the concrete model because of the data constraints along the path. The solution is to represent the intermediate states of the path as existentially quantified variables, so that the whole path is represented as a single predicate consisting of the guards and before-after predicates of its events. ProB's improved constraint solver is then used to check the feasibility of the path and to find appropriate test data satisfying the constraints (a minimal sketch is given after this list).
- Using model-learning: Event-B models are essentially abstract state machines. However, their states are not given explicitly; they are implicitly derived from the values of the model variables. Since the notion of state is at the heart of MBT, we provide a model-learning approach that uses the notion of cover automata to iteratively construct a subset of the state space together with an associated test suite. The iterative nature of the algorithm fits well with the notion of Event-B refinement, and it can also be adapted to work with decomposed Event-B models.
- Using search-based techniques: We also investigated meta-heuristic search approaches, including evolutionary algorithms, in an attempt to tackle large variable domains and state spaces in the search for test data. We worked on two applications: first, the generation of test data for a given sequence of events (a test case), and second, a generalisation to the generation of test suites satisfying different coverage criteria (also sketched after this list).
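To illustrate the constraint-solving approach, the following minimal sketch encodes path feasibility for a small hypothetical machine (one variable x, events inc and reset), using the Z3 solver merely as a stand-in for ProB's constraint solver: the intermediate states of the path become fresh per-step variables, and a satisfying assignment yields the concrete test data.

    from z3 import Int, Solver, And, sat

    # Hypothetical machine: variable x, initialisation x := 0,
    #   event inc   =  when x < 3 then x := x + 1 end
    #   event reset =  when x = 3 then x := 0     end
    EVENTS = {
        "inc":   lambda x, xp: And(x < 3, xp == x + 1),   # guard and before-after predicate
        "reset": lambda x, xp: And(x == 3, xp == 0),
    }

    def path_predicate(path):
        """Single predicate for an abstract test case: the initialisation plus the
           guard and before-after predicate of each event, over per-step copies of
           the state variable (the existentially quantified intermediate states)."""
        xs = [Int("x_%d" % i) for i in range(len(path) + 1)]
        constraints = [xs[0] == 0]
        for i, ev in enumerate(path):
            constraints.append(EVENTS[ev](xs[i], xs[i + 1]))
        return And(constraints), xs

    def concretise(path):
        pred, xs = path_predicate(path)
        s = Solver()
        s.add(pred)
        if s.check() == sat:
            m = s.model()
            return [m[x].as_long() for x in xs]    # concrete test data along the path
        return None                                # the abstract path is infeasible

    print(concretise(["inc", "inc", "inc", "reset"]))  # feasible: [0, 1, 2, 3, 0]
    print(concretise(["inc", "inc", "reset"]))         # infeasible: None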
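Similarly, for the search-based direction, the sketch below shows test data generation for a fixed event sequence of a hypothetical bounded-buffer machine, using a simple (1+1) evolutionary algorithm and a guard-distance style fitness (fitness 0 means all guards along the sequence are satisfied); the actual DEPLOY experiments used richer genetic algorithms on real Event-B models.

    import random

    CAP = 10   # hypothetical buffer capacity

    def fitness(data):
        """Guard-distance fitness for the fixed test case put(a); put(b); get(c):
           each unsatisfied guard contributes how far it is from holding."""
        a, b, c = data
        size, dist = 0, 0
        dist += max(0, size + a - CAP); size += a   # put(a): guard size + a <= CAP
        dist += max(0, size + b - CAP); size += b   # put(b): guard size + b <= CAP
        dist += max(0, c - size)                    # get(c): guard c <= size
        return dist

    def mutate(data, lo=0, hi=100):
        d = list(data)
        i = random.randrange(len(d))
        d[i] = max(lo, min(hi, d[i] + random.randint(-5, 5)))
        return tuple(d)

    def search(max_iter=10000):
        best = tuple(random.randint(0, 100) for _ in range(3))
        best_fit = fitness(best)
        for _ in range(max_iter):
            if best_fit == 0:
                break
            cand = mutate(best)
            cand_fit = fitness(cand)
            if cand_fit <= best_fit:                # keep the offspring if it is not worse
                best, best_fit = cand, cand_fit
        return best, best_fit

    random.seed(0)
    print(search())   # e.g. a data triple with fitness 0, i.e. a + b <= 10 and c <= a + b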
Furthermore, the output of the above procedures was complemented by test suite optimisations (based on evolutionary techniques) according to different test coverage criteria. This was motivated by the fact that the test generators sometimes produce too many feasible tests. Such large suites may be useful for certain types of intensive testing (e.g. conformance testing), but in other scenarios it is more appropriate to generate smaller test suites for lighter coverage criteria (e.g. each event is executed at least once).
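As a simple illustration of this optimisation step for the lightest criterion mentioned above (each event executed at least once), the following sketch reduces a hypothetical test suite using a greedy set-cover heuristic; the actual DEPLOY work used evolutionary, multi-objective optimisation.

    def minimise(suite):
        """Greedily pick test cases until every event occurring in the suite is covered."""
        uncovered = {e for tc in suite for e in tc}
        reduced = []
        while uncovered:
            best = max(suite, key=lambda tc: len(uncovered & set(tc)))
            reduced.append(best)
            uncovered -= set(best)
        return reduced

    suite = [["init", "put", "put"],
             ["init", "get"],
             ["init", "put", "get", "reset"]]
    print(minimise(suite))   # [['init', 'put', 'get', 'reset']]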
Available Documentation
MBT plug-in wiki page: http://wiki.event-b.org/index.php/MBT_plugin
Associated deliverables containing technical information on the MBT work:
- DEPLOY deliverable D44 - see chapter 10 for technical details on the constraint-solving approach
- DEPLOY deliverable D54 - technical details on the search-based and model-learning approaches
- DEPLOY deliverable D32 - see chapter 10 for backward reference
Papers:
- Ionut Dinca, Alin Stefanescu, Florentin Ipate, Raluca Lefticaru, and Cristina Tudose. Test data generation for Event-B models using genetic algorithms. In Proc. of 2nd International Conference on Software Engineering and Computer Systems (ICSECS’11), volume 181 of CCIS, pages 76–90. Springer, 2011.
- Alin Stefanescu, Florentin Ipate, Raluca Lefticaru, and Cristina Tudose. Towards search-based testing for Event-B models. In Proc. of 4th Workshop on Search-Based Software Testing (SBST’11), pages 194–197. IEEE, 2011.
- Ionut Dinca. Multi-objective test suite optimization for Event-B models. In Proc. of Int. Conf. on Informatics Engineering and Information Science (ICIEIS’11), volume 251 of CCIS, pages 551–565. Springer, 2011.
- Florentin Ipate, Ionut Dinca, and Alin Stefanescu. Model learning and test generation using cover automata. Submitted to IEEE Transactions on Software Engineering, January 2012.
- Ionut Dinca, Florentin Ipate, Laurentiu Mierla, and Alin Stefanescu. Learn and test for Event-B - a Rodin plugin. Accepted at ABZ'12, LNCS, Springer, 2012.
- Ionut Dinca, Florentin Ipate, and Alin Stefanescu. Learning and test generation for Event-B decomposition. Submitted to ISoLA'12, LNCS, Springer, 2012.
Status
We consider that the MBT work reached a rather mature level by the end of the project, having explored different techniques and applied them successfully to the Event-B models available in the DEPLOY repository. Moreover, SAP has experimented with one of the methods (based on abstraction and constraint solving) via a web service in their testing infrastructure. The results were disseminated continuously through presentations at conferences, workshops, and research seminars, as well as through conference and journal papers.