Model requirements, generate test cases, and compare code and model outputs
Model-based testing is a systematic method for generating test cases from models of system requirements. It allows you to evaluate requirements independently of algorithm design and development.
Model-based testing involves:
- Creating a model of system requirements for testing
- Generating test data from this requirements-model representation
- Verifying your design algorithm with generated test cases
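The three steps above can be sketched in a tool-agnostic way. The requirement model, test generator, oracle, and design function below are hypothetical stand-ins for illustration only, not Simulink or MATLAB APIs:

```python
# Minimal sketch of model-based testing (illustrative, not a Simulink API).

# Step 1: model a requirement, e.g. "output shall saturate input to [0, 100]".
requirement = {"min": 0, "max": 100}

def generate_test_cases(req):
    """Step 2: derive test inputs from the requirements model:
    boundary values plus points inside and outside the range."""
    lo, hi = req["min"], req["max"]
    return [lo - 1, lo, lo + 1, (lo + hi) // 2, hi - 1, hi, hi + 1]

def design_under_test(x, req):
    """The design algorithm to verify: clamp x to the required range."""
    return max(req["min"], min(req["max"], x))

def expected(x, req):
    """Oracle derived directly from the requirement,
    independent of the design implementation."""
    if x < req["min"]:
        return req["min"]
    if x > req["max"]:
        return req["max"]
    return x

# Step 3: verify the design against the generated test cases.
results = [(x, design_under_test(x, requirement) == expected(x, requirement))
           for x in generate_test_cases(requirement)]
assert all(ok for _, ok in results)
```

Because the expected outputs come from the requirements model rather than from the design, the same generated cases can later be reused against generated code or hardware.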
In model-based testing, you use requirement models to generate test cases to verify your design. This process also helps automate other verification tasks and streamlines the review process by linking test cases and verification objectives to high-level test requirements. With Requirements Toolbox™ you can author requirements directly within Simulink® or exchange requirements with third-party requirements tools. You can establish and analyze traceability between requirements, design, generated code, and test.
Using Simulink Test™, you can manage test cases and systematically execute them to confirm that your design meets requirements. To increase the quality of generated test cases beyond traditional stochastic and heuristic methods, you can generate tests with Simulink Design Verifier™, which uses formal analysis techniques. With Simulink Coverage™ you can use model and code coverage metrics to assess the completeness of your model-based testing efforts. These metrics can identify missing requirements and unintended functionality.
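How coverage metrics expose gaps can be shown with a toy example. The hand-rolled instrumentation below is only a sketch of the idea of decision coverage, not how Simulink Coverage works internally:

```python
# Toy illustration of decision coverage: record which branch outcomes
# a test suite exercises. Hand-rolled sketch, not Simulink Coverage.

covered = set()

def controller(temp):
    """Design under test with two decisions to cover."""
    if temp > 100:                 # decision 1
        covered.add(("d1", True))
        return "shutdown"
    covered.add(("d1", False))
    if temp > 80:                  # decision 2
        covered.add(("d2", True))
        return "throttle"
    covered.add(("d2", False))
    return "nominal"

# A test suite derived only from "overheat" requirements...
for temp in (120, 150):
    controller(temp)

all_outcomes = {("d1", True), ("d1", False), ("d2", True), ("d2", False)}
coverage = len(covered) / len(all_outcomes)
print(f"decision coverage: {coverage:.0%}")  # prints "decision coverage: 25%"

# The 75% gap shows no test reaches temps <= 100: either the test set
# is incomplete or a requirement for nominal operation is missing.
```

Low coverage against a requirements-derived test suite is exactly the signal described above: either a missing requirement or untested (possibly unintended) functionality.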
To incorporate hardware and production code into model-based testing, you can compare the dynamic outputs of simulation with data collected through software-in-the-loop (SIL), processor-in-the-loop (PIL), or real-time hardware-in-the-loop (HIL) testing. You can use Simulink Test to help manage this equivalence testing workflow.
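At its core, this equivalence (back-to-back) comparison checks logged signals sample by sample against agreed tolerances. The comparison function, signal data, and tolerance values below are illustrative assumptions, not a Simulink Test API:

```python
import math

# Sketch of equivalence (back-to-back) testing: compare a signal logged
# from model simulation with the same signal captured in SIL, PIL, or HIL,
# sample by sample, within agreed tolerances. Data and tolerances are
# illustrative only.

def equivalent(sim, hw, abs_tol=1e-3, rel_tol=1e-2):
    """True if the traces have equal length and every sample pair
    agrees within the absolute or relative tolerance."""
    if len(sim) != len(hw):
        return False
    return all(math.isclose(a, b, rel_tol=rel_tol, abs_tol=abs_tol)
               for a, b in zip(sim, hw))

sim_out = [0.0, 0.5, 1.0, 1.5]           # model simulation output
hil_out = [0.0, 0.5004, 0.9998, 1.5001]  # hypothetical HIL capture

assert equivalent(sim_out, hil_out)
# A mismatch beyond tolerance points to numeric, timing, or
# code-generation differences worth investigating.
```

In practice the tolerances come from the project's accuracy requirements; a fixed-point target, for example, typically needs a looser absolute tolerance than a floating-point SIL run.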
See also: formal verification, requirements traceability, Simulink Design Verifier, Simulink Coverage, Requirements Toolbox, Simulink Test