Applying Model-Based Verification in Automotive ASIC Development
By Aswini Tata, Sanjay Chatterjee, and Kamel Belhous, Allegro MicroSystems, and Surekha Kollepara, Cyient
In semiconductor companies that serve the automotive industry, engineering teams are being asked to deliver increasingly complex systems on tight schedules. Meeting these deadlines is complicated by the risks of late-stage testing. For example, waiting until RTL is available to start functional verification invites cost overruns and delivery delays: design defects and requirements issues found at this stage are much more expensive and difficult to remediate, and teams waste valuable time debugging unrealistic scenarios. In this environment, shift-left testing, in which verification activities are conducted as early as possible in the development cycle, addresses these late-stage challenges.
At Allegro MicroSystems, part of our team has adopted a new shift-left approach that uses Model-Based Design for DSP blocks, incorporating both the generation of HDL code for mixed-signal ASICs and the generation of Universal Verification Methodology (UVM) testbenches for RTL-level verification. With this model-based verification approach, we benefit from early functional verification in Simulink® and a system-level view of the design that facilitates collaboration between systems engineers and verification teams (Figure 1). Early model verification leads to higher-quality HDL because high-level design and requirements issues are found and eliminated before code generation. We expect this early bug detection to save two months of verification effort. We further benefit from rigorous testing of the HDL implementation in our UVM environment and from the reuse of models and test assets across projects.
From Requirements to Executable Specification to Implementation
In a traditional development workflow, the system engineer authors text-based requirements that the digital design team (system architects and RTL engineers) uses to produce the specification and, from that, the RTL design. Our group, the digital verification team, would then be responsible for creating a test plan based on the specification and for performing functional verification to ensure that the RTL design conforms to it. In this workflow, when a defect is detected—typically late in the development lifecycle—it can take a long time to determine whether the root cause lies with the implementation, the specification, or the original requirements.
With our current approach, the workflow is designed to enable verification of the architecture and requirements much earlier. Once the requirements are defined in Jama Connect® by the systems engineer, the digital design team creates an architecture specification model in Simulink. This model acts as an executable specification of the system. By running simulations with this model, the team performs model-in-the-loop unit and integration testing to validate requirements and verify the architecture (Figure 2). On our first project using this approach, these simulations helped us identify several issues, including a scenario in which conditional statements contradicted one another, leading to invalid output for some input stimuli combinations.
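As a minimal illustration of what such a model-in-the-loop check can look like when scripted from MATLAB, the fragment below simulates an architecture model and asserts a requirement-derived output bound; the model name, logged signal, and limit are hypothetical, not taken from our project.

    % Hypothetical model-in-the-loop check: simulate the architecture
    % model and test its output against a requirement-derived bound.
    mdl = 'sensor_arch_spec';              % hypothetical architecture model name
    load_system(mdl);
    simOut = sim(mdl, 'StopTime', '1e-3');
    y = simOut.yout{1}.Values.Data;        % logged output of the model
    assert(all(abs(y) <= 1.0), ...         % assumed requirement: |output| <= 1.0
        'Architecture model violates the output-range requirement');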
In the next stage of development, the team translates the architecture model into a more “hardware-friendly” implementation model in preparation for code generation with HDL Coder™. This can include, for example, converting algorithms from floating point to fixed point, or switching from frame-based processing to streaming.
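As a sketch of what the floating-point-to-fixed-point step involves, the fragment below compares one floating-point computation against a fixed-point version built with fi objects from Fixed-Point Designer™; the 16-bit word length and 12 fractional bits are illustrative assumptions, not our production settings.

    % Illustrative float-to-fixed comparison for one algorithm stage.
    x    = rand(1, 100) - 0.5;            % stimulus in [-0.5, 0.5)
    gain = 1.375;
    yRef = gain * x;                      % floating-point reference
    T    = numerictype(1, 16, 12);        % signed, 16-bit word, 12 fractional bits (assumed)
    xFix = fi(x, T);
    gFix = fi(gain, T);
    yFix = gFix .* xFix;                  % fixed-point implementation
    fprintf('max quantization error: %g\n', max(abs(double(yFix) - yRef)));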
Testbench Models and Verification in Simulink
As the digital design team builds components in the implementation model, Simulink testbench models for those components are developed in parallel. Each Simulink testbench model contains the following subsystems corresponding to UVM components: sequence, driver, DUT, predictor, monitor, and scoreboard (Figure 3). Only the sequence, DUT, and scoreboard subsystems are required for testbench generation with HDL Verifier™.
The sequence subsystem generates stimuli for the device under test, or DUT subsystem, which in this workflow is the implementation model created in Simulink. This subsystem creates and randomizes stimuli using MATLAB® code and Simulink blocks, including the Test Sequence block. Its seed input is used to initialize the MATLAB random number generator. The scoreboard subsystem collects the output of the DUT and compares it against expected output via Assertion for DPI-C blocks, which are used to generate DPI-C components that contain SystemVerilog assertions (Figure 4). (The SystemVerilog Direct Programming Interface [DPI] is an interface between SystemVerilog and a foreign programming language such as C. HDL Verifier can generate DPI-C components consisting of C code with SystemVerilog wrapper code; the resulting DPI-C components can then be executed by HDL simulators that support SystemVerilog.)
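To give a flavor of the sequence subsystem, a seeded stimulus generator can be written as a MATLAB function like the hypothetical sketch below; the function name, output type, and value ranges are illustrative, not our production sequence code.

    % Hypothetical seeded stimulus generator for the sequence subsystem.
    function stim = gen_stimulus(seed, n)
        rng(double(seed));                       % initialize the MATLAB RNG from the seed input
        amp  = randi([1 255]);                   % randomized amplitude
        stim = int16(amp * (rand(1, n) - 0.5));  % randomized samples for the DUT
    end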
Running simulations in Simulink with the testbench model, along with various model verification and validation tools such as Simulink Test™, further validates the implementation model against requirements. We pull the results of these simulations from Simulink back into Jama to facilitate requirements-based testing. Additionally, Simulink Design Verifier™ can be used to identify dead logic in the model.
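A dead logic check of this kind can be scripted in a few lines; in this sketch the model name dsp_impl is a hypothetical placeholder, while sldvoptions and sldvrun are Simulink Design Verifier interfaces.

    % Run Simulink Design Verifier design error detection for dead logic.
    opts = sldvoptions;
    opts.Mode = 'DesignErrorDetection';     % design error detection analysis
    opts.DetectDeadLogic = 'on';            % search for dead (unreachable) logic
    status = sldvrun('dsp_impl', opts);     % 'dsp_impl' is a hypothetical model name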
Code Generation, Testbench Generation, and HDL Simulation and Testing
Once the implementation model and testbench models have been built and used to complete the design verification phase of the workflow in Simulink, we begin the next phase: HDL code generation and verification. In this phase, we use HDL Coder to generate synthesizable HDL code from implementation model components. We also use HDL Verifier—specifically the uvmbuild function from the ASIC Testbench add-on—to generate complete UVM testbenches from the Simulink testbench models (Figure 5). (Another function included in ASIC Testbench, dpigen, generates DPI-C components from MATLAB code or Simulink models for design teams who aren’t using UVM environments.)
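In script form, generating the UVM testbench follows the pattern below; the testbench model name and subsystem names are placeholders, and the dpigen line shows the alternative DPI-C path for a hypothetical MATLAB function my_filter.

    % Generate a UVM testbench from the Simulink testbench model
    % (model and subsystem names are placeholders).
    tb = 'dsp_tb';
    uvmbuild(tb, [tb '/DUT'], [tb '/Sequence'], [tb '/Scoreboard']);

    % Alternative for teams not using UVM: wrap a MATLAB function
    % as a SystemVerilog DPI-C component.
    dpigen my_filter -args {zeros(1, 64, 'single')}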
Using the generated testbench, we then run tests in our UVM environment against the code generated from the implementation model using a digital simulator, such as the Cadence® Xcelium™ simulator (Figure 6). We extend the generated UVM testbench as needed to add more complex constrained randomizations, assertion checkers, and SystemVerilog covergroups for functional coverage analysis. When a test fails in the UVM environment, we use the seed and memory configuration from the failed test to reproduce the failure conditions in a Simulink simulation, which makes it much easier for the design engineer to debug and remediate the failure than debugging directly at the HDL level.
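Replaying a failure in Simulink then amounts to restoring the same seed before rerunning the simulation; the sketch below assumes the seed is supplied by a Constant block named seed inside the sequence subsystem, and both the block path and the seed value shown are illustrative.

    % Replay a failing UVM test in Simulink using its reported seed.
    failingSeed = 3091;                           % seed from the failing UVM run (illustrative value)
    set_param('dsp_tb/Sequence/seed', 'Value', num2str(failingSeed));
    simOut = sim('dsp_tb');                       % rerun the Simulink testbench with that seed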
Next Steps
As requirements from customers continue to grow more demanding and delivery schedules shrink, the model-based verification approach we have adopted is helping our teams deliver more sophisticated algorithms and systems with a reduced time to market. We are planning to extend this shift-left concept to other projects, where we expect the reuse of models and associated Simulink test environments by the verification team to save an additional two months of development effort on medium-complexity projects at Allegro. Going forward, we are also exploring opportunities for our systems engineering teams and our customers to reuse models.
Published 2024