Simulation Management

Topics: Pass or Fail?, Managing Simulations, Regression, Behavioral Models
Pass Or Fail?

The goal of a testcase is to determine whether the DUV passes or fails given a certain stimulus. What determines a pass? You need proof-positive of a successful simulation: include a termination message in the output log file, and if the message is not present, assume failure.
Guard against false positives, where the testbench does not detect certain error situations:
Provide error injection to ensure they are caught.
Provide logging messages for all activity (this can be verbose).
Bracket the regions of injected errors in the log.
Keep track of successes and failures by using a common log package. This makes end-of-test determination easy!
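A common log package can be sketched as a single shared object that counts errors and makes the end-of-test decision in one place. The class and method names below are illustrative, not from any standard library.

```python
# Minimal sketch of a common log package shared by all
# testbench components.
class SimLog:
    def __init__(self):
        self.errors = 0
        self.warnings = 0

    def error(self, msg):
        self.errors += 1
        print(f"ERROR: {msg}")

    def warning(self, msg):
        self.warnings += 1
        print(f"WARNING: {msg}")

    def end_of_test(self):
        """End-of-test determination in one place: the termination
        message is emitted only if no component logged an error."""
        if self.errors == 0:
            print("TEST COMPLETED SUCCESSFULLY")
            return True
        print(f"TEST FAILED: {self.errors} error(s)")
        return False

log = SimLog()
log.warning("unexpected idle cycle")  # warnings do not fail the run
passed = log.end_of_test()
```

Because every monitor and checker reports through the same package, no component needs its own pass/fail bookkeeping.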
Managing Simulations

Are you simulating the right model?
Topics: Configuration Management, Verilog Configuration Management, VHDL Configuration Management
Configuration Management

A configuration is the set of models used in a simulation. It is different from source management (revision control): revision control deals with source files, while configuration deals with which models you are using (behavioral vs. RTL). For system-level tests, you could have a mix of both. You want an easy way to specify a particular configuration; a script can be used to submit runs.
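A run-submission script can map a configuration name to the set of models to use, as in this sketch. The configuration names, file names, and simulator invocation are all hypothetical.

```python
# Sketch of a run-submission helper: a named configuration selects
# which models (behavioral vs. RTL) go into the simulation.
CONFIGS = {
    "all_rtl": ["cpu_rtl.v", "mem_rtl.v"],
    "mixed": ["cpu_beh.v", "mem_rtl.v"],  # system-level mix
}

def build_command(config_name, testcase):
    """Return a simulator command line for one configuration."""
    files = CONFIGS[config_name]
    # In a real flow the file list would go into a manifest (-f file).
    return ["simv"] + files + ["+testcase=" + testcase]

cmd = build_command("mixed", "smoke_test")
print(" ".join(cmd))
```

Keeping the configuration table in one script (itself under revision control) gives a reproducible record of which models each run used.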
Verilog Configuration Management

There are many ways to include source files:
The command line
A file containing a list of filenames (the -f option; the file is called a manifest)
A directory to search for missing module name(s)
The name of a file that may contain definitions of missing modules (the -v option)
Include directives, based on the +incdir command-line option
Verilog Configuration Management (Cont)

The manifest is the only one of these that can be source controlled and reliably reproduced. It is not constrained to just files; it can include all of the command-line options required, and manifests can be hierarchical. Use relative path names, not absolute ones; this assumes everyone on the project has a similar setup. Some simulators have a -F option that prepends the manifest file's path information to the filenames it lists. The other alternative is a preprocessing script that does the same.
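The preprocessing alternative can be sketched as a script that rewrites relative filenames in a manifest so they are relative to the manifest's own directory, mimicking the -F behavior. The option-handling rules here are simplified assumptions.

```python
import os

# Sketch of -F-style preprocessing: make each relative filename in a
# manifest relative to the manifest's own directory.
def expand_manifest(manifest_path, lines):
    base = os.path.dirname(manifest_path)
    out = []
    for line in lines:
        line = line.strip()
        if not line or line.startswith(("+", "-")):
            out.append(line)  # leave simulator options untouched
        elif os.path.isabs(line):
            out.append(line)  # absolute paths pass through
        else:
            out.append(os.path.join(base, line))
    return out

lines = ["top.v", "+incdir+include", "/abs/path/lib.v"]
print(expand_manifest("blocks/alu/files.f", lines))
```

With this, each block's manifest can name its files locally, and hierarchical manifests still resolve correctly from the project root.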
VHDL Configuration Management

VHDL is compiled (Verilog can be compiled or interpreted). How do you know what you are simulating?
Topics: Makefiles, Reporting metrics, Configuration units
VHDL Configuration Management (Cont)

Makefiles are the most effective way: if a file is found to be older than its dependencies, it is recompiled. This can be done in submission scripts for regressions, ensuring everything is up to date.
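The make dependency rule reduces to a timestamp comparison, sketched below with plain numbers standing in for file modification times.

```python
# Sketch of the make rule: recompile a target when it is older than
# any of its dependencies (timestamps here are plain numbers, not
# real file mtimes).
def needs_recompile(target_mtime, dep_mtimes):
    return any(target_mtime < d for d in dep_mtimes)

# Compiled unit from yesterday; one dependency was edited today.
print(needs_recompile(100, [90, 120]))  # True: a dependency is newer
print(needs_recompile(100, [90, 95]))   # False: target is up to date
```

A regression submission script can run this check (or simply invoke make) before launching any simulation, so stale compiled units never reach the run queue.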
VHDL Configuration Management (Cont)

Reporting metrics: the environment should report the name (and version) of the files in use. This can be done within the testcase run by using asserts, or in combination with a makefile.
Configuration Units

Use configuration declarations, which bind architectures to entities. Each testcase uses whichever configuration it needs. Use a configuration of configurations at the top level.
Output File Management

Simulations create output files: a log file and a wave file. When running massively parallel jobs, there is a problem with collisions, especially if hard-coded names are used. You want the ability to create unique names:
<tc_name>.<random seed>.<log | wav>
Scripts can create the names (utilizing the simulator's command-line options), or you can use Verilog/VHDL conventions to create them:
Verilog: use the manifest file and parameters with a script.
VHDL: use generics and pass in values from a script.
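The naming convention can be generated by a small script like this sketch; the function name and the decision to record the seed are assumptions.

```python
import random

# Sketch of the <tc_name>.<seed>.<ext> naming convention so that
# massively parallel runs never collide on output files.
def output_names(tc_name, seed=None):
    seed = random.getrandbits(32) if seed is None else seed
    return {
        "log": f"{tc_name}.{seed}.log",
        "wav": f"{tc_name}.{seed}.wav",
        "seed": seed,  # record the seed so the run can be reproduced
    }

names = output_names("fifo_overflow", seed=3735928559)
print(names["log"])  # fifo_overflow.3735928559.log
```

The script would then pass these names to the simulator through its command-line options (or through parameters/generics, as noted above).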
Regression

A regression suite ensures that modifications to the design remain backward compatible with previously verified functionality.
Topics: Running Regressions, Regression Management
Running Regressions

Regressions must be run at regular intervals, typically nightly. Tests are added to a master list called the regression suite. If the suite is too large to run overnight, it can be split up in different ways:
Two lists: one to run nightly, and one to run over the weekend (which includes the nightly run).
Include a "fast mode": pre-configure things, or enable only certain functions in the stimulus models.
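The two-list split can be sketched as a filter over the suite using per-test run-time estimates; the test names, run times, and nightly budget below are all made up for illustration.

```python
# Sketch of splitting one regression suite into a nightly list and a
# weekend list, where the weekend list includes the nightly run.
suite = [
    ("smoke_test", 5),      # (testcase name, estimated minutes)
    ("fifo_stress", 30),
    ("full_random", 480),
]

NIGHTLY_BUDGET = 60  # minutes; an assumed per-night limit per test

nightly = [name for name, mins in suite if mins <= NIGHTLY_BUDGET]
weekend = [name for name, mins in suite]  # weekend runs everything
print(nightly, weekend)
```

Real flows often budget the total run time of the nightly list rather than a per-test cap; this sketch only shows the list-membership idea.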
Regression Management

Ensure you're using a version that is regression certified!
Topics: Simulation Run Time, Automatic classification of regressions
Simulation Run Time

You want to maximize simulation resources and minimize cycles wasted on runaway simulations. Use a time bomb: it should go off after enough time has elapsed to allow all operations to complete, and the time can be reset on an event. When it goes off, it flags a failure condition, but it cannot determine whether the condition is due to a deadlock, a runaway, or a successful simulation. Also consider wasted simulation cycles: a run of 100 us when the test only needs to run to 10 us.
Simulation Run Time (continued)

You don't want to run any longer than necessary, and randomization causes run times to vary. Create a BFM for clock generation: it runs for a time specified by the testcase, then stops the clocks (thus shutting down the simulation); this is the time bomb. Coordinate it with the generators and monitors: if the generator is done sending in transactions and the checkers are done validating output, stop the simulation. If the run time is reached and the generator is not done sending in transactions, or the monitors still have checking to perform, flag a failure.
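The end-of-test decision described here reduces to three inputs, sketched below. The function and flag names mirror the coordination between the time bomb, the generator, and the checkers, and are illustrative only.

```python
# Sketch of the end-of-test decision: stop early on success, flag a
# failure when the time bomb fires with work still pending.
def classify_run(generator_done, checkers_done, time_bomb_fired):
    if generator_done and checkers_done:
        return "pass"  # stop the clocks now: no wasted cycles
    if time_bomb_fired:
        return "fail"  # deadlock, runaway, or checking unfinished
    return "running"   # keep simulating

print(classify_run(True, True, False))   # pass
print(classify_run(False, False, True))  # fail
print(classify_run(True, False, False))  # running
```

Because "pass" is checked before the time bomb, a test that finishes all its work stops immediately instead of burning cycles until the timeout.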
Automatic classification of regressions

Using an output-log scan script, determine the success or failure of each test. For any given regression suite, a summary could be e-mailed to everyone on the team and used for status and discussions in team meetings. It should include: time/date, design environment (unit name), testcase name, random number seed, simulation time, real time (wall clock), the system run on, operating system version, memory in the system, and paging space on the system.
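A per-run summary record covering most of these fields can be sketched as follows. Values a real flow would query from the simulator or job scheduler (simulation time, wall-clock time) are passed in as placeholders, and memory/paging fields are omitted since querying them is platform-specific.

```python
import platform
import time

# Sketch of the per-run summary record; field names are illustrative.
def run_summary(unit, testcase, seed, sim_time_ns, wall_secs):
    return {
        "date": time.strftime("%Y-%m-%d %H:%M"),
        "unit": unit,              # design environment (unit name)
        "testcase": testcase,
        "seed": seed,              # random number seed
        "sim_time_ns": sim_time_ns,
        "wall_secs": wall_secs,    # real time (wall clock)
        "host": platform.node(),   # system run on
        "os": platform.platform(), # operating system version
    }

s = run_summary("alu", "smoke_test", 42, 10_000, 37.5)
print(s["testcase"], s["seed"])
```

A regression script would collect one such record per test and format the set into the e-mailed summary.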
Behavioral Models

Topics: Benefits of Behavioral Models, Behavioral vs. Synthesizable Models, Example of Behavioral Modeling, Characteristics of Behavioral Models, Modeling Reset, Writing Good Behavioral Models, Behavioral Models are Faster, Demonstrating Equivalence
Benefits of Behavioral Models

Audit of the specification: missing functional details of the specification are uncovered earlier, not during debug of the RTL.
Development and debug of testbenches in parallel with the RTL: you don't have to wait for unstable RTL, and since the model is behavioral, debug turnaround is faster. When the RTL is available, you already have a debugged regression suite.
System verification can start earlier, with the same benefits as above; in addition, if the behavioral model is validated as equivalent to the RTL, system tests will run faster.
The model can also be used as an evaluation tool for customers.
Behavioral vs. Synthesizable Models

Behavioral models may not be synthesizable, and they are not just for testbenches. They describe functionality, not implementation specifics, and they require a different mindset: focus on functionality. When implementation starts driving the behavioral model, you are writing "RTL++".
Characteristics of Behavioral Models

They are partitioned for maintenance. RTL is partitioned for synthesis, usually decided along implementation lines, which produces a wide and shallow structure. A behavioral model is partitioned at the whim of the author, usually along main functional boundaries; this avoids one large file and allows multiple people to work on it concurrently, producing a narrow and shallow structure.
Characteristics of Behavioral Models (Cont)

A behavioral model should not have a clock: it should only perform computations when necessary. It should not contain an FSM: that is a synchronous-design implementation of control logic, not behavior. Data should remain at a high level of abstraction: these structures are designed for ease of use, not for implementation. Use BFMs for the physical bus connections.
Modeling Reset

Behavioral models must reset their variables and the state of the model, so partition the model with reset in mind. Have one process monitor the physical reset signal(s). It can communicate with all other portions of the model using procedures, or using a boolean.
Writing Good Behavioral Models

Models that are not done correctly get thrown out, forfeiting the benefits of behavioral modeling: the RTL is used as soon as it is available. Writing good models requires specialized skills: think at a higher level of abstraction, focus on the relevant functional details, and do not let the testbench dictate what is functionally relevant.
Behavioral Models are Faster

Faster to write: there are no implementation constraints (timing, synthesis), and there are fewer lines of code, since you only have to worry about function.
Faster to debug: fewer statements mean fewer bugs.
Faster to simulate: the model is not sensitive to every clock cycle, just to what is pertinent for the function.
Faster to bring to "market": since everything is faster, you can start using it for system-level tests earlier than the RTL will be available.
Cost of Behavioral Models

Behavioral models require additional resources: someone has to write them! This may mean the RTL is delayed (if a designer writes the model as a deliverable), or you can hire additional resources. Maintenance also requires additional effort: supporting the model means ongoing maintenance, and when the architecture changes, the model must reflect it.
Demonstrating Equivalence

How do you know the RTL meets the function that was described in the behavioral model? You can't "prove" it using mathematics, and you can't use equivalence tools, since they can't take behavioral models as inputs. Instead, use the same test suite that was used against the behavioral model against the RTL. This is most applicable to black-box testing; most system-level tests are either black-box or gray-box.