© Vijay, PSU, 2000. CSE 497A, Spring 2002: Functional Verification, Lecture 2/3. Vijaykrishnan Narayanan.




CSE 497A, Spring 2002

Functional Verification, Lecture 2/3

Vijaykrishnan Narayanan

Course Administration

Instructor: Vijay Narayanan ([email protected]), 229 Pond Lab. Office Hours: T 10:00-11:00; W 1:00-2:15
Tool Support: Jooheung Lee ([email protected])
TA: TBA
Laboratory: 101 Pond Lab
Materials: www.cse.psu.edu/~vijay/verify/
Texts:

» J. Bergeron. Writing Testbenches: Functional Verification of HDL Models. Kluwer Academic Publishers.

» Class notes - on the web

Grading

Grade breakdown:

» Midterm Exam: 20%
» Final Exam: 25%
» Verification Projects (~4): 40%
» Homework (~3): 15%

No late homeworks/project reports will be accepted. Grades will be posted on the course home page.

» Written/email requests for changes to grades

» April 25 deadline to correct scores

Secret of Verification (Verification Mindset)


The Art of Verification

Two simple questions

Am I driving all possible input scenarios?

How will I know when it fails?

Three Simulation Commandments

Thou shalt stress thine logic harder than it will ever be stressed again

Thou shalt place checking upon all things

Thou shalt not move onto a higher platform until the bug rate has dropped off

General Simulation Environment

[Diagram: testcases (written in C/C++, HDL testbenches, Specman e, or Synopsys VERA) pass through a compiler (not always required) into a driver. The design source (VHDL/Verilog) passes through an event-simulation, cycle-simulation, or emulator compiler to produce a model. The simulator (event simulator, cycle simulator, or emulator), together with environment data (initialization, run-time requirements), runs the model and produces testcase results.]

[Diagram: simulation activities and roles. Activities: transfer testcase, configure environment, run foreground/background simulation, release environment, regress fails, debug fails, create/answer/redirect defects, verify defect fixes, specify and monitor batch simulation, debug environment, view trace, release model, define project goals, project status report. Roles: logic designer, environment developer, model builder, project manager, verification engineer.]

Some lingo

Facilities: a general term for named wires (or signals) and latches. Facilities feed gates (and/or/nand/nor/invert, etc.), which feed other facilities.

EDA: Electronic Design Automation; the tool-vendor industry.

More lingo

Behavioral: Code written to perform the function of logic on the interface of the design-under-test.

Macro: 1. A behavioral. 2. A piece of logic.

Driver: Code written to manipulate the inputs of the design-under-test. The driver understands the interface protocols.

Checker: Code written to verify the outputs of the design-under-test. A checker may have some knowledge of what the driver has done. A checker must also verify interface protocol compliance.

Still more lingo

Snoop/Monitor: Code that watches interfaces or internal signals to help the checkers perform correctly. Also used to help drivers be more devious.

Architecture: Design criteria as seen by the customer. The design's architecture is specified in documents (e.g., POPS, Book 4, InfiniBand, etc.), and the design must be compliant with this specification.

Microarchitecture: The design's implementation. Microarchitecture refers to the constructs used in the design, such as pipelines, caches, etc.

Escape: An error found on the test floor, having escaped the verification process.

Typical Verification Diagram

[Diagram: the DUT (a bridge chip) sits on a bus. Stimulus (gen packet, drive packet, post packet) varies device types, FSMs, latency conditions, address transactions, and sequence transitions. A checking framework with a scoreboard (xlate/predict) checks protocol, packet structure (header, payload), sequences, conversations, and errors. Coverage data is collected throughout.]
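The scoreboard in the diagram predicts what should come out of the DUT from what went in. Below is a minimal Python sketch of that idea; the xlate rule (set a header bit, reverse the payload) is purely hypothetical and stands in for whatever translation the real bridge performs.

```python
# Illustrative scoreboard: predict outputs from inputs, compare on observe.

from collections import deque

def xlate(packet):
    """Predict the DUT's translation (invented rule for the sketch)."""
    return {"hdr": packet["hdr"] | 0x80, "payload": packet["payload"][::-1]}

class Scoreboard:
    def __init__(self):
        self.expected = deque()      # in-order predictions awaiting outputs

    def post(self, in_packet):       # called by the stimulus side
        self.expected.append(xlate(in_packet))

    def observe(self, out_packet):   # called by the output monitor
        want = self.expected.popleft()
        assert out_packet == want, f"scoreboard miss: {out_packet} != {want}"

sb = Scoreboard()
sb.post({"hdr": 0x01, "payload": b"\x01\x02"})
sb.observe({"hdr": 0x81, "payload": b"\x02\x01"})   # matches the prediction
```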

Verification Cycle

Create testplan → Develop environment → Debug hardware → Regression → Fabrication → Hardware debug → Escape analysis

Verification Testplan

Team leaders work with design leaders to create a verification testplan. The testplan includes:
» Schedule
» Specific tests and methods by simulation level
» Required tools
» Input criteria
» Completion criteria
» What is expected to be found with each test/level
» What's not covered by each test/level

Verification is a process used to demonstrate the functional correctness of a design. Also called logic verification or simulation.

Reconvergence Model

A conceptual representation of the verification process. The most important question:
– What are you verifying?

[Diagram: a Transformation path and a Verification path diverge from a common origin and reconverge at a common point.]

What is a testbench?

A "testbench" usually refers to the code used to create a pre-determined input sequence to a design, then optionally observe the response.
» Generic term used differently across industry
» Always refers to a testcase
» Most commonly (and appropriately), a testbench refers to code written (VHDL, Verilog, etc.) at the top level of the hierarchy. The testbench is often simple, but may have some elements of randomness.

A testbench is a completely closed system:
» No inputs or outputs
» Effectively a model of the universe as far as the design is concerned

Verification challenge:
» What input patterns to supply to the design under verification, and what output is expected from a properly working design


Show Multiplexer Testbench
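In the spirit of the multiplexer testbench shown in class, here is a minimal sketch in Python rather than the course's VHDL: a behavioral 2:1 mux, an independently written reference model, and a loop that drives all input permutations and checks every output.

```python
# Minimal "testbench" sketch for a 2:1 multiplexer.

def mux2(a, b, sel):
    """Behavioral 2:1 mux: output follows a when sel is 0, b when sel is 1."""
    return b if sel else a

def reference(a, b, sel):
    """Independent reference model, deliberately written a different way."""
    return a * (1 - sel) + b * sel

def testbench():
    """Drive all input permutations and check each output."""
    failures = 0
    for a in (0, 1):
        for b in (0, 1):
            for sel in (0, 1):
                if mux2(a, b, sel) != reference(a, b, sel):
                    failures += 1
    return failures

assert testbench() == 0   # all 8 input scenarios pass
```

This tiny design is one of the few where the "drive all possible input scenarios" commandment can be satisfied literally.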

Importance of Verification

Most books focus on syntax, semantics, and the RTL subset.
» Given the amount of literature on writing synthesizable code vs. writing verification testbenches, one would think that the former is the more daunting task. Experience proves otherwise.

70% of design effort goes to verification.
» Properly staffed design teams have dedicated verification engineers.
» Verification engineers usually outnumber designers 2-to-1.

80% of all written code is in the verification environment.

The Line Delete Escape

Escape: a problem that is found on the test floor and therefore has escaped the verification process.

The Line Delete escape was a problem on the H2 machine (S/390 Bipolar, 1991). This escape shows an example of how a verification engineer needs to think.

The Line Delete Escape (pg 2)

Line Delete is a method of circumventing bad cells of a large memory array or cache array. An array mapping allows defective cells to be removed from the usable space.

The Line Delete Escape (pg 3)

If a line in an array has multiple bad bits (a single bad bit usually goes unnoticed due to ECC error correction codes), the line can be taken "out of service". In the array pictured, row 05 has a bad congruence-class entry.

[Diagram: a memory array with many rows; row 05 contains the bad entry.]

The Line Delete Escape (pg 4)

Data enters ECC-creation logic prior to storage into the array. When read out, the ECC logic corrects single-bit errors, tags Uncorrectable Errors (UEs), and increments a counter corresponding to the row and congruence class.

[Diagram: Data in → ECC logic → array (row 05 bad) → ECC logic → Data out, with UE counters fed by the read-side ECC logic.]

The Line Delete Escape (pg 5)

When a preset threshold of UEs is detected from an array cell, the service controller is informed that a line delete operation is needed.

[Diagram: as before, with the UE counters compared against a threshold that notifies the service controller.]

The Line Delete Escape (pg 6)

The service controller can update the configuration registers, ordering a line delete to occur. When the configuration registers are written, the line delete controls are engaged and writes to row 5, congruence class 'C' cease.

However, because three other cells remain good in this congruence class, the sole repercussion of the line delete is a slight decline in performance.

[Diagram: the service controller writes the storage controller's configuration registers, which drive the line-delete control on the array.]

The Line Delete Escape (pg 7)

[Diagram: the complete line-delete datapath from the previous pages.]

How would we test this logic? What must occur in the testcase? What checking must we implement?
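One way to start answering these questions is to model the mechanism and enumerate what the testcase must provoke and check. Here is a hypothetical Python model; the threshold value, method names, and interfaces are all invented for the sketch, not taken from the actual H2 design.

```python
# Hypothetical model of the line-delete mechanism: a testcase must inject
# repeated UEs on one line, see the threshold crossed, and then confirm
# writes to that line cease while the rest of the row stays in service.

class LineDeleteModel:
    def __init__(self, threshold=3):
        self.threshold = threshold
        self.ue_counts = {}          # (row, congruence_class) -> UE count
        self.deleted = set()         # lines taken out of service

    def report_ue(self, row, cc):
        """ECC logic reports an uncorrectable error for (row, cc)."""
        key = (row, cc)
        self.ue_counts[key] = self.ue_counts.get(key, 0) + 1
        if self.ue_counts[key] >= self.threshold:
            self.deleted.add(key)    # service controller orders line delete

    def write_allowed(self, row, cc):
        return (row, cc) not in self.deleted

m = LineDeleteModel(threshold=3)
for _ in range(3):
    m.report_ue(0x05, "C")               # the bad cell from the slides
assert not m.write_allowed(0x05, "C")    # deleted line no longer written
assert m.write_allowed(0x05, "A")        # other classes in row 5 still good
```

Checking a testcase against a model like this covers both sides: that the delete engages at the threshold, and that it engages only for the offending line.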


Verification is on critical path


Want to minimize Verification Time!

Ways to reduce verification time

Verification time can be reduced through:
» Parallelism: add more resources
» Abstraction: work at a higher level of abstraction (e.g., C vs. assembly)
– Beware though: this means a reduction in control
» Automation: tools to automate standard processes
– Requires standard processes
– Not all processes can be automated

Hierarchical Design

System → Chip → Unit → Macro → ...

Allows the design team to break the system down into logical and comprehensible components. Also allows for repeatable components.

Ways to reduce verification time

Verification time can be reduced through:
» Parallelism: add more resources
» Abstraction: work at a higher level of abstraction (e.g., C vs. assembly)
– Beware though: this means a reduction in control and additional training
– Vera and e are examples of verification languages
» Automation: tools to automate standard processes
– Requires standard processes
– Not all processes can be automated

Human Factor in Verification Process

An individual (or group of individuals) must interpret the specification and transform it into correct function.

[Diagram: Specification → Interpretation → RTL Coding, with Verification reconverging on the Specification.]

Need for Independent Verification

The verification engineer should not be an individual who participated in the logic design of the DUT.

Blinders: if a designer didn't think of a failing scenario when creating the logic, how will he/she create a test for that case?

However, a designer should do some verification on his/her design before exposing it to the verification team.

An independent verification engineer needs to understand the intended function and the interface protocols, but not necessarily the implementation.

Verification Do's and Don'ts

DO:
Talk to designers about the function and understand the design first, but then try to think of situations the designer might have missed.
Focus on exotic scenarios and situations.
– e.g., try to fill all queues even though the design was written to avoid buffer-full conditions
Focus on multiple events at the same time.

Verification Do's and Don'ts (continued)

Try everything that is not explicitly forbidden.
Spend time thinking about all the pieces that you need to verify.
Talk to "other" designers about the signals that interface to your design-under-test.

DON'T:
Rely on the designer's word for the input/output specification.
Allow RIT criteria to bend for the sake of schedule.

Ways to reduce human-introduced errors

Automation
» Take human intervention out of the process
Poka-Yoke
» Make human intervention fool-proof
Redundancy
» Have two individuals (or groups) check each other's work

Automation

The obvious way to eliminate human-introduced errors: take the human out.
» Good in concept
» Reality dictates that this is not feasible
– Processes are not defined well enough
– Processes require human ingenuity and creativity

Poka-Yoke

Term coined in Total Quality Management circles.
Means to "mistake-proof" the human intervention.
Typically the last step toward complete automation.
Same pitfalls as automation: verification remains an art; it does not yield itself to well-defined steps.

Redundancy

Duplicate every transformation.
» Every transformation made by a human is either:
– Verified by another individual, or
– Performed twice, completely and separately, with the outcomes compared to verify that both produced the same or equivalent result

The simplest approach. The most costly, but still cheaper than redesign and replacement of a defective product.

The designer should NOT be in charge of verification!

What is being verified?

Choosing a common origin and reconvergence point determines what is being verified and what type of method to use.

The following types of verification all have different origin and reconvergence points:
» Formal Verification
» Model Checking
» Functional Verification
» Testbench Generators

Formal Verification

Once the end points of formal verification's reconvergence paths are understood, you know exactly what is being verified.

Two types of formal verification:
» Equivalence checking
» Model checking

Equivalence Checking

Compares two models to see if they are equivalent:
» Netlists before and after modifications
» Netlist and RTL code (verify synthesis)
» RTL and RTL (HDL modifications)
» Post-synthesis gates and post-physical-design gates
– Adding of scan latches, clock tree buffers

Proves mathematically that the origin and output are logically equivalent.
» Compares boolean and sequential logic functions, not the mapping of the functions to a specific technology.

Why do verification of an automated synthesis tool?
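A toy equivalence check makes the idea concrete: for a small cone of logic, exhaust all input patterns and compare two functionally identical but structurally different models. Real equivalence checkers use BDD/SAT methods rather than enumeration, and the functions below are invented for the sketch.

```python
# Toy equivalence check over all input patterns of a small logic cone.

from itertools import product

def rtl_version(a, b, c):
    """'RTL' description of a 2:1 mux selected by a."""
    return (a and b) or (not a and c)

def gate_version(a, b, c):
    """'Post-synthesis' netlist: the redundant consensus term (b and c)
    changes the structure but not the function."""
    return (a and b) or ((not a) and c) or (b and c)

def equivalent(f, g, n_inputs):
    """Exhaustively compare two combinational functions."""
    return all(f(*bits) == g(*bits)
               for bits in product((False, True), repeat=n_inputs))

assert equivalent(rtl_version, gate_version, 3)
```

The two descriptions differ structurally yet compute the same boolean function, which is exactly the property an equivalence checker proves without caring how the logic is mapped.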

Equivalence Reconvergence Model

[Diagram: RTL is transformed into Gates by Synthesis; the equivalence Check reconverges RTL and Gates.]

Model Checking

A form of formal verification. Characteristics of a design are formally proven or disproved:
» Find unreachable states of a state machine
» Determine whether deadlock conditions can occur
» Example: if ALE is asserted, either the DTACK or ABORT signal will be asserted

Looks for generic problems or violations of user-defined rules about the behavior of the design.
» Knowing which assertions to prove is the major difficulty.

Steps in Model Checking

Model the system implementation as a finite state machine.
Specify the desired behavior as a set of temporal-logic formulas.
The model checking algorithm scans all possible states and execution paths in an attempt to find a counter-example to the formulas.
Check these rules:
» Prove that all states are reachable
» Prove the absence of deadlocks
Unlike simulation-based verification, no test cases are required.
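The reachability and deadlock rules above can be demonstrated on a tiny explicit-state example. The FSM here is invented for illustration, and real model checkers work symbolically rather than by listing states.

```python
# Enumerate reachable states of a small FSM, then check for unreachable
# states and deadlocks (reachable states with no successor).

def successors(state):
    # a 3-state handshake FSM plus a state no transition ever enters
    return {"IDLE": ["REQ"], "REQ": ["ACK"], "ACK": ["IDLE"], "DEAD": []}[state]

all_states = {"IDLE", "REQ", "ACK", "DEAD"}

def reachable(start):
    """Explicit-state search from the start state."""
    seen, frontier = {start}, [start]
    while frontier:
        s = frontier.pop()
        for nxt in successors(s):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return seen

r = reachable("IDLE")
unreachable = all_states - r
deadlocks = {s for s in r if not successors(s)}
assert unreachable == {"DEAD"}   # "DEAD" can never be entered
assert deadlocks == set()        # no reachable state lacks a successor
```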

Problems with Model Checking

Automatic verification becomes hard with an increasing number of states.
Symbolic model checking explores a larger set of states concurrently: on the order of 10^100 states (more than the number of protons in the universe), yet that still does not go far beyond 300 bits of state variables.
That is absurdly small for the millions of transistors in current microprocessors.
IBM RuleBase (Feb 7) is a symbolic model checking tool.

Model Checking Reconvergence Model

[Diagram: the Specification is interpreted into Assertions; Model Checking reconverges the Assertions with the RTL.]

Functional Verification

Verifies design intent.
» Without it, one must trust that the transformation of a specification to RTL was performed correctly.

Can show the presence of bugs, but cannot prove their absence.

Functional Reconvergence Model

[Diagram: Functional Verification reconverges the RTL with the Specification.]

Testbench Generators

Tools that generate stimulus to exercise code or expose bugs.
Designer input is still required.
RTL code is the origin, and there is no reconvergence point.
The verification engineer is left to determine whether the testbench applies valid stimulus.
If used with parameters, the engineer can control the generator in order to focus the testbenches on more specific scenarios.

Testbench Generation Reconvergence Model

[Diagram: Testbench Generation produces testbenches from the RTL, guided by code coverage/proof metrics; there is no reconvergence point.]

Functional Verification Approaches

» Black-box approach
» White-box approach
» Grey-box approach

Black-Box

[Diagram: a black box (some piece of logic design written in VHDL) with inputs and outputs.]

• The black box has inputs and outputs, and performs some function.
• The function may be well documented... or not.
• To verify a black box, you need to understand the function and be able to predict the outputs based on the inputs.
• The black box can be a full system, a chip, a unit of a chip, or a single macro.
• Black-box verification can start early.

White-Box

White-box verification means that the internal facilities are visible and utilized by the testbench stimulus.
» Quickly set up interesting cases
» Tightly integrated with the implementation
» Changes with the implementation

Examples: unit/module-level verification

Grey-Box

Grey-box verification means that a limited number of internal facilities are utilized in a mostly black-box environment.

Example: most environments! Prediction of correct results on the interface is occasionally impossible without viewing an internal signal.

Perfect Verification

To fully verify a black box, you must show that the logic works correctly for all combinations of inputs. This entails:
» Driving all permutations on the input lines
» Checking for proper results in all cases

Full verification is not practical on large designs, but the principles are valid across all verification.

Reality Check

Macro verification across an entire system is not feasible for the business:
» There may be over 400 macros on a chip, which would require about 200 verification engineers!
» That number of skilled verification engineers does not exist.
» The business can't support the development expense.

Verification leaders must make reasonable trade-offs:
» Concentrate on the unit level
» Designer-level verification on the riskiest macros


Typical Bug rates per level

Cost of Verification

A necessary evil:
» Always takes too long and costs too much
» Verification does not generate revenue

Yet indispensable:
» To create revenue, the design must be functionally correct and provide benefits to the customer
» Proper functional verification demonstrates trustworthiness of the design

Verification And Design Reuse

You won't use what you don't trust. How to trust it? Verify it.

For reuse, designs must be verified with stricter requirements:
» All claims, possible combinations, and uses must be verified
» Not just how it is used in a specific environment

When is Verification Done?

Never truly done on complex designs.
Verification can only show the presence of errors, not their absence.
Given enough time, errors will be uncovered.
The question: is the error likely to be severe enough to warrant the effort spent to find it?

When is Verification Done? (cont.)

Verification is similar to statistical hypothesis testing.
– Hypothesis: is the design functionally correct?

Hypothesis Matrix

              Errors found              No errors found
Bad design                              Type II (false positive)
Good design   Type I (false negative)

Tape-Out Criteria

A checklist of items that must be completed before tape-out: verification items along with physical/circuit design criteria, etc.

Verification criteria are based on:
– Function tested
– Bug rates
– Coverage data
– Clean regression
– Time to market

Verification vs. Test

The two are often confused.
The purpose of test is to verify that the design was manufactured properly.
Verification ensures that the design meets its functional intent.

Verification and Test Reconvergence Model

[Diagram: HW Design transforms the Specification into a netlist, which Fabrication turns into silicon. Verification reconverges the netlist with the Specification; Test reconverges the silicon with the netlist.]


Verification Tools

Automation improves the efficiency and reliability of the verification process

Some tools, such as a simulator, are essential. Others automate tedious tasks and increase confidence in the outcome.

It is not necessary to use all the tools.

Verification Tools

Improve efficiency (like a spell checker), improve reliability, and automate portions of the verification process.
Some tools, such as simulators, are essential.
Some tools automate the most tedious tasks and increase confidence in the outcome:
» Code coverage tools
» Linting tools
» Help ensure that a Type II mistake does not occur

Verification Tools

» Linting Tools
» Simulators
» Third-Party Models
» Waveform Viewers
» Code Coverage
» Verification Languages (Non-RTL)
» Revision Control
» Issue Tracking
» Metrics

Linting Tools

lint is a UNIX C utility program:
» Parses a C program
» Reports questionable uses
» Identifies common mistakes
» Makes finding those mistakes quick and easy

Problems lint identifies:
» Mismatched types
» Mismatched arguments in function calls (either the number or the type)

The UNIX C lint program

Attempts to detect features in C program files that are likely to be bugs, non-portable, or wasteful.
Checks type usage more strictly than a compiler.
Checks for:
» Unreachable statements
» Loops not entered at the top
» Variables declared but not used
» Logical expressions whose value is constant
» Functions that return values in some places but not others
» Functions called with a varying number or type of arguments
» Functions whose value is not used
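The flavor of these static checks can be illustrated with a miniature "lint" for Python that finds variables assigned but never used. This is a sketch of the idea using the standard ast module, not a real lint implementation.

```python
# Toy static check: find assigned-but-unused variables without running
# the code, the way lint inspects C source without executing it.

import ast

def unused_assignments(source):
    """Return names that are assigned (Store) but never read (Load)."""
    tree = ast.parse(source)
    assigned, used = set(), set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Name):
            if isinstance(node.ctx, ast.Store):
                assigned.add(node.id)
            elif isinstance(node.ctx, ast.Load):
                used.add(node.id)
    return assigned - used

code = "x = 1\ny = 2\nprint(x)"
assert unused_assignments(code) == {"y"}   # y is set but never read
```

Like lint, the check needs no stimulus and no expected output; the defect is deduced entirely from the source text.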

Advantages of Lint Tools

Know about problems prior to execution (simulation, for VHDL code):
» Checks are entirely static
» Do not require stimulus
» Do not need to know the expected output
» Can be used to enforce coding guidelines and naming conventions

Pitfalls

Lint tools can only find problems that can be statically deduced.
They cannot determine whether an algorithm is correct.
They cannot determine whether dataflow is correct.
They are often too paranoid, erring on the side of caution:
– A good design may still have errors reported (a false alarm), so the output needs filtering.

Check and fix problems as you go; don't wait until the entire model/code is complete.

Linting VHDL source code

VHDL is strongly typed, so it does not need linting as much as Verilog (which can assign bit vectors of different lengths to each other).

An area of common problems is the use of STD_LOGIC.

VHDL Example

library ieee;
use ieee.std_logic_1164.all;

entity my_entity is
  port (my_input : in std_logic);
end my_entity;

architecture sample of my_entity is
  signal s1 : std_logic;
  signal sl : std_logic;
begin
  stat1: s1 <= my_input;
  stat2: s1 <= not my_input;
end sample;

Warning: file x.vhd: Signal "s1" is multiply defined
Warning: file x.vhd: Signal "sl" has no drivers

Naming Conventions

Use a naming convention for signals with multiple drivers.
Multiply-driven signals will still produce warning messages, but with a naming convention those warnings can be recognized and safely ignored.


Cadence VHDL Lint Tool

HAL Checks

Some of the classes of errors that the HAL tool checks for include:
» Interface inconsistency: unconnected ports; incorrect number or type of task/function arguments; incorrect signal assignments to input ports; unused or undriven variables; undriven primary outputs; unused tasks/functions/parameters; event variables that are never triggered
» 2-state versus 4-state issues: conditional expressions that use x/z incorrectly; case equality (===) that is treated as equality (==); incorrect assignment of x/z values
» Expression inconsistency: unequal operand lengths; real/time values used in expressions; incorrect rounding/truncation
» Case statement inconsistency: case expressions that contain x or z logic; case expressions that are out of range; correct use of parallel_case and full_case constructs
» Range and index errors: single-bit memory words; bit/part selects that are out of range; ranged ports that are re-declared

Code reviews

Objective: identify functional and coding-style errors prior to functional verification and simulation.
Source code is reviewed by one or more reviewers.
Goal: identify problems with the code that an automated tool would not identify.

Simulators

Simulators are the most common and familiar verification tool.
Simulation alone is never the goal of an industrial project.
Simulators attempt to create an artificial universe that mimics the environment the real design will see.
They are only an approximation of reality:
» Digital values in std_logic have 9 values
» In reality, a signal is a continuous value between GND and Vdd

Simulators (cont.)

Simulators execute a description of the design.
The description is limited to a well-defined language with precise semantics.
Simulators are not a static tool: they require the user to set up an environment in which the design will find itself. This setup, which provides inputs and monitors results, is often called a testbench.

Simulators (cont.)

Simulation outputs are validated externally against design intent (the specification).

Two types:
» Event-based
» Cycle-based

Event-Based Simulators

Event-based simulators are driven by events, an attempt to increase the simulated time per unit of wall time.
Outputs are a function of inputs:
» The outputs change only when the inputs do
» The simulator moves simulation time ahead to the next time at which something occurs
» The event is an input changing
» That event causes the simulator to re-evaluate and calculate new outputs
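A minimal event-driven kernel in Python shows the mechanics: a time-ordered event queue, and re-evaluation only when a signal actually changes. The AND-gate "design" is invented for the sketch.

```python
# Sketch of an event-driven simulator kernel.

import heapq

def simulate(events, evaluate):
    """events: list of (time, signal, value); evaluate: called on change."""
    queue = list(events)
    heapq.heapify(queue)                     # time-ordered event queue
    signals, trace = {}, []
    while queue:
        t, sig, val = heapq.heappop(queue)
        if signals.get(sig) != val:          # only a real change is an event
            signals[sig] = val
            trace.append((t, evaluate(signals)))
    return trace

# an AND gate, re-evaluated only when input a or b changes
and_gate = lambda s: s.get("a", 0) & s.get("b", 0)
trace = simulate([(0, "a", 1), (5, "b", 1), (9, "b", 1), (12, "a", 0)],
                 and_gate)
assert trace == [(0, 0), (5, 1), (12, 0)]    # the t=9 event caused no work
```

Note that the t=9 event writes the same value b already holds, so the kernel skips it entirely, which is exactly where event-based simulators save time.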

Cycle-Based Simulators

Simulation is based on clock cycles, not events.
» All combinational functions are collapsed into a single operation.

Cycle-based simulators contain no timing or delay information:
» They assume the entire design meets setup and hold time for all flip-flops
» Timing is usually verified by a static timing analyzer

They can handle only synchronous circuits:
» The only 'event' is the active edge of the clock
» All other inputs are aligned with the clock (they cannot handle asynchronous events)
» A Moore machine's state changes whenever the clock changes; a Mealy machine's outputs also depend on inputs, which can change asynchronously

Much faster than event-based simulation.
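By contrast, a cycle-based kernel evaluates one collapsed next-state function per clock edge, with no event queue and no timing. A Python sketch, with an invented 2-bit counter standing in for the design:

```python
# Sketch of a cycle-based simulator kernel: one evaluation per clock edge.

def cycle_sim(next_state, inputs_per_cycle, state):
    """Evaluate the collapsed combinational logic once per clock edge."""
    outputs = []
    for inputs in inputs_per_cycle:
        state, out = next_state(state, inputs)
        outputs.append(out)
    return outputs

# a 2-bit counter with enable, as a single collapsed function
def counter(state, en):
    nxt = (state + 1) & 0b11 if en else state
    return nxt, nxt

assert cycle_sim(counter, [1, 1, 0, 1, 1], 0) == [1, 2, 2, 3, 0]
```

Because nothing is scheduled between edges, the per-cycle cost is one function call, which is the source of the speedup over event-based simulation.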

Types of Simulators (cont.)

Simulation farm: multiple computers are used in parallel for simulation.

Acceleration engines/emulators: Quickturn, IKOS, AXIS, ...; custom-designed for simulation speed (parallelized).
Acceleration vs. emulation:
– True emulation connects to some real, in-line hardware
– Running real software eliminates the need for special testcases

Speed Comparison

Influencing factors:
» Hardware platform: frequency, memory, ...
» Model content: size, activity, ...
» Interaction with the environment
» Model load time, test patterns, network utilization

Relative speed of different simulators:
Event simulator: 1
Cycle simulator: 20
Event-driven cycle simulator: 50
Acceleration: 1,000
Emulation: 100,000

Speed: What is fast?

Cycle sim for one processor chip: 1 sec of real time = 6 months
Sim farm with a few hundred computers: 1 sec of real time = ~1 day
Accelerator/emulator: 1 sec of real time = ~1 hour

Co-Simulation

Co-simulators are combinations of event, cycle, and other simulators (acceleration, emulation).
– Both simulators progress along time in lockstep fashion.

Performance is decreased due to inter-tool communication.
Ambiguities arise during translation from one simulator to the other:
» Verilog's 128 possible states to VHDL's 9
» Analog's current and voltage into digital's logic value and strength

Third-Party Models

Many designs use off-the-shelf parts. To verify such a design, you must obtain models of these parts, often from a third party.
Most third-party models are provided as compiled binary models.
Why buy third-party models?
» Engineering resources
» Quality (especially in the area of system timing)

Hardware Modelers

Hardware modelers are for modeling new hardware. Some hardware may be too new for simulation models to be available.
» Example: in 2000 you still could not get a model of the Pentium III.
Sometimes you also cannot simulate enough of a model in an acceptable period of time.

Hardware Modelers (cont.)

Hardware modeler features:
» A small box that connects to the network and contains a real copy of the physical chip
» The rest of the HDL model provides inputs to the chip and obtains the chip's outputs to return to your model

Waveform Viewers

Let you view transitions on multiple signals over time.
The most common of verification tools.
Waveforms can be saved in a trace file.
In verification:
» You need to know the expected output and whenever the simulated output is not as expected, in both signal value and signal timing
» Use the testbench to compare the model output with the expected output

Coverage

Coverage techniques give feedback on how much the testcase or driver is exercising the logic

Coverage makes no claim on proper checking

All coverage techniques monitor the design during simulation and collect information about desired facilities or relationships between facilities

Measure the "quality" of a set of tests

Supplement test specifications by pointing to untested areas

Help create regression suites

Provide a stopping criterion for unit testing

Better understanding of the design

Coverage Goals

Coverage Techniques

People use coverage for multiple reasons

Designer wants to know how much of his/her macro is exercised

Unit/Chip leader wants to know if relationships between state machine/microarchitectural components have been exercised

Sim team wants to know if areas of past escapes are being tested

Program manager wants feedback on overall quality of verification effort

Sim team can use coverage to tune regression buckets

Coverage Techniques

Coverage methods include:

Line-by-line coverage
– Has each line of VHDL been exercised? (If/Then/Else, Cases, states, etc.)

Microarchitectural cross products
– Allows for multiple-cycle relationships
– Coverage models can be large or small

Code Coverage

A technique that has been used in software engineering for years.

By covering all statements adequately, the chances of a false positive (a bad design testing good) are reduced.

Never 100% certain that design under verification is indeed correct. Code coverage increases confidence.

Some tools use the file I/O features of the language, while others have special features built into the simulator to report coverage statistics.

Adding Code Coverage

If built into simulator - code is automatically instrumented.

If not built in - must add code to testbench to do the checking
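When there is no built-in support, the bookkeeping added to the testbench might look like this hand-rolled sketch. The model and the coverage-point names are invented for illustration:

```python
# Hand-instrumented statement coverage: each branch of the model calls
# cover() so a report can show which branches the tests never reached.
from collections import Counter

hits = Counter()

def cover(point):
    hits[point] += 1

ALL_POINTS = ["reset_branch", "enable_branch", "idle_branch"]

def dut_model(reset, enable):
    """Toy stand-in for a model under test."""
    if reset:
        cover("reset_branch")
        return 0
    if enable:
        cover("enable_branch")
        return 1
    cover("idle_branch")
    return -1

def statement_coverage():
    covered = [p for p in ALL_POINTS if hits[p] > 0]
    return len(covered) / len(ALL_POINTS)
```

After running tests for only reset and enable, `statement_coverage()` reports 2/3: the report directly points at the untested idle branch.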

Code Coverage

Objective is to determine if you have overlooked exercising some code in the model
» If you answer yes, then you must also ask why the code is present

Coverage metrics can be generated after running a testbench

Metrics measure coverage of
» statements
» possible paths through code
» expressions

Report Metrics for Code Coverage

Statement (block):
» Measures which lines (statements) have been executed by the verification suite

Path:
» Measures all possible ways to execute a sequence of statements

Expression Coverage:
» Measures the various ways paths through the code are executed

Statements and Blocks

Statement coverage can also be called block coverage

The ModelSim simulator can show how many times a statement was executed

Also need to ensure that executed statements are simulated with different values

And there is code that was not meant to be simulated (code specifically for synthesis, for example)

Path Coverage

Measures all possible ways you can execute a sequence of statements

Example has four possible paths
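The slide's figure is not reproduced here, but two back-to-back if statements are enough to produce four paths; this stand-in example (mine, not the slide's) makes the count explicit:

```python
# Two independent if statements give 2 x 2 = 4 paths. Two tests,
# (True, True) and (False, False), already give 100% statement coverage,
# yet they exercise only two of the four paths.
def example(a, b):
    x = 0
    if a:
        x += 1    # statement 1
    if b:
        x += 2    # statement 2
    return x

def path_taken(a, b):
    return (bool(a), bool(b))   # identifies which of the 4 paths ran

all_paths = {path_taken(a, b) for a in (False, True) for b in (False, True)}
```

Each additional independent decision doubles the path count, which is why path counts explode so quickly in real models.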

Path Coverage Goal

Desire is to take all possible paths through code

It is possible to have 100% statement coverage but less than 100% path coverage

Number of possible paths can be very, very large => keep number of paths as small as possible

Obtaining 100% path coverage for a model of even moderate complexity is very difficult

Expression Coverage

A measure of the various ways paths through code are taken

Example has 100% statement coverage but only 50% expression coverage
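The slide's example is not shown, but the classic case is a condition like `(a or b)`; this sketch (my own stand-in, not the slide's) shows how one test can execute the statement while covering only half the expression's terms:

```python
# Expression coverage for the condition (a or b): a term counts as
# covered only when it is the one that made the condition true.
def which_term_fired(a, b):
    if a:
        return 'a'
    if b:
        return 'b'          # b only matters when a is False
    return None             # condition false: no term fired

def expression_coverage(tests):
    fired = {which_term_fired(a, b) for a, b in tests} - {None}
    return len(fired) / 2   # two terms in (a or b)
```

A single test with `a=True` runs the guarded statement (100% statement coverage) but yields only 50% expression coverage; a second test with `a=False, b=True` is needed to show that `b` alone can trigger the statement.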

100% Code Coverage

What do 100% path and 100% expression coverage mean?
» Not much!! They just indicate how thoroughly the verification suite exercises the code. They do not indicate the quality of the verification suite.
» Do not provide an indication about the correctness of the code

Results from coverage can help identify corner cases not exercised

Is an additional indicator for completeness of the job
» Code coverage value can indicate if the job is not complete

Coverage is based on the functionality of the design

Coverage models are specific to a given design

Models cover
» The inputs and the outputs
» Internal states
» Scenarios
» Parallel properties
» Bug models

Functional Coverage

The Model: We want to test all dependency types of a resource (register) relating to all instructions

The attributes
» I - Instruction: add, add., sub, sub., ...
» R - Register (resource): G1, G2, ...
» DT - Dependency Type: WW, WR, RW, RR and None

The coverage task semantics
» A coverage instance is a quadruplet <Ij, Ik, Rl, DT>, where Instruction Ik follows Instruction Ij, and both share Resource Rl with Dependency Type DT.

Interdependency-Architectural Level
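The quadruplet model above can be sketched directly as a task dictionary. The instruction and register lists here are tiny toy stand-ins for the real architecture's ~400 instructions and ~100 registers:

```python
# Sketch of the coverage model: each task is a quadruplet <Ij, Ik, Rl, DT>.
from itertools import product

INSTRS = ['add', 'add.', 'sub', 'sub.']
REGS = ['G1', 'G2']
DEP_TYPES = ['WW', 'WR', 'RW', 'RR', 'None']

# Every task starts uncovered; a real model would first filter out
# illegal combinations (the "restrictions" on the next slide).
tasks = {t: False for t in product(INSTRS, INSTRS, REGS, DEP_TYPES)}

def record(ij, ik, reg, dep):
    """Mark a task covered when the trace shows Ik following Ij,
    sharing register reg with dependency type dep."""
    if (ij, ik, reg, dep) in tasks:
        tasks[(ij, ik, reg, dep)] = True

def coverage():
    return sum(tasks.values()) / len(tasks)
```

Even this toy model has 4 x 4 x 2 x 5 = 160 tasks, which is why grouping (two slides on) matters so much at full scale.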

Additional semantics
» The distance between the instructions is no more than 5

Restrictions
» Not all combinations are valid
» Fixed-point instructions cannot share FP registers

Interdependency-Architectural Level (2)

Size and grouping:

Original size: ~400 x 400 x 100 x 5 = ~80,000,000

Let the Instructions be divided into disjoint groups I1 ... In

Let the Resources be divided into disjoint groups R1 ... Rk

After grouping: ~60 x 60 x 10 x 5 = 180,000

Interdependency-Architectural Level (3)

Defining the domains of coverage
» Where do we want to measure coverage
» What attributes (variables) to put in the trace

Defining models
» Defining tuples and semantics on the tuples
» Restrictions on legal tasks

Collecting data
» Inserting traces into the database
» Processing the traces to measure coverage

Coverage analysis and feedback
» Monitoring progress and detecting holes
» Refining the coverage models
» Generating regression suites

The Coverage Process

Look for the most complex, error-prone part of the application

Create the coverage models at high-level design
» Improve the understanding of the design
» Automate some of the test plan

Create the coverage model hierarchically
» Start with small, simple models
» Combine the models to create larger models

Before you measure coverage, check that your rules are correct on some sample tests

Use the database to "fish" for hard-to-create conditions

Try to generalize as much as possible from the data:
– "X was never 3" is much more useful than "the task (3,5,1,2,2,2,4,5) was never covered"

Coverage Model Hints

Future Coverage Usage

One area of research is automated coverage-directed feedback

If testcases/drivers can be automatically tuned to go after more diverse scenarios based on knowledge about what has been covered, then bugs can be encountered much sooner in the design cycle

The difficulty lies in the expert system knowing how to alter the inputs to raise the level of coverage
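One way to picture the feedback loop is this toy sketch of my own (not any published system): pick an uncovered bin, aim the next test at it, rerun, repeat.

```python
# Toy coverage-directed loop. run_test is a stand-in for "generate
# stimulus biased toward the target, simulate, and report the bins the
# run actually hit" - which is exactly the hard part in practice.
def coverage_directed(bins, run_test, max_runs=100):
    hit = set()
    for _ in range(max_runs):
        holes = [b for b in bins if b not in hit]
        if not holes:
            break                          # full coverage reached
        hit |= run_test(holes[0]) & set(bins)
    return hit
```

With a generator that reliably reaches its target, the loop closes every hole; the open research problem the slide describes is building a `run_test` whose stimulus actually lands where it is aimed.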

Verification Languages

Specific to verification principles

Deficiencies in RTL languages (Verilog and VHDL)
» Verilog was designed with a focus on describing low-level hardware structures
– No support for data structures (records, linked lists, etc.)
– Not object oriented
» VHDL was designed for large design teams
– Encapsulates all information and communicates strictly through well-defined interfaces

These limitations get in the way of efficient implementation of a verification strategy

Verification Languages (cont)

Some examples of verification languages
» Verisity's Specman Elite
» Synopsys' Vera
» Chronology's Rave
» SystemC

The problem is that these are all proprietary, so buying into one will lock you into a vendor.

Verification Languages

Even with a verification language you still need to
» plan verification
» design the verification strategy and verification architecture
» create stimulus
» determine the expected response
» compare the actual response against the expected response

Revision Control

Need to ensure that the model verified is the model used for implementation

Managing an HDL-based hardware project is similar to managing a software project

Require a source control management system

Such systems keep the last version of a file and a history of previous versions, along with what changes are present in each version

Configuration Management

Wish to tag (identify) certain versions of a file so multiple users can keep working

Different users have different views of project

File Tags

Each file tag has a specific meaning

Issue Tracking

It is normal and expected to find functional irregularities in complex systems
» Worry if you don't!!! Bugs will be found!!!

An issue is anything that can affect the functionality of the design
» Bugs during execution of the testbench
» Ambiguities or incompleteness of specifications
» A new and relevant testcase
» Errors found at any stage

Must track all issues if a bad design could be manufactured were the issue not tracked

Issue Tracking Systems

The Grapevine
» Casual conversation between members of a design team in which issues are discussed
» No one has clear responsibility for a solution
» System does not maintain a history

The Post-it System
» Yellow stickies are used to post issues
» Ownership of issues is tenuous at best
» No ability to prioritize issues
» System does not maintain a history

Issue Tracking Systems (cont.)

The Procedural System
» Issues are formally reported
» Outstanding issues are reviewed and resolved during team meetings
» This system consumes a lot of meeting time

Computerized Systems
» Issues are seen through to resolution
» Can send periodic reminders until resolved
» History of action(s) to resolve is archived
» Problem is that these systems can require a significant effort to use

Code Related Metrics

Code Coverage Metrics - how thoroughly the verification suite exercises the code

Number of Lines of Code Needed for the Verification Suite - a measure of the level of effort needed

Ratio of Lines of Verification Code to Lines of Code in the Model - a measure of design complexity

Number of source code changes over time

Number of source code changes over time

Quality Related Metrics

Quality is subjective

Examples of quality metrics
» Number of known outstanding issues
» Number of bugs found during service life

Must be very careful to interpret and use any metric correctly!!!