
VERIFICATION PLANNING TO FUNCTIONAL CLOSURE OF PROCESSOR-BASED SoCs

ANDREW PIZIALI, CADENCE DESIGN SYSTEMS
Incisive Verification Article, May 2006

Functional verification consumes more than 70% of the labor invested in today's SoC designs. Yet, even with such a large investment in verification, there's more risk of functional failure at tapeout than ever before. The primary reason is that the design team does not know where they are, in terms of functional correctness, relative to the tapeout goal. They lack a functional verification map for reference that employs coverage as its primary element.

Coverage, in the broadest sense, is responsible for measuring verification progress across a plethora of metrics and for helping engineers assess their location relative to design completion.[1] The map to be referenced must be created by the design team up front, so they know not only where they are starting from (specification but no implementation) but also where they are going: fully functional first silicon. The metrics of the map must be chosen for their utility: RTL written, software written, features, properties, assertion count, simulation count, failure rate, and coverage closure rate.

The map is the verification plan, an executable natural language document [2],[3] that defines the scope of the verification problem and its solution. The scope of the problem is defined by implicit and explicit coverage models.[1] The solution to the verification problem is described by the methodology employed to achieve full coverage: dynamic and static verification. Simulation (dynamic) contributes to coverage closure through RTL execution. Formal analysis (static) contributes to coverage closure through proven properties. By annotating the verification plan with these (and other) progress metrics, it becomes a live, executable document that directs the design team to their goal.

Most verification planning today lacks the rigor required to recognize the full scope of the verification problem faced by the design team. The reason for this is that substantial effort is required to write a thorough verification plan. If that plan is obsolete as soon as it is written, the effort is not justified. However, by transforming the verification plan into an active specification that controls the verification process, the planning effort is more than justified.

This article illustrates the application of an executable verification plan to a processor-based SoC.

THE SoC

A modern SoC is composed of one or more processors, embedded software, instruction and data caches, large register sets (configuration registers, scratchpad, and architectural registers), multiple buses, dedicated hardware accelerators, and a dozen or more industry-standard external interfaces. Typically, a subset of these components is reused from previous designs or commercial IP vendors. The remainder is built from scratch for the new design. The verification plan must address the reuse requirements of design and verification IP as well as describe the verification processes for the new components.

Figure 1 below is a block diagram of the Eagle SoC used for illustration in this article. Under control of embedded code running on the RISC processor, an encrypted image is read from the MII interfaces, decrypted by the DES engine, rendered by the DSP, and written to the VGA interface for display. The RISC processor, DSP, MACs, USB, and hard disk drive (HDD) blocks are all off-the-shelf, pre-verified components. The DES engine and LCD/VGA blocks are new, as is the embedded RISC code.

    Figure 1: Eagle block diagram

VERIFICATION PLANNING

Verification planning is the process of analyzing the design specification with the goal of quantifying the scope of the verification problem and specifying its solution. The verification problem is quantified with coverage models, while the solution is described as a functional specification for the verification environment. The problem and the solution are addressed in the subsequent sections. But how should the verification plan be organized?

The plan is organized into four major sections. Functional Requirements contains the interface and core features derived from the design functional specification. Design Requirements contains the interface and core features derived from the design micro-architecture specification, sometimes referred to as a design specification. Verification Views groups references to other parts of the verification plan by function or milestone, as discussed later. Finally, Verification Environment Design is the functional specification for the verification environment.

SPECIFICATION ANALYSIS

The functional specification of a design is supposed to capture its requirements and intended behavior. Since functional verification demonstrates that the intent of a design is preserved in its implementation, we need a source of design intent. Ideally the specification is a formal requirements document, but in practice it may be augmented by additional material such as a marketing requirements document or informal engineering notes. Furthermore, the design specification is often partitioned into an architectural specification and a design specification.

The architectural specification restricts its description to black-box functional requirements, whereas the design specification addresses implementation details such as pipeline depths and internal bus widths.

Our objective in specification analysis is to extract the required features of the design. Two approaches are available to us, depending on the size and kind of specification: top-down analysis and bottom-up analysis.

    TOP-DOWN ANALYSIS

Top-down specification analysis is aimed at addressing the problem of distilling the requirements captured in a large specification (20 pages or more) into a manageable verification goal. The term "top down" refers to analysis that proceeds from a higher abstraction level to a lower level. This abstraction gap is bridged through interactive discussion in a brainstorming session. The architect, designer, verification engineer, and manager assemble to identify and structure the features of the design. Typically, the architect draws a block diagram of the design on the whiteboard to serve as a discussion vehicle. Figure 2 below uses a block diagram of the DES engine as an example.

    Figure 2: DES engine block diagram

The DES engine receives its input data, consisting of encryption keys and cipher text, through the input FIFO. Sixteen cycles after the last datum is written into the FIFO, the clear text is written to the output FIFO. The input and output FIFOs are configured as 1K x 32. The data control block provides an interface to the AMBA bus for data and control flow for the encrypt/decrypt block.

One of the identified features of the DES engine is flow control management between the AHB interface and the data control block by the input FIFO. We record this feature in the emerging verification plan as "AHB-to-data control block flow control" and write a semantic description of the feature. The semantic description concisely captures the purpose of the functional coverage model that will be designed to record the extent to which the feature is exercised. The semantic description of this feature might read: "The input FIFO manages flow control between the AHB interface and the data control block to accommodate data transfer rate differences between the two blocks. There are 1,024 32-bit entries in the FIFO."

[Figure 2 labels: AHB I/F, input FIFO (1K x 32), data control block, encrypt/decrypt block, output FIFO (1K x 32). Figure 1 labels: AMBA bus (AHB) connecting RISC, DSP, DES, DMA, MAC 1, MAC 2, USB, LCD/VGA, HDD I/F, and UARTs, with FIFO memories (64KB, 8KB), DP memory (4KB), and AHB/Wishbone wrappers.]


    BOTTOM-UP ANALYSIS

In contrast to top-down analysis, bottom-up specification analysis is suitable for moderate-sized specifications of 20 pages or less. We walk through the specification section by section, paragraph by paragraph, and sentence by sentence to identify features and their associated attributes and behaviors. Consider the DES block diagram description in the previous section as part of its specification: "The DES engine receives data through the input FIFO, consisting of encryption keys and cipher text. Sixteen cycles after the last datum is written into the FIFO, the clear text is written to the output FIFO. The input and output FIFOs are configured as 1K x 32. The data control block provides an interface to the AMBA bus for data and control flow for the encrypt/decrypt block."

We note that the first sentence describes a feature: input data buffering. That sentence mentions two attributes (encryption key and cipher text) and one behavior (data reception). However, since the data buffering is implemented using a FIFO, the FIFO depth and the sequential ordering of read and write operations are also implied attributes. This feature and its attributes are used as the starting point for designing its associated coverage model, as with top-down analysis.

COVERAGE MODEL DESIGN

Once the design features have been extracted from the specification through top-down or bottom-up analysis, the next step in quantifying the scope of the verification problem is designing associated coverage models for each feature. That process is summarized in the following subsections as part of the planning-to-closure flow.

    TOP-LEVEL DESIGN

Coverage model top-level design consists of writing the semantic description of the model, selecting its attributes, and choosing a model structure. The semantic description of the model falls out of the specification analysis procedure described earlier. If bottom-up analysis were employed, the coverage model attributes may also have been selected. For the DES engine feature example (flow control management), the associated attributes are FIFO depth, AHB interface write, and data control block read. The coverage model structure reflects the relationship among the attributes and may be matrix, hierarchical, or hybrid. A matrix model, wherein each attribute defines a dimension of a coverage space, is suitable for this simple model.

The top-level design of the model is added to the verification plan in the Verification Environment Design, Coverage section as a table: the left column contains the feature name, the second column contains the attribute names or sampling times, the values column lists the observed attribute values to be recorded, and the last column lists the name of the verification environment monitor responsible for observing the attribute. For example, the DES input FIFO monitor is responsible for counting the number of times the values TRUE and FALSE have been observed on the AHB write signal.

Feature                   Attributes or sampling times                     Values                      Monitor
Flow control management   @fifo_clock
                          FIFO depth                                       0, 1, 2..1022, 1023, 1024   DES input FIFO
                          AHB write                                        FALSE, TRUE                 DES input FIFO
                          Data control block read                          FALSE, TRUE                 DES input FIFO
                          FIFO depth, AHB write, data control block read   C{}

Each row of the table is either associated with an attribute (or set of attributes) or a sampling time. For example, the first row of this table indicates that all attribute values are to be sampled on the FIFO clock. The second, third, and fourth rows are associated with each attribute, while the last row defines the complete coverage model. This model is composed of the attributes FIFO depth, AHB write, and data control block read, organized as a matrix model and indicated by the notation C{}.

    DETAILED DESIGN

Once the top-level design of a coverage model is complete, it must either be mapped into the verification environment for simulation or to one or more properties for formal analysis. If we choose to achieve the goal of this model using simulation, the detailed design of the model answers the following three questions:

    1. What must be sampled for the attribute values?

    2. Where in the verification environment should we sample?

    3. When should the data be sampled and correlated?

The "what" in the first question refers to which element in the design under verification (DUV), in the RTL, or in the software is to be sampled for each attribute. For example, we might sample the signal fifo_current_ptr for the attribute FIFO depth.

The "where" in the second question means the location in the object or module hierarchy to instantiate the coverage model. For example, in an eRM-compliant [5] e language environment [7], the coverage group that implements the coverage model would be located in an agent.

The "when" in the third question refers to the time that data associated with an attribute should be sampled or correlated. The attributes of our example are to be sampled on each FIFO clock edge, which may be carried by the signal fifo_clock.



The correlation time is the time at which the most recently sampled values of the attributes of a model should be recorded. For example, if each attribute is sampled on its own clock, the current attribute values may be captured as a set at yet another interval. For the flow control management coverage model, the sampling and correlation times of the attributes are one and the same. The detailed design decisions are also recorded in the Verification Environment Design section of the verification plan.

    IMPLEMENTATION

The third step in building a coverage model is implementation. The model may be implemented in a high-level verification language (HVL) such as e or SystemVerilog; in a property specification language such as SystemVerilog Assertions (SVA) or PSL; in an RTL language like VHDL or SystemVerilog; or in a conventional programming language such as C or C++. Of course, the implementation is much easier in an HVL than in other languages. In e, the implementation of the flow control management model would look like this:

cover flow_ctrl_mgmt is {
    item FIFO_depth : uint = fifo_current_ptr$ using ranges = {
        range([0]);
        range([1]);
        range([2..1022]);
        range([1023]);
        range([1024])
    };
    item AHB_write : bool;
    item DCB_read : bool;
    cross FIFO_depth, AHB_write, DCB_read
};

event flow_ctrl_mgmt is rise(fifo_clock$);

DYNAMIC VERIFICATION

The answer to the second part of the question addressed by a verification plan (what the solution to the verification problem is) falls into one of two categories: dynamic verification or static verification. Dynamic verification (also known as simulation), in addition to coverage measurement, requires applying stimulus to the DUV and checking its response to the applied stimulus. The most efficient means of rapidly achieving the coverage goals defined in the previous section is coverage-driven verification (CDV).[1]

CDV employs constrained random stimulus generation to produce stimulus that is functionally valid, yet also likely to activate corner cases of the design.

An optimal CDV environment is characterized as autonomous, meaning the environment requires no external direction, such as a test, to steer it toward generating high-value stimulus. The constraints that might be distributed among a set of tests are instead built into the verification environment itself, allowing symmetrical simulations to be distributed across a simulation farm until functional coverage closure is reached.

A complementary stimulus source, useful for bringing up a design in the beginning, is directed tests. Directed tests that employ a constrained random environment are simply a set of tight generation constraints layered on top of the base CDV environment. Hence, they are usually quite short and easy to write. The stimulus aspect of the verification environment is designed in the Verification Environment Design, Stimulus section of the verification plan.
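
For illustration, a constrained random stimulus item for the DES input stream might be sketched in e as follows. The struct, field, and constraint names are assumptions for this sketch, not drawn from the Eagle verification environment.

// Hypothetical generated item for words written to the DES input FIFO
struct des_word {
    kind : [KEY, CIPHER_TEXT];      // what this 32-bit word carries
    data : uint(bits: 32);          // the word itself
    idle_cycles : uint;             // AHB idle time before the write

    keep idle_cycles in [0..15];    // bias toward back-to-back traffic
    keep soft kind == CIPHER_TEXT;  // payload words dominate by default
};

// In a separate test file, a "directed test" reduces to one tighter
// constraint layered on the CDV base environment:
extend des_word {
    keep kind == KEY;               // overrides the soft default above
};

Because the default is a soft constraint, the directed layer overrides it without modifying the base environment.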

The checking aspect of a dynamic verification environment must ensure that the response of the DUV to applied stimuli is correct in both the data and temporal domains. The values driven out of the DUV must adhere to the specification, as must its sequential behavior. Checking is often implemented using a reference model, a scoreboard, or a distributed approach such as assertions.[6] The checking approach for each feature is specified in the Verification Environment Design, Checkers section of the verification plan.
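
As a sketch of the scoreboard approach, the following e fragment compares each DUV output word against clear text pre-computed by a reference model. The monitor name, event, and fields are hypothetical.

// Hypothetical scoreboard check for the DES output stream
extend des_monitor {
    expected : list of uint(bits: 32);  // clear text from a reference model
    output_data : uint(bits: 32);       // last word seen on the output FIFO

    // output_word is assumed to be emitted whenever the DUV writes
    // a word to the output FIFO
    on output_word {
        check that not expected.is_empty()
            else dut_error("Unexpected DES output word");
        check that output_data == expected.pop0()
            else dut_error("DES clear text mismatch");
    };
};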

STATIC VERIFICATION

Static verification, also known as formal analysis, is a second solution to the verification problem. It encompasses model checking and theorem proving, among a variety of other techniques. Distinguished from its dynamic verification counterpart, static verification requires no stimuli. Instead, it demands formally written design requirements: properties or theorems. The properties are specified and designed in the Verification Environment Design, Checkers section of the verification plan. They may be implemented in one of the industry-standard property specification languages such as SVA, PSL, or OVL.

VERIFICATION PLAN AUTOMATION

Once the verification plan has been written, it may be referenced as a specification for the verification process and updated as verification proceeds. However, unless the plan actually controls the verification process (not unlike a C source file controlling application behavior), it will tend to become yet another element of project documentation, obsolete almost as soon as it is written.

The next section describes how to turn a verification plan into an executable specification that remains a natural language document yet becomes machine readable. This is followed by linking the verification plan to the verification environment and using it to control and monitor verification progress. The Cadence Incisive Manager [8] will illustrate the use of an executable verification plan.


    VERIFICATION PLAN REQUIREMENTS

In order for a verification plan to serve the needs of project team members and also control the verification process, it must be a natural language document but also machine readable. A natural language document (spoken or written by humans) is required because people conceive ideas and exchange them with one another in their native tongues.[2],[3] This allows the document writer to modulate both the abstraction level and ambiguity in the verification plan, trading off precision for implementation freedom. The verification plan must also be machine readable so that it may be linked to the verification environment and progress metrics recorded during verification runs (simulations or proofs).

VERIFICATION PLAN TO VERIFICATION ENVIRONMENT LINKAGE

The verification plan serves as an annotated map of verification progress, showing users at all times their location on the road to functional closure. The plan also serves as the functional specification for the verification environment, in particular that of the coverage aspect. As the coverage aspect is implemented with functional coverage models, code coverage, and assertion coverage, each coverage section of the verification plan must be associated with its implementation. This association allows current coverage data to be displayed in the context of the verification plan.

We support two types of associations: forward annotation and backward annotation. Forward annotation is used to associate each coverage section of the verification plan with its implementation. Since the verification plan serves as the functional specification for the coverage aspect of the verification environment, references to implemented coverage models are inserted in the plan as the verification environment is designed and implemented. The primary use of forward annotation is for a new verification environment implemented from a verification plan. An example of a forward annotation in the verification plan is "cover group: flow_ctrl_mgmt", where "cover group" is the text associated with a style and flow_ctrl_mgmt is the name of an e coverage group.

Backward annotation is used to associate a coverage model implementation with a section of a verification plan. The linkage is the same as that of forward annotation, but the direction is reversed: code is added to the coverage model of a legacy verification environment to associate the model with a coverage section of a new verification plan. This is used when legacy verification IP is used to implement part of a new verification plan. This association is easily accomplished by adding a verification plan aspect to a legacy e environment, leaving the original environment unchanged. However, annotation code could also be added to a verification environment implemented in another language to reference a verification plan section. Below is an example of backward annotation implemented in e:

cover flow_ctrl_mgmt using vplan_ref = "Design Cores/Data Control Block" is {
    item FIFO_depth : uint = fifo_current_ptr$ using ranges = {
        range([0]);
        range([1]);
        range([2..1022]);
        range([1023]);
        range([1024])
    };
    item AHB_write : bool;
    item DCB_read : bool;
    cross FIFO_depth, AHB_write, DCB_read
};

This is the same coverage group introduced earlier, with the addition of the vplan_ref option. vplan_ref associates this coverage group with the Design Cores, Data Control Block section of the verification plan.

    VERIFICATION PLAN VIEW

In addition to the feature sections of the verification plan that define the functional requirements of the design, the Verification Views section of the plan provides a set of tailored views into the verification progress. The verification views may be oriented toward cross-module functional areas, such as error detection and recovery, or toward time-based project checkpoints with specified target goals associated with each. The latter are milestones that are naturally defined for any design project. Each view may be considered a concern that references one or more relevant sections of the verification plan, not unlike an aspect in an aspect-oriented programming language. It is also customized with view-specific coverage goals and deadlines.
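
For illustration only, a Verification Views section might contain an excerpt like the following; the view names, goals, and referenced sections are hypothetical.

Verification Views
    Milestone 1: DES bring-up (goal: 80% DES coverage by week 12)
        -> Functional Requirements / DES Engine
        -> Design Cores / Data Control Block
    Error detection and recovery (goal: 100% of error cover groups)
        -> Functional Requirements / AHB Error Responses
        -> Design Requirements / FIFO Overflow Handling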

    SESSION SPECIFICATION

A second input into Incisive Manager, in addition to an executable verification plan, is a file that describes a session. A session is a set of dynamic or static verification runs. The session input format (vsif) file defines the tests, properties, and other attributes of a set of simulations or formal proofs that aim to achieve a subset of the coverage goals, such as an overnight or weekend regression run. When a session completes, a session output format (vsof) file captures the results of the session: runs passed and failed, log files, trace files, and other information required for debugging.
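
A minimal vsif sketch appears below. The session name, paths, and per-test parameters are hypothetical, and the exact attribute set should be checked against the Incisive Manager documentation.

// Hypothetical session input format (vsif) file for a nightly regression
session eagle_nightly {
    top_dir : /regressions/eagle;    // where run results are collected
};

group des_regression {
    run_script : /proj/eagle/scripts/run_sim.sh;
    test des_flow_ctrl {
        count : 500;                 // launch 500 runs of this test
        seed  : random;              // fresh random seed per run
    };
};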

FUNCTIONAL CLOSURE

With the verification plan written and annotated, the verification environment developed, and properties written for those features to be verified using model checking, it's time to put the plan to use. What do we mean by functional closure? Simply stated, functional closure means achieving the functional verification goals specified in the verification plan. The goals are defined using the chosen metrics: coverage, simulation failures, property proofs, and DUV and verification environment code. As described earlier, the goals are typically grouped by functionality or timeframe in the Verification Views section of the verification plan.

We use the verification plan view (also referred to as the vPlan view) in Incisive Manager to analyze verification progress. The vPlan view displays an outline of the verification plan with a percentage next to each section name. This percentage represents the fraction of the coverage goal of each section achieved by the loaded vsof files. Two types of analysis are required to understand the next steps required to reach functional closure: failure analysis and coverage analysis.

    FAILURE ANALYSIS

Failure analysis is the process of reviewing failed simulation runs in an attempt to correlate failures with other run parameters. A failure means that one or more verification environment checkers detected and reported functional violations. For example, a vsif file may have specified 500 simulation runs for a particular session, and 75 of those runs failed. How many distinct errors were reported? Which run provoked a particular error in the smallest number of simulation cycles? Are specific errors always detected together with other errors? These questions must be answered in order to quickly diagnose and repair each unique failure.

Incisive Manager facilitates failure analysis through the first failures view in its runs window (see Figure 3 below). Filtering, selecting, and sorting operations are used to rapidly zero in on common failure modes.

    COVERAGE ANALYSIS

As functional, code, and assertion coverage populate the coverage models defined by the verification plan, coverage holes (regions of missing coverage) become apparent. Coverage analysis [1],[9],[10],[11],[12] is required to determine why these holes exist and, for those that are valid, how to fill them. For example, a functional coverage hole may be due to a faulty coverage model that demands to observe impossible or illegal behavior. However, it may also be due to missing stimulus required to activate the behavior, or to missing hardware or software logic required to implement it. Techniques such as coverage hole aggregation, projection, and selection are useful for identifying the commonalities shared by regions of missing coverage. For instance, if every hole in the flow control model shared the value FIFO depth == 1024, the aggregated hole would suggest that the environment never fills the FIFO.

    Figure 3: Incisive Manager first failures view

As the reason for each coverage hole is discovered, the appropriate response (coverage model correction, enhanced stimulus generation, DUV fixes, more simulation) is taken. With each iteration through the closure cycle (plan, implement, simulate/prove, measure, analyze), we refine the coverage goals and adjust the verification environment until we achieve our defined goals.

CONCLUSIONS

The verification planning-to-coverage-closure process discussed in this paper has been broadly adopted by Cadence (and former Verisity) customers since 2003, with spectacular results. The number of first-pass, fully functional silicon designs has increased even though design complexity has continued to rise. Our customers have embraced rigorous upfront verification planning, with its attendant costs, because the resulting verification plan is transformed from documentation (an after-the-fact, obsolete record of the past) into a control specification that drives their verification process and reflects the state of the process throughout the design cycle.

The plan-to-closure steps of planning, plan automation, and functional closure comprise a process that is repeatable, predictable, and reusable for both ground-up and derivative designs. This article illustrated each of the steps in detail with examples drawn from a representative SoC design and one of its functional blocks.

Where do we go from here? The opportunities for verification process automation at the front end of the design process are vast. Beyond imbuing the verification plan with reuse semantics that mirror those found in verification and design IP today, the plan itself may be both mechanically linked to its parent specifications and derived from them using an automated process. Further, accurate extraction and understanding of design intent from specifications may be enhanced with the application of knowledge engineering and expert systems.

REFERENCES

[1] Andrew Piziali, Functional Verification Coverage Measurement and Analysis, Springer, 2004.

[2] Vincent E. Guiliano, Arthur D. Little, "In Defense of Natural Language," proceedings of the ACM annual conference, 1972.

[3] Peggy Aycinena, "In Defense of Natural Language: A Conversation with Andrew Piziali," http://www.aycinena.com/index2/index3/archive/in%20defense%20of%20natural%20language.html (as of 11/23/05), March 30, 2005.

[4] Oded Lachish, Eitan Marcus, Shmuel Ur, and Avi Ziv, "Hole Analysis for Functional Coverage Data," proceedings of the 2002 Design Automation Conference.

[5] Cadence Design Systems, e Reuse Methodology Manual, 2002.

[6] Janick Bergeron, Writing Testbenches: Functional Verification of HDL Models, Kluwer Academic Publishers, 2003.

[7] The e Functional Verification Language Working Group, The e Language Reference Manual, http://www.ieee1647.org/

[8] Cadence Design Systems, Incisive Manager, http://www.cadence.com/products/functional_ver/vmanager/index.aspx (as of 11/23/05).

[9] Michael Kantrowitz and Lisa M. Noack, "I'm Done Simulating; Now What? Verification Coverage Analysis and Correctness Checking of the DECchip 21164 Alpha Microprocessor," proceedings of the 1996 Design Automation Conference.

[10] Sigal Asaf, Eitan Marcus, and Avi Ziv, "Defining Coverage Views to Improve Functional Coverage Analysis," proceedings of the 2004 Design Automation Conference.

[11] Scott Taylor, et al., "Functional Verification of a Multiple-issue, Out-of-Order, Superscalar Alpha Processor: The DEC Alpha 21264 Microprocessor," proceedings of the 1998 Design Automation Conference.

[12] Alon Gluska, "Coverage-Oriented Verification of Banias," proceedings of the 2003 Design Automation Conference.

This paper was originally presented at DesignCon 2006 in Santa Clara, CA, February 2006.