Verifying ARM-based SoC Designs with Advanced Open Verification Methodology

By Ping Yeung, Mike Andrews, Marc Bryan, Jason Polychronopoulos, Mentor Graphics

The demand for smarter, more powerful consumer electronics devices is increasing the complexity and integration of underlying SoC designs. This, in turn, is making it harder to build a comprehensive test environment. The availability of the Open Verification Methodology (OVM)[1] has helped to at least partially ease the burden on verification engineers. Based on the IEEE 1800 SystemVerilog standard and fully open, the OVM is non-vendor-specific and works with multiple languages and simulators. OVM provides a library of base classes as building blocks for creating modular and reusable verification environments that support a constrained random stimulus generation methodology. With OVM, verification IPs (VIPs) can be developed with a well-defined structure to help make them simple to use and reuse. Such VIPs are already available to target common interfaces, such as AHB, AXI3 and AXI4 in the AMBA family[2].
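As a concrete flavor of these building blocks, the sketch below defines a hypothetical bus transaction as an OVM sequence item with constrained random fields; the class and field names are our own illustration, not taken from any shipping VIP.

// Hypothetical bus transaction item, sketched against the OVM base classes.
`include "ovm_macros.svh"
import ovm_pkg::*;

typedef enum bit {BUS_READ, BUS_WRITE} bus_dir_e;

class bus_txn extends ovm_sequence_item;
  rand bus_dir_e  dir;
  rand bit [31:0] addr;
  rand bit [3:0]  burst_len;   // beats per burst

  // Keep randomized traffic legal for the (assumed) protocol:
  constraint c_align { addr[1:0] == 2'b00; }        // word-aligned
  constraint c_burst { burst_len inside {[1:8]}; }  // bounded bursts

  `ovm_object_utils_begin(bus_txn)
    `ovm_field_enum(bus_dir_e, dir, OVM_ALL_ON)
    `ovm_field_int(addr, OVM_ALL_ON)
    `ovm_field_int(burst_len, OVM_ALL_ON)
  `ovm_object_utils_end

  function new(string name = "bus_txn");
    super.new(name);
  endfunction
endclass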

However, the use of constrained random stimulus generation does have its limitations. The coverage state space continues to grow due to the inexorable move towards a flexible, power-efficient and high-performance AMBA interconnect; multiple CPU cores, such as the Cortex-A series[3]; increasing numbers of peripherals; and the introduction of new and more stringent Quality-of-Service (QoS) requirements[4]. Coverage closure becomes more difficult, requiring multiple simulation runs with different test sequences, constraints and seeds. Simulation performance degrades exponentially as the complexity and number of constraints increase. Although constrained random techniques will continue to be a key part of the verification methodology, sophisticated design teams are gradually introducing even more advanced technologies to help achieve coverage closure more quickly and reliably. Two such methodologies are static formal verification[5] and intelligent testbench automation[6].

Figure 1: Heterogeneous verification using constrained random stimulus in combination with advanced methodologies

OVM-based Verification IPs

Today, companies doing ARM-based SoC designs depend on VIP for block-level and system-level validation. Mentor's Multi-View Verification Components (MVCs)[7] support OVM with stimulus generation, reference checking, monitoring, and functional coverage. In March 2010 Mentor announced that its library of Questa MVCs has been expanded to support phase one of the AMBA 4 specification, recently announced by ARM. Introduced by ARM more than 15 years ago, the AMBA specification is the de-facto standard for on-chip interconnects. Unlike other solutions, MVCs combine transaction-based protocol debugging and abstraction adaptation, enabling designers to connect to any level of design and testbench abstraction. For AMBA, MVCs are available to support the APB, AHB, AXI3 and AXI4 interfaces.

Each MVC includes a number of OVM test components. There is an agent, interface and configuration typical of OVM verification components. Additional components range from a simple analysis component that logs transactions to a file, through to more complex analysis components, such as coverage collectors that ensure the complete protocol is exercised. MVCs are also supplied with scoreboards that can be used as-is for simple memory models, or extended to incorporate more complex DUT functionality. With MVCs, users can build a consistent and reusable verification environment to verify that the design adheres to internal and external protocols throughout the verification process.
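The following sketch shows what a simple analysis component of this kind might look like: an ovm_subscriber that writes each observed transaction to a log file. The bus_txn type and the file name are assumptions for illustration.

// Sketch of a simple analysis component: logs every observed transaction.
class txn_logger extends ovm_subscriber #(bus_txn);
  `ovm_component_utils(txn_logger)
  int fd;

  function new(string name, ovm_component parent);
    super.new(name, parent);
  endfunction

  virtual function void build();
    super.build();
    fd = $fopen("txn_log.txt", "w");  // assumed log file name
  endfunction

  // Called via the monitor's analysis port for each completed transaction
  virtual function void write(bus_txn t);
    $fdisplay(fd, "%0t %s", $time, t.sprint());
  endfunction
endclass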




Each MVC has an agent that can be configured to be in active or passive mode. For generating stimulus, the agent operates in active mode. It instantiates the sequencer, driver, monitor, and analysis components such as a transaction logger or coverage collector. For system-level simulation, transactions might be driven between two user devices, such as the processor or a DMA controller and the interconnect. In this scenario, the agent can operate in passive mode, allowing the coverage and scoreboard from block-level tests to be reused.
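A minimal sketch of this active/passive pattern is shown below, assuming hypothetical bus_sequencer, bus_driver and bus_monitor classes; the monitor is always built, while the stimulus path exists only in active mode.

// Agent that builds its stimulus path only when configured active.
class bus_agent extends ovm_agent;
  ovm_active_passive_enum is_active = OVM_ACTIVE;

  bus_sequencer sqr;
  bus_driver    drv;
  bus_monitor   mon;   // always present: feeds coverage and scoreboard

  `ovm_component_utils_begin(bus_agent)
    `ovm_field_enum(ovm_active_passive_enum, is_active, OVM_ALL_ON)
  `ovm_component_utils_end

  function new(string name, ovm_component parent);
    super.new(name, parent);
  endfunction

  virtual function void build();
    super.build();
    mon = bus_monitor::type_id::create("mon", this);
    if (is_active == OVM_ACTIVE) begin
      // Stimulus components are only constructed for an active agent
      sqr = bus_sequencer::type_id::create("sqr", this);
      drv = bus_driver::type_id::create("drv", this);
    end
  endfunction

  virtual function void connect();
    if (is_active == OVM_ACTIVE)
      drv.seq_item_port.connect(sqr.seq_item_export);
  endfunction
endclass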

Block-level Verification

One way to verify a block such as a memory controller is to build a simulation environment with the MVCs and OVM components to perform a mixture of directed and constrained random tests. This type of environment is suitable for verifying many types of functionality. However, once the state space reaches a level of complexity that is moderate by today's standards, it can become very inefficient at uncovering all corner-case behaviors. Diligence in investigating such corner cases and ensuring robust functionality is key, especially if the block being verified is a good candidate for reuse in multiple designs.
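As an illustration, the hypothetical sequence below mixes one directed corner case with constrained random traffic, using the bus_txn item sketched earlier; the address and repeat count are arbitrary.

// Sequence mixing a directed corner case with constrained random traffic.
class mem_ctrl_seq extends ovm_sequence #(bus_txn);
  `ovm_object_utils(mem_ctrl_seq)

  function new(string name = "mem_ctrl_seq");
    super.new(name);
  endfunction

  virtual task body();
    // Directed: hit the last word of a (hypothetical) 64 KB bank boundary
    `ovm_do_with(req, { dir == BUS_WRITE; addr == 32'h0000_FFFC; })
    // Random: let the constraint solver explore the remaining legal space
    repeat (100) `ovm_do(req)
  endtask
endclass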

The need to discover and diagnose design flaws and to accelerate coverage closure often leads to the use of tools for static formal verification. Mentor Graphics' 0-In Formal[8] is one such tool, enabling a static formal methodology to improve design quality and complement dynamic verification.

Static formal verification analyzes the functionality of a block in the context of its environment (such as operational modes and configurations). Initialization sequences can be incorporated as well. It represents how the design will operate clock cycle by cycle and hence can determine whether various scenarios are even possible. We recommend performing checks relating to the following areas at the block level:

Coverage closure checks
Most blocks have dead code, unreachable statements and redundant logic. This is especially true for IP or reused blocks, which often have unneeded functionality that is a vestige of earlier designs. If passive coverage metrics (line coverage, FSM coverage, or expression coverage) are part of the closure criteria, then this unused functionality will have a negative impact on the coverage grade. Formal checks can be used to identify these unreachable statements and redundant logic so they can be excluded from the coverage grade calculation.

Clock domain crossing (CDC) checks
CDC signals continue to be a trouble spot for functional verification, especially as these problems often do not cause simulations to fail; instead they commonly manifest themselves as intermittent post-silicon failures. To ensure CDC signals will be sampled correctly by the receiving clock domain, they need to be synchronized before use. Static verification helps identify any unsynchronized or incorrectly synchronized CDC signals early.
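For reference, the textbook structure a static CDC check expects to find on each single-bit crossing is a two-flop synchronizer, sketched here in plain SystemVerilog:

// Two-flop synchronizer for a single-bit CDC signal.
module sync_2ff (
  input  logic clk_dst,   // receiving clock domain
  input  logic rst_n,
  input  logic d_async,   // signal arriving from the sending domain
  output logic d_sync
);
  logic meta;             // first stage may go metastable

  always_ff @(posedge clk_dst or negedge rst_n) begin
    if (!rst_n) begin
      meta   <= 1'b0;
      d_sync <= 1'b0;
    end else begin
      meta   <= d_async;  // capture; may resolve late
      d_sync <= meta;     // second flop gives metastability time to settle
    end
  end
endmodule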

X-propagation checks
Another class of potential post-silicon failures is related to X generation and consumption. The goal is to eliminate the pessimistic X-propagation seen in simulation and to make sure any unknown or X state is not generated or consumed unintentionally in the design. When an unknown or uninitialized state is sampled, the resultant value is unpredictable, hence the importance of ensuring that registers are initialized before they are used.
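One way to express this intent as a checkable property is an SVA assertion over an assumed register output; both simulation and formal tools can evaluate $isunknown.

// Assumed interface names; the point is the $isunknown check.
module x_checks (input logic clk, rst_n, input logic [31:0] data_out);
  // Once out of reset, the datapath output must never carry an X/Z
  a_no_x: assert property (
    @(posedge clk) disable iff (!rst_n) !$isunknown(data_out)
  ) else $error("x_checks: data_out is unknown after reset");
endmodule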

Finite state machine checks
Finite state machines are fundamental building structures for control logic. Simulation can verify the basic functionality of an FSM. Static formal verification can catch corner-case misbehaviors such as unreachable states and transitions, and also live/deadlock states, all of which are difficult to verify with simulation alone.
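The sketch below shows the kind of FSM sanity properties a formal tool can prove exhaustively, written for a hypothetical three-state controller; the state names and the stall bound are illustrative.

// FSM sanity properties for a hypothetical 3-state controller.
module fsm_checks (input logic clk, rst_n, input logic [1:0] state);
  localparam logic [1:0] IDLE = 2'd0, BUSY = 2'd1, DONE = 2'd2;

  // The encoding 2'd3 is unused and must be unreachable
  a_legal_state: assert property (
    @(posedge clk) disable iff (!rst_n) state inside {IDLE, BUSY, DONE}
  );

  // Bounded no-stall check: the FSM must leave BUSY within 64 cycles
  a_no_stall: assert property (
    @(posedge clk) disable iff (!rst_n) state == BUSY |-> ##[1:64] state != BUSY
  );
endmodule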

Interface compliance checks
Inter-module communication and interface protocol compliance are infamous for causing design and verification failures.


Figure 2: Verification IP: AXI Multi-View Verification Components

Figure 3: Block-level constrained random and static formal verification


Leveraging the protocol assertion monitors in the MVCs helps to catch problems in these areas early. Such checks generally enable static formal verification to be performed seamlessly on the block. Consider, for example, the use of the AXI and DDR2 protocol monitors to perform static formal verification on the memory controller (shown in Figure 3).
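To give a flavor of the checks such a monitor contains, the assertions below capture two common handshake rules for a generic valid/ready interface; the signal names are generic stand-ins, not the actual AXI monitor source.

// Handshake stability rules for a generic valid/ready interface.
module handshake_checks (
  input logic clk, rst_n,
  input logic valid, ready,
  input logic [31:0] payload
);
  // Once asserted, valid must hold until the transfer is accepted
  a_valid_stable: assert property (
    @(posedge clk) disable iff (!rst_n)
    valid && !ready |=> valid
  );

  // Payload must not change while the transfer is stalled
  a_payload_stable: assert property (
    @(posedge clk) disable iff (!rst_n)
    valid && !ready |=> $stable(payload)
  );
endmodule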

Resource control logic
Computational resources, such as floating point units; interconnections, such as the bus matrix; and DMA channels and memories are among the structures usually controlled by arbiters and complex control logic. Simulation environments tend to focus on high-level specifications, which all too often fail to consider concurrency of operations. This is problematic given that parallel processing and concurrency are common characteristics of today's devices and thus need to be verified. Static formal verification has been used successfully to verify such resource control logic. This technology ensures that control logic can correctly arbitrate multiple, concurrent requests and transactions.
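As an illustration, the assertions below capture typical arbiter obligations for a two-requester resource; the req/gnt names and the 16-cycle fairness bound are assumptions. Formal can prove these for all request interleavings, which random simulation rarely reaches.

// Illustrative arbiter obligations for a two-requester resource.
module arb_checks (
  input logic clk, rst_n,
  input logic [1:0] req,    // one bit per requester
  input logic [1:0] gnt
);
  // Mutual exclusion: never grant the resource to both requesters
  a_onehot: assert property (
    @(posedge clk) disable iff (!rst_n) $onehot0(gnt)
  );

  // No spurious grants: a grant implies a matching request
  a_gnt_has_req: assert property (
    @(posedge clk) disable iff (!rst_n) (gnt & ~req) == '0
  );

  // Bounded fairness: a request sees a grant within 16 cycles
  // (assumes requests are held until granted)
  a_fair0: assert property (
    @(posedge clk) disable iff (!rst_n) req[0] |-> ##[0:16] gnt[0]
  );
endmodule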

Partition-level Verification

At the subsystem or partition level, the design consists of multiple masters and slaves connected via an AXI bus matrix. The AXI MVC may also be used in active mode, generating stimulus to replace any AXI component. As shown in Figure 4, other MVCs, such as High-Definition Multimedia Interface (HDMI), DDR2 SDRAM, USB 2.0 and Gigabit Ethernet, are used to provide inputs at, and validate, the external interfaces. Since the possible combinations of legal activity increase exponentially as the number of devices increases, the chance of achieving full coverage with constrained random stimulus alone is low. Coverage closure at this level is a real challenge.

Many verification projects therefore rely on supplementing a constrained random methodology with directed tests to handle the random-resistant cases. Instead, an intelligent testbench automation tool can be used to achieve more comprehensive coverage goals by generating more complex verification scenarios for partitions or subsystems of a design. An intelligent testbench, such as Mentor Graphics' inFact[9] tool, offers a more systematic approach, allowing such corner cases to be targeted and covered deterministically. It allows users to maintain a single testbench which can be configured to achieve specific coverage goals in the fewest simulation cycles.

When used in conjunction with MVCs, an intelligent testbench allows the user to define the interesting and relevant transaction types in a simple and compact graph or rule-based format. Figure 5 shows a partial graph presenting the transaction parameters of an AXI master. The algorithms in the intelligent testbench will pick a combination of transaction parameters to form a path for execution. To achieve a certain verification goal, the user can add a coverage strategy to the graph which controls the variables that are targeted for coverage and/or cross coverage. A particular goal might require that multiple masters connected via an AXI bus matrix should collectively produce all interesting transaction types.
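inFact captures such goals in its own graph and rule format; as a rough SystemVerilog analogue, the covergroup below records the same kind of cross of (hypothetical) AXI-style transaction parameters the text describes.

// Cross coverage of assumed transaction parameters: the goal is that every
// direction x burst-length x size combination is observed at least once.
covergroup axi_txn_cg with function sample(
  bit       is_write,
  bit [3:0] burst_len,
  bit [2:0] size
);
  cp_dir   : coverpoint is_write;
  cp_burst : coverpoint burst_len { bins len[] = {[1:8]}; }
  cp_size  : coverpoint size      { bins sz[]  = {[0:3]}; }
  x_all    : cross cp_dir, cp_burst, cp_size;
endgroup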

Figure 4: Partition-level constrained random and intelligent testbench verification

Figure 5: A graph representing transaction parameters for intelligent testbench




This is simple to achieve, as the algorithms in the tool can distribute the transaction types to multiple AXI MVCs acting as masters. During simulation runtime, they will all contribute to the same verification goal.

An intelligent testbench allows the specification of application-specific stimulus to control individual interfaces, or to control the synchronization of activity on two or more interfaces at once. This allows for a much more comprehensive verification of the interrelation of the various types of subsystem interfaces. A higher-level graph can be created that defines and helps to prioritize the interesting combinations. For the design in Figure 4, a graph would be created for each interface type (AXI, DDR2, HDMI, USB, Ethernet), and a further high-level graph would be responsible for coordinating activity across two or more of the interfaces to produce higher-level verification scenarios to meet verification goals. Depending on the selected coverage strategy, the same testbench could target coverage of the high-level scenarios, the individual protocols, or the combination of both. Once specific coverage goals are achieved, the testbench automatically reverts to generation of random transactions for as long as the simulation is allowed to run.

An example of a high-level scenario that might be captured in a graph is a stress test where combinations of transactions are generated on each interface simultaneously to cause the highest possible resource utilization in the system. Another example, from a design team at one of our customers working on a multiple-CPU design, is using the graph to ensure that all combinations of simultaneous memory accesses from two different CPUs are attempted. This was done to uncover issues when multiple CPUs are accessing the cache at the same time.

System-level Verification

Thorough block- and partition-level verification is a necessary but often insufficient part of the effort to fully vet and debug a design prior to tapeout. This is because, at the system level, the software/firmware that runs on an ARM processor must be verified with the hardware before the system-on-chip (SoC) product is ready to ship to the manufacturer that will build the smartphone, tablet, MP3 player or other SoC-based device. Much of the critical functionality of the SoC occurs at the HW/SW interface. For example, the bare-metal initialization code, power control and state-change management, interrupt control, and device drivers, to name just a few, only work when embedded software and hardware interact correctly. Of course, it is necessary to fix as many bugs as possible in this area in simulation, well before the chip is fabricated. Let's look at a few ways to create a comprehensive system-level verification environment using an ARM CPU.

Early hardware/software integration
System-level verification can begin when the ARM processor, some embedded software, and the hardware blocks that interact with this embedded software are available and connected. Once the connections and register maps are made, the embedded software can be loaded into the program memory and the design can be simulated. The initial software program has to configure the virtual memory and various devices in the system. Until this initialization is working properly, efforts to verify the SoC feature set are impaired.

Real hardware stimulus
As shown in Figure 6, because the ARM processor is a bus master, the instruction sequences executing on the embedded ARM CPU act as stimulus to the design. Memory transactions, such as memory reads caused by instruction fetches originated by the ARM CPU, will start to happen when the ARM CPU comes out of reset, provided the reset logic is working properly. Instruction fetches and memory read/write instructions executing in the ARM CPU cause activity in the bus matrix and connected bus slaves. The same embedded code running on the ARM CPU can be used in simulation, emulation, hardware prototypes, and the finished SoC.

System-level debug challenge
Among the greatest challenges of verifying the hardware using the embedded software is figuring out what happened when things go wrong and verifying, when things work, that they worked as expected. Without proper visibility into the execution of the processor and other hardware, diagnosing a problem can be very difficult. The verification engineer must concentrate on very small details of the processor execution behavior, such as which processor register contains the result of a particular memory read instruction. The verification engineer must also track all of these details just to figure out what was happening in the processor at the moment of the problem, or even many instructions before the problem. Logic waveforms are not an effective means to show the state of the processor. Display of this detailed processor execution behavior has been automated by Mentor Graphics' Questa Codelink[10] tool, so the verification engineer can see the behavior of the processor instructions together with the logic waveforms.

Figure 6: System-level hardware and software co-simulation and debug

    Strategies & Methodologies


Lots of software
Later in project design cycles, when the SoC is complete from the hardware logic perspective, there often is much additional, relatively untested software ready to run on the SoC. A hardware abstraction layer can help in this task by isolating the large volume of software from the hardware. For example, the project specification may indicate that the SoC requires a Unified Extensible Firmware Interface (UEFI) in order to be compatible with a standard UEFI-compliant operating system. A robust hardware abstraction layer can make it easier on those engineers working on system middleware and other applications closely tied to the software-hardware interface. Verifying the hardware-dependent software requires sufficient speed for software execution, a high degree of visibility and control, and a short turnaround time for fixing defects. Codelink offers a variety of means to accelerate software execution, including executing printf and pre-verified memory read/write operations in zero simulation time. These Codelink capabilities provide the tools needed to quickly verify the hardware abstraction layer.

Summary

Starting with OVM, in this article we have attempted to describe a few advanced verification technologies that expand the current methodology of using directed and constrained-random stimulus generation in simulation. We discussed the use of OVM and available verification IPs (from Mentor's Questa MVCs[7]) to build up a complete and reusable verification environment for simulation. We then introduced additional advanced technologies, including static formal verification for the block level (Mentor's 0-In Formal Verification tool[8]), intelligent testbench automation for the subsystem or partition level (Mentor's inFact tool[9]) and, finally, hardware/software debugging for the system level (Mentor's Codelink tool[10]). Each of these tools enables project teams to improve the time to verification closure and, as a result, deliver robust designs to meet market windows.

References

[1] Open Verification Methodology, www.ovmworld.org
[2] AMBA Open Specifications, www.arm.com/products/system-ip/amba/amba-open-specifications.php
[3] Cortex-A Series, www.arm.com/products/processors/cortex-a
[4] Traffic Management for Optimizing Media-Intensive SoCs, www.iqmagazineonline.com/archive28/pdf/Pg32-37.pdf
[5] Static verification -- what's old is new again, www.scdsource.com/article.php?id=382
[6] Intelligent testbench automation boosts verification productivity, www.scdsource.com/article.php?id=129
[7] Questa MVC, www.mentor.com/products/fv/questa-mvc
[8] 0-In Formal Verification, www.mentor.com/products/fv/0-in_fv
[9] inFact, www.mentor.com/products/fv/infact
[10] Questa Codelink, www.mentor.com/products/fv/codelink
