
INCISIVE VERIFICATION NEWSLETTER
DECEMBER 2004

    IN THIS ISSUE:

01 TO OUR READERS

02 ADVANCED VERIFICATION FLOW

03 PROCESSOR-BASED EMULATION

04 THE INCISIVE ASSERTION LIBRARY

05 INCISIVE DEBUG AND ANALYSIS TIPS

INCISIVE PLATFORM NEWS AND EVENTS


    TO OUR READERS

    01

Hello and welcome to the December 2004 issue of the Cadence Incisive Newsletter, the quarterly technical verification newsletter for design and verification engineers. The Incisive Newsletter contains articles and information to help you increase the speed and efficiency of your verification efforts for today's complex system-on-chip (SoC) projects.

In this issue, you will find a smaller number of articles to accommodate the more detailed technical information each article contains. This month's newsletter features the Advanced Verification Flow, an implementation of the Unified Verification Methodology that Cadence introduced last year along with the Incisive platform. Flows are a popular topic as more tools and techniques become available to solve verification challenges.

    New tools are great, but where should they be used? When should they be used? And

    how should these tools and techniques be assembled into an optimal verification flow

    that will let you unleash the full potential of an integrated platform like the Incisive

    platform? In this article, we introduce the Advanced Verification Flow (AVF), describe

    its major stages along with their challenges and benefits, and detail a sample reference

    design that is available to help you understand and implement the AVF.

    Cadence recently raised the bar on speed and capacity with the introduction of our new

    Palladium II acceleration/emulation system. The Palladium family is a processor-based

    solution that provides a number of benefits over earlier FPGA-based systems. In our

    second article, we will examine the differences between FPGA and processor-based

    emulation and describe the advantages and disadvantages of each type of system.

Assertion-based verification is catching on like wildfire as a way to embed expected design behavior checks, add to your coverage metrics, and help create more self-documenting designs. To help users get even more power out of assertions, Cadence is releasing a library of assertion modules that implement checks for common design structures. In our third article, we review this new library and show how it can be used in your designs.

Our fourth article continues our popular series on SimVision debugging tips and techniques. We'll show you new ways to use time ranges: how to save them and synchronize them among multiple waveform windows to greatly expand your debug power.

And finally, we once again give you a roundup of news and events in Q4 that may be of interest to you. Want to see an article on a specific topic? Feel free to send your requests to [email protected].

I hope you all have a happy holiday season and enjoy any vacation plans you might have. We'll see you in 2005 with our next edition of the Incisive Newsletter.

John Willoughby,

Incisive Marketing Director


ADVANCED VERIFICATION FLOW

BY KARL WHITING, CADENCE DESIGN SYSTEMS

02

Today's larger chips and shorter design cycles have driven many companies to invest in verification teams and tools. Yet these verification teams struggle to do more with fewer resources and outdated tools. Yesterday's verification techniques are falling short, and functional verification has become a moving target for many designers, who feel they are going to battle unprepared and understaffed. Companies are now realizing that they can't just throw engineers at the verification problem; they have to take advantage of new technologies.

    Cadence realized this long ago and stepped up to

    provide technologies that eliminate the verification

    bottleneck. The Cadence Incisive functional

    verification platform provides many key verification

    technologies, but these technologies by themselves

    are not a solution without a proper methodology.

Effective verification must be methodology-based, not tool-based. The Cadence Methodology Engineering

    (CME) team has developed the Advanced Verification

    Flow to bring the most comprehensive verification

    strategy to design teams around the world. The CME

    team comprises former customers and verification

    consultants who are knowledgeable about effective

    verification methodology. They have created a

reference design to demonstrate world-class, industry-leading methodologies. This paper introduces the

    reader to the Cadence Advanced Verification Flow

    and illustrates the reference design environment that

    the CME team created for customers to use as they

    adopt Incisive platform technologies.

    AVF OVERVIEW

    The Advanced Verification Flow (AVF) is an

    implementation of the Unified Verification

Methodology (UVM)[1,2]. It outlines the development

    of both transaction-level and implementation-level

    functional virtual prototypes (FVPs), which are the

environments used to functionally verify large system-on-chip (SoC) designs of one million gates and above

    (although techniques used in this flow are universal).

    The design complexity of chips at this size is such that

    a wide variety of techniques and technologies must

    be used to successfully complete the project. This

paper walks the reader through all the AVF steps, from verification planning through full-system verification. It also highlights the flow's efficiency

    and how it utilizes the appropriate methods and

    tools at the appropriate time to maximize verification

    throughput. By understanding this flow, the reader will

    learn industry best practices for functional verification.

    Verification teams are not expected to implement

    and perform every major phase or all the steps in

    a phase. The AVF is designed to be incrementally

adoptable. Teams can decide, based on their requirements, group experience, and individual expertise, which technology and, therefore, which flow steps should be adopted for their project.

    GOALS OF THE ADVANCED VERIFICATION FLOW

- Provide a roadmap for verification, from project conception to system verification, which you can adopt incrementally
- Begin verification of the design earlier, increasing design checkpoints and isolating bugs
- Create a verification methodology that promotes reuse across testbenches and projects
- Maximize the verification effectiveness of your environment for each cycle run
- Provide the necessary information, in the form of documentation and examples, to help teams successfully develop and utilize advanced verification methodologies and associated techniques


- Explain the benefits and tradeoffs of using these techniques
- Show detailed examples of each step on a real design
- Direct users where to go for additional information

BENEFITS OF USING THE ADVANCED VERIFICATION FLOW

- Users can understand how various advanced techniques integrate together into a complete verification flow
- Users can begin adopting new verification technologies
- Users can select parts of the flow and adopt them incrementally
- Users can walk through the individual steps in the flow using the reference design
- Users can utilize the code, scripts, and infrastructure of the reference design to jump start their work

    The AVF follows the basic guidelines of the UVM.

    Figure 1 below illustrates the phases in the flow

    starting with the verification plan.

[Figure 1: Advanced Verification Flow. The verification plan drives the TL FVP, block-level verification, integrating blocks into the FVP, full-chip/system verification, and in-circuit system verification, with arrows showing major flow steps and dependency directions.]

The phases of the AVF are:

- Block-level verification: Verifies the individual blocks being developed in the SoC. Tests and components used during this phase can be reused at the system level.
- Integrating a block into the FVP: The FVP environment represents the system environment. Integrating a design block into the FVP allows that block to be verified using system tests and software earlier in the design cycle.
- Full-chip or system verification: Consists of the steps required to integrate the full chip and verify it. Transaction-based acceleration can be used most effectively in this phase.
- In-circuit verification: Some systems require that you create a prototype or an in-circuit environment to verify more of the hardware and software together before tapeout. The AVF recognizes this need but does not describe the in-circuit flow. (A description of the in-circuit flow, however, is available in another Cadence document.)

It is a major goal of the AVF that verification IP developed for one phase or step carries forward and is reused in other phases, thus reducing the overall workload. In addition, it is important for design teams to be able to reuse old tests and verification IP from previous projects. However, the AVF does not advocate that a verification team throw out their existing testbench methodologies. The flow is intended to complement and enhance an existing verification methodology in an evolutionary manner.

    VERIFICATION PLANNING

The success of any project depends, first, on a well-developed plan, followed by efficient execution of the plan. The AVF starts with a planning phase that provides a roadmap for verifying an SoC design. It documents the goals for verification, the methods and technologies that will be used to achieve those goals, and the tests that need to be developed. The plan is used throughout the flow to track progress and ensure that the goals are being achieved. It chooses which of the flow phases will be followed and describes the details of each step in each phase. Finally, the plan defines the functional verification completion requirements and tapeout criteria.

The steps associated with the creation of a verification plan[3] are as follows:

- Set verification goals: Describes measurable functional verification goals.
- Develop the feature list: Describes the system features to be verified.
- Define the verification environment: Defines the environment to be used to verify the system features.
- Verification planning: Specifies requirements, lists functionality to test, and lists the tests that must be created. The flow is developed to ensure that the testbench created is robust enough to handle all the verification challenges that the team will encounter.
- Creating the transaction-level functional virtual prototype (TL FVP): The FVP is an executable specification of the system that can be utilized during all phases of the flow and by the software group to start software development.


- Define assertions at all levels: Defines the assertions that will assist the designer by finding structural bugs earlier in the design cycle. Assertions are also useful for debugging by associating a problem with a specific area of RTL code.
- Create the list of tests: Defines tests to be used to verify the system features.
- Develop the coverage and completion requirements: Describes coverage metrics, goals, and functional verification completion requirements needed before tapeout.
- Determine resource requirements: Defines resources needed to accomplish all the goals. Resources include the engineers, machines, software, and tools, and their quantities, needed for verification.

    CREATING THE TRANSACTION-LEVEL FVP

After the planning process, the next stage of the flow is to develop an FVP. The FVP is an executable specification of the design. It can be used to evaluate the system architecture, make performance tradeoffs, and develop low-level software, and it ultimately defines the final SoC functionality and partitioning. The FVP contains transaction-level models (TLMs), system-level tests, application checkers, and performance monitors. It is typically written in C, C++, or SystemC. The SystemC library was created to make system modeling easier.
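As an illustration, a register-accurate TLM need not be complicated. The sketch below shows the shape such a model might take in SystemC; the module and register names are hypothetical, not code from the AVF reference design, and a SystemC installation is assumed:

#include <systemc.h>

// Hypothetical register-accurate TLM of a DMA-style block. Callers invoke
// reg_write()/reg_read() directly, so no cycle-by-cycle bus activity is
// simulated -- one reason TLMs run so much faster than RTL.
SC_MODULE(dma_tlm) {
    sc_uint<32> src_addr, dst_addr, length;   // architected register state

    void reg_write(unsigned addr, sc_uint<32> data) {
        switch (addr) {
        case 0x0: src_addr = data; break;
        case 0x4: dst_addr = data; break;
        case 0x8: length   = data; break;  // a real model would start a transfer here
        }
    }

    sc_uint<32> reg_read(unsigned addr) {
        switch (addr) {
        case 0x0: return src_addr;
        case 0x4: return dst_addr;
        default:  return length;
        }
    }

    SC_CTOR(dma_tlm) : src_addr(0), dst_addr(0), length(0) {}
};

Because the model is just register state plus method calls, it can double as the reference model in an RTL testbench later, which is exactly the reuse the AVF encourages.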

In creating the FVP, the verification team begins creating the testbench and system-level tests early in the design cycle. Both software and hardware teams use the FVP during development. The software team can begin writing and executing code against the FVP, maturing the software side of the system earlier in the design cycle.

The steps associated with the creation of an FVP are as follows:

- Create digital and analog TLMs: TLMs are the design blocks in abstracted form. TLMs run faster and enable system testing earlier in the design cycle.
- Create system tests: Develops system directed and constrained random tests.
- Create application checker: The application checker verifies the high-level functionality of the system.
- Develop and integrate software: Boot code, diagnostics, and hardware drivers can be developed and tested using the FVP.
- Create test harness: The test harness connects everything and enables you to run a simulation with the correct configuration.
- Compile and link: Defines the compile and link process where testing control is defined.
- Run tests and software: Defines the run process and how to control the tests to run, and their setup, scenarios, and configurations, from the command line.
- Debug the FVP: Defines the process for verifying your FVP environment.
- Synchronize with the verification plan: Monitors progress against the verification plan by promoting test, coverage, bug, and schedule reviews.

    BLOCK-LEVEL VERIFICATION

Either after or concurrent with the creation of the FVP, one or more design blocks must be verified. Because designs are complex, SoC design practices require that functionality be split up into blocks with well-defined interfaces. A design block can be any logical or functional partition in the design. SoC design blocks are typically between 30K and 100K gates in size and are partitioned with well-defined or standard interfaces on the exterior.

    The steps associated with block-level verification are

    as follows:

- Perform static analysis on the block: Describes the process for checking the RTL with a lint checker and assertion density evaluator before simulation is run.
- Accumulate testbench: Describes the components needed to verify the block design, including transactors, interface monitors, response checkers, stimulus generators, and scoreboards. Some of these components need to be pre-planned in order to maximize acceleration performance later.
- Create or reuse tests: Defines the directed and constrained random testing process.
- Add functional and interface assertions: Defines the process of adding PSL structural checks, pre-existing protocol monitors, and checkers that check common design elements. All provide coverage information as well.
- Create test harness: Defines the test harness, which connects everything and enables you to run a simulation with the correct configuration.
- Compile and link: Defines the compile and link process where testing control is defined.
- Run tests and software: Defines the run process and how to control the tests to run, and their setup, scenarios, and configurations, from the command line.
- Debug the block: Defines the process for verifying your block-level design.
- Check code and functional coverage of the block: Describes the process for performing code and functional coverage analysis.
- Accelerate block to smooth the path for system verification: Outlines the process for getting the block ready for hardware acceleration. It confirms that the RTL can be synthesized into the hardware accelerator and will behave correctly when accelerated.


- Synchronize with the verification plan: Monitors progress against the verification plan by promoting test, coverage, bug, and schedule reviews.

    INTEGRATING A BLOCK INTO THE FVP

    Once a block has been verified in its own environment,

    it is ready to be placed in the system environment and

    tested against the FVP. Integrating a design block into

    the FVP is the next step in the flow.

The steps associated with the integration of a block into the FVP are as follows:

- Modify test harness: Adds the RTL block to the FVP.
- Compile and link: Defines the compile and link process where testing control must be defined.
- Run the FVP tests and software: Defines the run process and how to control the tests to run, and their setup, scenarios, and configurations, from the command line.
- Debug the block in the FVP environment: Defines the process for verifying your block-level design in the FVP environment.
- Synchronize with the verification plan: Monitors progress against the verification plan by promoting test, coverage, bug, and schedule reviews.

    FULL-CHIP OR SYSTEM VERIFICATION

The verification of a full chip or a system is an iterative process that focuses on the interconnection of blocks in the system. For example, you can start the process by connecting the processor to a couple of key blocks that transfer data. By doing this, these key blocks can be configured and basic data flow across them can be tested. Next, attach another block and test its processor connection and its connection to one of the blocks in the testbench. This process continues until all blocks are integrated. The testbench evolves through this process. Throughout the debug process you are modifying and adding components, assertions, and tests to the testbench to further verify the full chip or system.

The steps listed below outline the flow for a single iteration through the verification of the full chip or system. As you incrementally integrate blocks in the system, steps (such as accumulate testbench, create more tests, add more assertions, and add more coverage checks) are revisited to increase the functional coverage of the testbench.

- Accumulate testbench: Describes the components needed to verify the block design, including transactors, interface monitors, response checkers, stimulus generators, and scoreboards. Some of these components need to be pre-planned to enable hardware acceleration at a later stage.
- Create more tests: Describes the process of testing functionality, interconnection of blocks, and corner cases using constrained random techniques.
- Create more assertions: Defines the process of adding PSL structural checks and utilizing pre-made checkers that check functions and interfaces.
- Add more coverage checks: Describes the process of adding functional coverage checkers (in some cases automatically obtained with assertion checkers).
- Integrate software: HW/SW co-simulation is an important part of verifying the full chip or system. There is increasing competitive pressure to develop and integrate software (such as boot code, diagnostics, and hardware drivers) earlier in the cycle.
- Implement transaction-based acceleration: Simulation acceleration combines a software simulator running on a workstation with a hardware accelerator. During simulation acceleration, the design under verification (DUV) is synthesized and realized in the accelerator while any non-synthesizable objects (testbenches, verification components, tests) are run on the software simulator.
- Create test harness: Defines the test harness, which connects everything and enables you to run a simulation with the correct configuration.
- Compile and link: Defines the compile and link process for both simulation and acceleration, and defines testing control.
- Run full regression suite and software: Defines the run process for both simulation and acceleration, and shows how to control the tests to run, and their setup, scenarios, and configurations, from the command line.
- Debug the chip or system: Defines the process for verifying your full-chip design.
- Check coverage: Describes the process for performing code and functional coverage analysis.
- Synchronize with the verification plan: Monitors progress against the verification plan by promoting test, coverage, bug, and schedule reviews.
- Perform completion analysis: Describes the process for reviewing the project and determining if the design is ready for tapeout.

    IN-CIRCUIT VERIFICATION

The next logical step for systems that must be attached to hardware is in-circuit emulation. The AVF does not go into any more detail on the in-circuit verification step, because this step is a flow within itself.


    REFERENCE DESIGN ENVIRONMENTS

    To illustrate the AVF and highlight the advanced

    verification methodologies it promotes, we took an

    existing design and created top-down and bottom-up

    verification environments. The design project will

    be referred to as Eagle. The Eagle design is a

    networked video processing chip that can be used in

    a networked picture frame, for example. It is an SoC

design that consists of multiple processors, hardware IP blocks, and firmware.

HIGHLIGHTS

- Multiple-processor implementation
- 2GB (total) external addressing for data, code, and video memory
- External SRAM and DRAM interface
- 56-bit encrypt/decrypt engine (DES)
- JPEG decompression engine
- Standard buses

FEATURES

- Two 32-bit LEON RISC processors: a controller and a DSP
- Decryption engine
- VGA controller
- AMBA AHB/APB-compliant internal bus
- 2-channel DMA controller with dual 10/100 Ethernet MACs
- Firmware for system control and JPEG decompression

EXTERNAL INTERFACES

- Dual 10/100 Ethernet ports
- VGA/LCD port
- Standard serial interface

    The internal block structure of the Eagle design is

    shown in Figure 2.

Encrypted and compressed video streams are passed to the chip over two Ethernet channels to corresponding Media Access Control (MAC) blocks. Inside each MAC block, the Ethernet frame is decomposed and the payload (video stream) is placed in a FIFO. The control processor runs firmware, hardware drivers, and applications. Software running on the processor controls the processing of video frames. The DMA block is programmed and transfers the video frame to the DES block's FIFO. The DES block decrypts 64 bits of video at a time using a 56-bit key. The DSP then decompresses the video image and renders it for VGA display. Following this transformation, the video image is streamed out the VGA port in rows to the monitor for display.

    CREATION OF THE TRANSACTION-LEVEL FVP

    For the Eagle design, the transaction-level FVP was

    created using the same block-level view as the planned

    RTL. Each block was either represented as a

    transaction-level model (TLM) in SystemC, or as an

instruction set simulator (ISS) for the processing units. We created the TLMs to be functionally and register

    accurate. Because they are abstracted versions of the

    real design, they simulate very fast. We were able to

    develop software using this environment as well. There

    are two software routines that execute within Eagle.

    The first is the master controller firmware, which

    performs the main functions of controlling data

    transfers and housekeeping functions. The second

    program performs JPEG decompression. These

    programs are written in C, compiled for the

    appropriate machine architecture, and transformed

    into memory load files.

The development of the transaction-level FVP allowed early development of the firmware. In addition, the

    TLMs were created with the expectation that they

    would be utilized as reference models in the RTL

    testbenches. Lastly, the verification components in this

    environment were reused later for the full-chip

    testbench. The FVP environment allowed us to create

    and debug parts of our testbench environment far

    earlier than we would have normally been able to.

    The testbench created for the transaction-level FVP is

    shown in Figure 3. Software and video images are

    loaded into the simulation. The DES, DSP, AHB, DMA,

    VGA, and memory are all TLMs.

[Figure 2: Eagle block-level structure. A controller processor (VHDL), MAC 1 and MAC 2 (Verilog) each with 8KB FIFO memory, a DMA block (Verilog), a DES block (Verilog) with two FIFOs, a DSP (VHDL) with memory, and a VGA controller (VHDL) with video RAM on an AHB/Wishbone bridge, all connected by an AMBA AHB bus.]


The testbench works in conjunction with the software. Testbench components were created such that they could be reused in the RTL simulation. The segmentor component loads a JPEG image from a file. It then segments the image into chunks of a specific size and sends them to the system test. The system test packs them into an Ethernet frame and sends the Ethernet transaction to the Ethernet/DMA TLM. Software then controls the processing of video frames. There are two points of reference for the testbench to check that the system is working correctly. The first opportunity is after decryption occurs and the JPEG image is sent back out the Ethernet channels. This video image is compared against the expected JPEG image for correctness. The second opportunity is when the image is sent out of the VGA interface in RGB format. This RGB data is stored and compared against an expected image for correctness. The simulation fails if either of these checks fails.
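The segmentor's job is simple enough to sketch in a few lines of C++. The function name and chunking policy below are ours, purely for illustration, not the actual Eagle component:

#include <algorithm>
#include <cstddef>
#include <fstream>
#include <iterator>
#include <vector>

// Illustrative segmentor: load an image file and cut it into fixed-size
// payload chunks for the system test to bundle into Ethernet frames.
std::vector<std::vector<char>> segment_image(const char* path,
                                             std::size_t chunk_size) {
    std::ifstream in(path, std::ios::binary);
    std::vector<char> image((std::istreambuf_iterator<char>(in)),
                            std::istreambuf_iterator<char>());
    std::vector<std::vector<char>> chunks;
    for (std::size_t off = 0; off < image.size(); off += chunk_size) {
        std::size_t n = std::min(chunk_size, image.size() - off);
        chunks.emplace_back(image.begin() + off, image.begin() + off + n);
    }
    return chunks;
}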

    BLOCK-LEVEL VERIFICATION

In the Eagle design, the DES block was newly created and therefore required a block-level verification environment. Figure 4 illustrates the block-level testbench for the DES block. Encrypted images (with an encryption key) are fed into the DES block, decrypted, and then sent back to memory. The testbench consists of directed and constrained random tests; an AHB master transactor and assertion monitor; a response checker that utilizes the TLM from the FVP; and a segmentor that reads JPEG images into the testbench and segments them into payloads, which are bundled into Ethernet packets by the tests. The DES block has embedded assertions that check the design as it simulates and accumulate functional coverage statistics. This powerful testbench easily verified the DES block while reusing parts of the FVP and creating components and tests that can themselves be reused in the full-chip testbench.

[Figure 4: DES block-level testbench. Directed tests (Test 1, Test 2) and a random test, driven from a scenario file, send traffic through an AHB master transactor to the DES RTL while an AHB assertion monitor watches the bus; a segmentor reads a JPEG image file, and a response checker compares results against the DES TLM. The legend distinguishes files, verification IP, TLMs, transaction channels, RTL, and signals.]

[Figure 5: Integration of the DES block into the transaction-level FVP. The FVP testbench of Figure 3 (system tests, segmentor, Ethernet/DMA, DSP, VGA, and memory TLMs, plus the ISS on the AHB) with the DES TLM replaced by the DES RTL behind an AHB master transactor; code images are memory-loaded, and compare/pass checks run on the returned JPEG image and the VGA video image against expected images.]

INTEGRATING THE BLOCK INTO THE TRANSACTION-LEVEL FVP

Once the FVP is created, RTL blocks that have been verified in a standalone testbench can be integrated with the system and run against the software. This is a huge advantage of this methodology. If you are developing multiple blocks, you no longer have to wait for them all to be developed before you integrate them with the system environment. Each block can be integrated on its own and run with system tests and software. Figure 5 illustrates this phase.

[Figure 3: Eagle transaction-level FVP. System tests and a segmentor feed JPEG images over MII into Ethernet/DMA, DSP, DES, VGA, and memory TLMs plus an ISS on the AHB; code images are memory-loaded, and compare/pass checks run on the returned JPEG image and the VGA video image against expected images.]


    In the Eagle design, we integrated the DES RTL into a

    transaction-level wrapper. This required the use of the

    AHB master transactor to translate the transactions

    being sent by the AHB TLM into the signal-level

    protocol on the DES block. The normal FVP system-

    level tests and software were then executed without

    modification to further test the DES block.
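Conceptually, a master transactor of this kind is a small piece of code that consumes a transaction and produces the corresponding pin activity. The sketch below gives the flavor in SystemC; the signal names and simplistic two-phase timing are ours, not the Cadence verification IP, and this does not implement the real AHB protocol:

#include <systemc.h>

// Hypothetical transaction type and master transactor. do_write() must be
// called from an SC_THREAD so that wait() is legal.
struct bus_txn { sc_uint<32> addr, data; };

SC_MODULE(simple_master_transactor) {
    sc_in<bool>         clk;
    sc_out<sc_uint<32>> haddr, hwdata;
    sc_out<bool>        hwrite;

    // Transaction in, pin wiggles out: the essence of a transactor.
    void do_write(const bus_txn& t) {
        wait(clk.posedge_event());   // address phase
        haddr.write(t.addr);
        hwrite.write(true);
        wait(clk.posedge_event());   // data phase
        hwdata.write(t.data);
        hwrite.write(false);
    }

    SC_CTOR(simple_master_transactor) {}
};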

VERIFICATION OF THE FULL CHIP

By the time we approached the verification of the full-chip Eagle design, we had already created much of the verification environment. The transaction-level FVP, segmentor, system tests, and response checker from the other testbench environments were reused (unchanged) in the full-chip testbench. The full-chip testbench is illustrated in Figure 6.

The same segmentor and system tests are reused from the FVP environment. These have already been debugged and work without modification in this environment. The system tests pass JPEG-packed Ethernet frames to the MII master transactor. The MII and VGA transactors had to be developed for this environment. The transactor passes the Ethernet frames to the MAC blocks via the signal protocol. Just like in the transaction-level FVP environment, the same two points of reference exist for the testbench to check that the system is working correctly. We used a response checker in both cases to compare actual results against expected results. We created a generic response checker that could be used to check different transactions. In this picture, there are two response checkers. One is being used to compare Ethernet frames from the design against expected Ethernet frames from the transaction-level FVP. The other is checking RGB data from the design against expected RGB data also arriving from the FVP. Both response checkers are the same piece of code. We utilized the template construct in C++ to create one component that can be instantiated and used to compare different transactions arriving from different parts of the design.
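A minimal sketch of that template idea follows. This is illustrative only; the real checker is richer and would also report what mismatched:

#include <deque>
#include <iostream>

// One generic checker, instantiated once per transaction type. Expected
// items come from the FVP reference path; actual items come from the
// design. T only needs operator==.
template <typename T>
class response_checker {
    std::deque<T> expected_;
public:
    void push_expected(const T& t) { expected_.push_back(t); }

    bool check_actual(const T& t) {
        if (expected_.empty() || !(expected_.front() == t)) {
            std::cerr << "response_checker: mismatch\n";
            return false;
        }
        expected_.pop_front();
        return true;
    }
};

In a testbench like Figure 6, one instance would be parameterized with an Ethernet-frame type and the other with an RGB-pixel type.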

    Our simulations at the full-chip level include the

    software. We were able to integrate the ISS into

    this environment by adding a master transactor at

    the interface to capture the reads and writes from

    the ISS and translate these into AHB reads and writes

    on the signals.

SUMMARY

The practical application of the Advanced Verification Flow varies depending on the system being developed and the team doing the implementation. The AVF was developed in a modular way such that design and verification teams can adopt the methodology incrementally. If you have been struggling with response checking and do not create TLMs today, this might be one starting point for incorporating the AVF. But equally, there are two dozen other ways to start using the AVF incrementally in your own environment.

Today's larger chips and shorter design cycles have been the impetus for Cadence to step up and provide technologies that remove the verification bottleneck. The Incisive platform provides key verification technologies such as the IUS simulator kernel, HAL, SystemC, SystemVerilog, PSL, the Incisive Assertion Library, the verification IP library, functional coverage, code coverage, and the SimVision debug environment. This powerful platform, in conjunction with the Advanced Verification Flow, makes the Cadence methodology-based solution very effective in solving a company's toughest verification challenges.

[Figure 6: Full-chip testbench. System tests and a segmentor drive JPEG images through MII master transactors into the RTL (processor, Ethernet/DMA, DES, DSP, and VGA on the AHB); MII and VGA monitors feed MII and VGA response checkers, which compare design output against the FVP; code images are memory-loaded. The legend distinguishes files, verification IP, TLMs, transaction channels, RTL, and signals.]


    The Cadence Methodology Engineering team has

    created a reference design to demonstrate the AVF

    and industry-wide leading-edge methodologies. This

    reference environment was created using a wide

    variety of techniques and technologies and is being

    used by customers as they adopt new techniques or

    technologies from the Incisive platform. Cadence

    offers the AVF on a CD that contains the following:

- AVF tutorial
- Documentation
  - AVF description
  - AVF user guide
  - AVF detailed reference manual
- Reference design and examples
  - Verilog/VHDL RTL source code with assertions
  - SystemC testbench source code
  - Verification IP: AHB master, slave, and monitor; MII master and monitor; VGA monitor transactors; DES and VGA response checkers
  - Constrained random and directed tests
  - Instruction set simulator
  - SoC development project directory structure
  - Scripts and Makefiles

The Advanced Verification Flow provides a roadmap for verification, from project conception to system verification, and it can be adopted incrementally. It enables teams to start verification of their designs much earlier and enhances design checkpoints to isolate bugs more quickly. This methodology promotes design reuse across the verification environments, thus requiring less code. The AVF maximizes the verification effectiveness of your environment for each cycle run, providing more bang for your buck. It helps teams successfully develop and utilize advanced verification methodologies and associated techniques by example and with documentation. You can utilize the code, scripts, and infrastructure of the reference design to jump start your verification work.

    REFERENCES

[1] It's About Time: Requirements for the Functional Verification of Nanometer-scale ICs; Lavi Lev, Rahul Razdan, and Christopher Tice; Cadence Design Systems white paper.

[2] Professional Verification: A Guide to Advanced Functional Verification; Paul Wilcox; Boston: Kluwer Academic Publishers (KAP), 2004; ISBN 1-4020-7875-7.

[3] Principles of Functional Verification; Andreas S. Meyer; Elsevier Science, 2004; ISBN 0-7506-7617-5.

    RESOURCES

System-on-Chip Verification Methodology and Techniques; Prakash Rashinkar, Peter Paterson, and Leena Singh; Kluwer Academic Publishers (KAP), 2001; ISBN 0-7923-7279-4.

Advanced Verification Techniques; Leena Singh, Leonard Drucker, and Neyaz Khan; Kluwer Academic Publishers (KAP), 2004; ISBN 1-4020-7672-X.

Writing Testbenches, Second Edition; Janick Bergeron; Kluwer Academic Publishers (KAP), 2003; ISBN 1-4020-7401-8.


PROCESSOR-BASED EMULATION

BY RAY TURNER, CADENCE DESIGN SYSTEMS

03

FPGA-based emulation is widely understood by engineers because they are accustomed to designing with FPGAs. Processor-based emulators are much less familiar, and ample misinformation about them abounds. This article attempts to remove the mystery surrounding processor-based emulation and to explain how design constructs are mapped into it.

PROCESSOR-BASED EMULATOR ARCHITECTURE

In the early 1990s, IBM pioneered a different emulation technology that was an offshoot of earlier work it had done in hardware-based simulation engines. To understand how a processor-based emulator works, it is useful to briefly review how a logic simulator works. Recall that a computer's arithmetic-logic unit (ALU) can perform basic Boolean operations on variables (e.g., AND, OR, NOT) and that a language construct such as always @(posedge Clock) Q = D forms the basis of a flip-flop. In the case of gates (and transparent latches), simulation order is important. Signals race through a gate chain schematically left to right, so to speak, or top to bottom in RTL source code. Flip-flops (registers) break up the gate chain for ordering purposes.

[Figure 1: Logic simulation. The CPU does Boolean math on signals and registers: the instruction sequence Load E; Invert; AND D; AND C; OR B; Store A computes A = B + (C & D & NOT E) in 6 CPU clock cycles.]

One type of simulator, a levelized compiled logic simulator, performs the Boolean equations one at a time in the correct order. (Time delays are not relevant for functional logic simulation.) If two ALUs were available, you can imagine breaking the design up into two independent logic chains and assigning each chain to an ALU, thus parallelizing the process and reducing the time required, perhaps to one half.

A processor-based emulator has from tens of thousands to hundreds of thousands of ALUs, which are efficiently scheduled to perform all the Boolean equations in the design in the correct sequence. The following series of drawings illustrates this process.

Step 1: Reduce logic to four-input Boolean functions

[Figure: the original circuit. Primary inputs A through N feed combinational gates producing S, M, and P; flip-flops clocked by Clock register C (from A) and R (from P).]

The set of Boolean equations after reducing the logic to four-input functions is:

IF (Clock is rising) C = A
S = C & B & E & F
M = NOT (G + H + J + S)
P = NOT (N + NOT (M & K & L))
IF (Clock is rising) R = P

Additionally, the following sequencing constraint set applies:

- The flip-flops must be evaluated first
- S must be calculated before M
- M must be calculated before P
- Primary inputs B, E, and F must be sampled before S is calculated
- Primary inputs G, H, and J must be sampled before M is calculated
- Primary inputs K, L, and N must be sampled before P is calculated

Note: primary input A can be sampled at any time after the flip-flops.
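In software terms, a levelized compiled simulator would turn this example into straight-line code that honors those constraints: flip-flops first, then S before M before P. A small C++ rendering of the equations above (our own illustration, not emulator code):

// One clock cycle of the example circuit, evaluated in a legal order:
// flip-flops first, then S before M before P.
struct Inputs { bool A, B, E, F, G, H, J, K, L, N; };
struct State  { bool C = false, P = false, R = false; };

void eval_cycle(State& s, const Inputs& in) {
    s.C = in.A;                             // flip-flop: C = A at the edge
    s.R = s.P;                              // flip-flop: R = P at the edge
    bool S = s.C && in.B && in.E && in.F;   // S depends on the C flop
    bool M = !(in.G || in.H || in.J || S);  // M depends on S
    s.P = !(in.N || !(M && in.K && in.L));  // P depends on M; feeds the R flop
}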


Step 2: Schedule logic operations among processors and time steps

Timestep   Processor 1    Processor 2    Processor 3    Processor 4    Processor 5
    1      Calculate C    Sample B       Sample G                      Calculate R
    2                     Sample E       Sample H                      Sample N
    3                     Sample F       Sample J
    4                                                                  Sample K
    5                     Calculate S                                  Sample L
    6                                    Calculate M
    7                                                                  Receive M
    8
    9
   10      Sample A                                                    Calculate P

One possible scheduling is shown above. (This is done for illustrative purposes and does not necessarily reflect the most efficient scheduling.)

Step 3: Result of scheduling logic

[Figure: the example circuit annotated with its schedule: C on Processor 1 at step 1; R on Processor 5 at step 1; S on Processor 2 at step 5; M on Processor 3 at step 6; P on Processor 5 at step 10.]

In addition to having the Boolean equations represent the design logic, the emulator must also efficiently implement memories and support physical IP bonded-out cores. For simulation acceleration, it must also have a fast, low-latency channel to connect the emulator to a simulator on the workstation, since behavioral code cannot be synthesized into gates in the emulator. For in-circuit operation, the emulator needs to support connections to a target system and test equipment. Finally, to provide visibility into the design for debugging, the emulator contains a logic analyzer (which will be described in a future article).

[Figure 2: Processor-based emulator architecture. A workstation running the Incisive simulator (C/C++, SystemC, HDL), the high-speed RTL compiler for Verilog and VHDL, behavioral code, FullVision, and the runtime control and debug environment, connected through a high-speed co-simulation data pump and a high-speed in-circuit emulation interface to the Palladium system's custom processor array, programmable memory array, embedded logic analyzer, and IP core hosting.]

SPECIFIC TECHNOLOGY IN THE PALLADIUM SYSTEM

Each custom ASIC in a Palladium system implements a number of Boolean processors (see Table 1 below). These ASICs are assembled with additional memory dies onto a ceramic multi-chip module (MCM). Each Palladium emulation board contains a number of interconnected MCMs. A Palladium II system can have up to 16 emulation boards for a total of 884,736 processors. The memory on each MCM can be flexibly shared between modeling design memory and logic analyzer trace memory.

                                      Palladium II     Palladium
Processors per ASIC                   768              256
ASICs per MCM                         2                1
Processors per board                  55,296           16,640
Processors per system (16 boards)     884,736          266,240
Maximum gates per board               16 million       8 million
Maximum gates per system              256 million      128 million
Memory chips per MCM                  8                3
GBytes per system                     74               68
Domains per board (# users/bd.)       9                8
Processor cycle time                  5.3ns            7.5ns
Typical emulation speed               600KHz-1.5MHz    300-750KHz
Maximum I/O pins per system           61,440           4,224
ASIC geometry (Leff)                  .07              .12
Transistors per MCM                   1.4 billion      100 million

Table 1: Comparison of the original Palladium system to Palladium II

[Figure 3: Palladium II processor module]

The custom silicon for Palladium II systems is designed in-house and fabricated using IBM's 70-nanometer (Leff), 8-layer copper process (see Figure 3).


During each time step, each processor is capable of performing any four-input logic function, using as inputs the results of any prior calculation of any of the processors and any design inputs or memory contents. Processors are physically implemented in clusters with rapid communication within those clusters. The compiler optimizes processor scheduling to maximize speed and capacity.

DESIGN COMPILATION

Compilation of an RTL design is completely automated through the following sequence:

1. Map RTL code into primitive cells: gates, registers, etc.
2. Synthesize memories
3. Flatten the hierarchy of the design
4. Reduce Boolean logic (gates) into four-input functions
5. Break asynchronous loops in the design by inserting a register at an optimal place
6. Assign external connections for the target system and any hard IP
7. Set up any instrumentation logic required (e.g., logic analyzer visioning)
8. Assign all design inputs and outputs to processors in a uniform way
9. Assign each cell in the design to a processor; priority is given to assigning cells with common inputs and/or outputs to the same processor or cluster, and to assigning an equal number of cells to each processor
10. Schedule each processor's activity into sequential time steps; the goal is to minimize the total number of time steps
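To make step 4 concrete: any Boolean function of four inputs can be stored as a 16-bit truth table and evaluated by indexing it with the four input bits, which is essentially the job each Boolean processor performs on every time step. The sketch below is an illustrative encoding of ours, not the Palladium instruction format:

#include <cstdint>

// A 4-input Boolean function fits in a 16-bit truth table: bit i of the
// table holds the output for input combination i. Evaluating a cell is
// then just an indexed bit test.
inline bool eval_lut4(std::uint16_t table, bool a, bool b, bool c, bool d) {
    unsigned index = (a ? 8u : 0u) | (b ? 4u : 0u) | (c ? 2u : 0u) | (d ? 1u : 0u);
    return (table >> index) & 1u;
}

// Example: the 4-input AND (S = C & B & E & F) is the table 0x8000,
// whose only set bit is at index 15 (all inputs true).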

The compiler also has to take into account simulation acceleration connections, tri-state bus modeling, memory modeling, non-uniform processor connectivity, logic analyzer probing and triggering, and other factors. But the Palladium compiler doesn't have to cope with the highly variable FPGA internal timing found in FPGA emulators. For this reason, processor-based emulation compiles much faster and with fewer resources. The compiler maintains all originally designated RT-level net names for use in debugging, in spite of the Boolean optimization it performs. This allows users to debug with the signal names with which they are familiar. (Clock handling in Palladium technology was presented in the last newsletter.)

TRI-STATE BUS MODELING

Tri-state buses are modeled with combinatorial logic. When none of the enables are on, the Palladium system gives the user a choice of pull-up, pull-down, or retain-state. In the latter case, a latch is inserted into the design to hold the state of the bus when no drivers are enabled. If multiple enables are on, then for pull-up and retain-state a logic 0 will win, and for pull-down a logic 1 will win. (Note: this is a good place to use assertions.)
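Those resolution rules are easy to capture in software. The following sketch is our own illustration of the semantics just described, not Cadence's actual model:

#include <vector>

enum class BusMode { PullUp, PullDown, RetainState };
struct Driver { bool enable; bool value; };

// Combinational model of the tri-state resolution described above:
// with no driver enabled, the bus pulls up, pulls down, or retains its
// last value via a latch; with multiple drivers enabled, 0 wins for
// pull-up and retain-state, and 1 wins for pull-down.
bool resolve_bus(const std::vector<Driver>& drivers, BusMode mode,
                 bool& latched) {
    bool any_enabled = false;
    bool folded = (mode == BusMode::PullDown) ? false : true;  // OR / AND identity
    for (const Driver& d : drivers) {
        if (!d.enable) continue;
        any_enabled = true;
        if (mode == BusMode::PullDown) folded = folded || d.value;  // 1 wins
        else                           folded = folded && d.value;  // 0 wins
    }
    if (!any_enabled) {
        if (mode == BusMode::PullUp)   return true;
        if (mode == BusMode::PullDown) return false;
        return latched;                 // retain-state: the latch holds the bus
    }
    latched = folded;                   // the latch tracks the driven value
    return folded;
}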

ASYNCHRONOUS LOOP BREAKING

Since the Palladium system does not model gate-level timing (delays are 0), asynchronous loops are broken automatically by a delay flip-flop during compilation. The Palladium compiler will automatically break asynchronous loops without user intervention. However, by allowing the user to specify where loop breaks should occur, Palladium performance may be enhanced, since the performance of the Palladium system is related to the length of long combinatorial paths. By inserting delay elements to break false paths or multiclock-cycle paths, performance can be improved if these paths are the critical paths of the design. The breakNet and breakPin commands provide a way of breaking loops or paths without editing design sources.

LONG COMBINATORIAL PATHS

Palladium processors operate from an instruction stack of 160 words (Palladium II) or 256 words (Palladium). These are the time steps into which the design calculations are sequenced. Occasionally a design may include a very long combinatorial path that cannot be scheduled into 160/256 sequential steps. This does not necessarily mean that the path has more than 160/256 gates, as scheduling must take many time-sequence constraints into consideration. In such a case, the scheduler will complete the path by scheduling the remaining Boolean operations in a second pass using the unused processor time steps. A design with a long combinatorial path can also be the result of trying to squeeze too many gates into an emulator, but, as a benefit, this provides a tradeoff between emulation speed and capacity.

SUMMARY

Processor-based emulation has proven itself superior to FPGA-based emulation for every design goal or challenge: emulation speed, compilation speed, debug productivity, number of users, maximum capacity, and amount of memory (see Table 2). Compile speed is ten times faster on one-tenth the number of workstations: minutes vs. hours. Emulation speed is faster on nearly all designs: an average of 44% faster overall. The variety of debug vision choices provides much faster signal trace data display times: six times faster on large designs. While FPGA-based emulators have been getting slower in emulation speed from generation to generation, processor-based emulators have been getting faster with each new generation.


Processor-based emulators have demonstrated that they are equally capable of handling a large number of asynchronous clocks in a design without suffering a performance impact. The ability of processor-based emulators to instantly probe new signals and change trigger conditions without requiring a slow FPGA compile greatly improves the interactivity of debugging. Processor-based emulators offer very precise control over input/output timing, which may be crucial for target environment interfacing. The Palladium system's unique ability to support JTAG ports for software debuggers, while using FullVision for the hardware, delivers the highest speed for software verification. The design experiments facility of processor-based emulators increases debugging productivity. Since users spend most of their design time debugging, processor-based emulators can deliver more design turns per day than FPGA-based emulators. These design benefits translate into shorter time to market for new products and higher product revenue.

                                  Processor-based emulators              FPGA-based emulators
Compile speed                     10-30 million gates/hour on one CPU    Takes many hours on many CPUs
Emulation speed                   600KHz-1.5MHz                          150-600KHz
Predictable, reliable compiles    Excellent                              Fair
Maximum capacity                  256 million gates                      30-120 million gates
Ability to handle async. designs  Good                                   Good
Partitioning problems             Found early in process                 Not found until individual FPGA compiles fail
Timing issues in emulation model  Avoided completely                     Potential for setup and hold issues and subtle, hard-to-find timing problems
Ability to make small changes     Can download incremental changes       Must download entire FPGA
quickly                           to the processors

Table 2: Comparison of FPGA-based and processor-based emulators


THE INCISIVE ASSERTION LIBRARY

BY RAY SALEMI AND DAVE ALLEN, CADENCE DESIGN SYSTEMS

04

Assertion-based verification (ABV) is growing rapidly as engineers experience its many benefits. The value of ABV is immediately apparent to designers who easily find bugs that would have taken days to solve, and to verification engineers who can clearly see whether they have covered their test plans.

    ABV functions on the premise that defects are easier

    to fix when they are identified as soon after they

    happen as possible. For example, you are better off

    catching a FIFO overflow just as it happens, rather

    than trying to debug it once the wrong data has

    propagated to other blocks. Using assertions,

    designers can capture the intention of their designs

    (e.g., FIFOs should not overflow) and receive error

    messages if the design violates these intentions.

While many engineers recognize the value of assertions, they often run into the chicken-or-the-egg problem of schedules. They cannot afford the time to learn assertion languages like PSL, so they don't reap the time-saving benefits that would occur later. Cadence has developed the Incisive Assertion Library to help solve this problem.

The Incisive Assertion Library (IAL) is a library of 50 Verilog/PSL modules that implement checks for common design structures. For example, there is an IAL element that will catch the FIFO overflow mentioned above. The IAL helps engineers in two ways:

    1. Engineers can use the IAL modules as is

    and instantiate them into their designs. These

    engineers will immediately reap the benefits

    of assertions without needing to spend time

    learning PSL.

    2. The IAL is delivered as Verilog source. Engineers

    can use these modules as templates to create new

    functions that were not shipped with the IAL.

    This article will explain how to instantiate Incisive

    Assertion Library modules into your design so you can

    quickly take advantage of assertion-based

    verification.


    USING THE INCISIVE ASSERTION LIBRARY

The Incisive Assertion Library ships with the Incisive Unified Simulator, beginning with version 5.4. You can find it in your Cadence installation under the tools/ial subdirectory. To use the IAL in your designs, you'll need to do the following:

1. Instantiate the IAL modules you want to use into your design.
2. Run simulation with the appropriate command-line switches to enable specific IAL features.

INSTANTIATING IAL MODULES

IAL modules are simply Verilog modules, and you can instantiate them in one of two ways:

1. You can instantiate the IAL modules inside the module you want to test. With this approach, the IAL assertions will travel with the component you're testing, which may be useful if the block is reused in other designs.
2. You can instantiate the IAL modules outside the object you want to test. This allows you to use IAL modules without modifying the original RTL. Figure 1 is an example of the second instantiation method.

fifo #(DATA_WIDTH, ADDR_WIDTH, FIFO_DEPTH) duv_fifo (
  .reset_n(resetn),
  .clk(clk),
  .wr_en(enque),
  .rd_en(deque),
  .wr_data(in_data),
  .rd_data(out_data)
);

ial_fifo #(FIFO_DEPTH, DATA_WIDTH) ial_c1 (
  .reset_n(resetn),
  .clk(clk),
  .rd_fifo(deque),
  .wr_fifo(enque),
  .data_in(in_data),
  .data_out(out_data)
);

Figure 1: Instantiating an IAL element to test a FIFO

Instantiating this FIFO IAL module automatically gives you two of the significant benefits of assertion-based verification: you get assertions that catch errors, and you get functional coverage. We'll look at these more closely in the following sections.


SIMULATING WITH IAL MODULES

Simulating with IAL modules is just like performing any other Verilog simulation. You simply compile the IAL modules along with the rest of your design, with the appropriate command-line switches.

If you invoke your simulation using the single-step ncverilog command, you'll need to add the following to your command line (here <install_dir> stands for the root of your Cadence installation):

-v <install_dir>/tools/ial/ial.v +assert

The -v option specifies that the ial.v library file should be compiled with the rest of your design. The +assert option enables the PSL assertions in the Incisive Assertion Library. So, your simulation command would look like this:

ncverilog -v <install_dir>/tools/ial/ial.v +assert

If you use the three-step ncvlog-ncelab-ncsim flow, you'll need to add the following to your ncvlog command line:

<install_dir>/tools/ial/ial.v -assert

Your ncvlog command line would look like this:

ncvlog <install_dir>/tools/ial/ial.v -assert

    The IAL modules have additional capabilities that can

    be enabled at compile time. In the first version, some

    modules have additional coverage information and

    can use a global reset signal. Table 1 shows the

    additional capabilities and the command-line

    switches needed to enable these capabilities:


Description                     ncvlog switch                            ncverilog switch
Enable coverage assertions      -define IAL_COVERAGE_ON                  +define+IAL_COVERAGE_ON
Define a global reset signal    -define IAL_GLOBAL_RESET=<signal_name>   +define+IAL_GLOBAL_RESET=<signal_name>

Table 1: Additional IAL module capabilities

For the global reset signal, <signal_name> is the full hierarchical name of the signal you want to use.
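For example, assuming a top-level module named top with a reset signal named resetn (hypothetical names), both capabilities could be enabled in one compile:

ncvlog <install_dir>/tools/ial/ial.v -assert -define IAL_COVERAGE_ON -define IAL_GLOBAL_RESET=top.resetn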

ASSERTIONS IN INCISIVE ASSERTION LIBRARY ELEMENTS

The FIFO IAL elements contain the assertions shown in Figure 2.

Figure 2: Assertions in the FIFO IAL elements

Let's look at the ial_fifo_overflow assertion as an

    example. This assertion catches an error when the

    FIFO receives a write even though it has set its full

    signal to true. The code for this assertion is as

    follows:

psl property ial_fifo_overflow =
  never (
    ((fifo_level == FIFO_DEPTH) && (`IAL_RESET_SIGNAL)) &&
    (({rd_fifo, wr_fifo}) == 2'b01)
  );
psl assert ial_fifo_overflow;

This assertion states that the FIFO should never receive a write once it has reached its full depth. If this situation occurs, the logic driving the FIFO is misusing it. In this case, the Incisive Unified Simulator will print the following error message:

ncsim: *E,ASRTST (time 4850 ns) Assertion top.duv.ial_c1.ial_fifo_overflow has failed

This is a good example of the firewall technique many designers use in their blocks. Designers place assertions at the input pins of their block to ensure that they are receiving correct input. This can help reduce debug time, since designers can quickly determine whether a defect is actually in their block or caused by incorrect input.
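The same never-style pattern extends naturally to other failure modes. As an illustrative sketch (not necessarily the exact IAL source), a matching underflow check, which catches a read while the FIFO is empty, could be written as:

psl property ial_fifo_underflow =
  never (
    ((fifo_level == 0) && (`IAL_RESET_SIGNAL)) &&
    (({rd_fifo, wr_fifo}) == 2'b10)
  );
psl assert ial_fifo_underflow;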


    FUNCTIONAL COVERAGE

The Incisive Assertion Library also automatically creates a test plan for the module being checked by generating functional coverage points. The functional

    coverage points for the FIFO example are represented

    in Figure 3.

Figure 3: FIFO functional coverage points

As you can see, you can now tell whether you have tested the FIFO at its corner cases. The IAL element automatically tracks how many times you've performed certain tasks, such as reading from the FIFO, writing to it, and filling it, so you automatically have a complete functional test plan for the FIFO.

The Incisive Assertion Library functional coverage points are supported by the Incisive platform and can be read and analyzed using Incisive functional verification tools.

INCISIVE ASSERTION LIBRARY ELEMENT LIST

The IAL contains 50 elements that introduce assertions to many common design elements. Figure 4 shows the list of elements scheduled for release in January 2005:

BASIC TYPES
ial_tristate
ial_one_hot
ial_one_cold
ial_zero_one_hot
ial_active_bits
ial_valid_bits
ial_constant
ial_next
ial_always
ial_never
ial_range
ial_bitcnt
ial_gray_code
ial_hamming_dist
ial_mutex
ial_timeout

INTERFACE TYPES
ial_follower
ial_leader
ial_seq
ial_width
ial_window
ial_stall
ial_together
ial_handshake

INTERFACE+CONTROL TYPES
ial_outstanding_id
ial_fifo
ial_mclk_mport_fifo
ial_multiplexer
ial_arbiter
ial_case_check
ial_req_gnt_ack
ial_var_time_seq
ial_eq_frame

DATAPATH TYPES
ial_arith_overflow
ial_crc
ial_decoder
ial_encoder
ial_decr
ial_incr
ial_max
ial_min
ial_delta
ial_serial2parallel
ial_parallel2serial
ial_even_parity
ial_odd_parity

CONTROL TYPES
ial_mem_access
ial_mport_mem_access
ial_stack
ial_eq_seq

    Figure 4: Comprehensive assertion library
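Most of the simpler elements drop in the same way as ial_fifo. As a purely illustrative sketch, and assuming a port list of clk, reset_n, and test_expr (an assumption; check the module source under tools/ial for the actual interface), a one-hot state register might be guarded like this:

ial_one_hot #(STATE_BITS) state_chk    // parameter and port names assumed
  (.clk      (clk),
   .reset_n  (reset_n),
   .test_expr(state_reg));             // flags any cycle where state_reg is not one-hot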


    SUMMARY

    The Incisive Assertion Library is the fastest way to

    take advantage of the benefits of assertion-based

    verification. When you instantiate IAL elements,

    you immediately begin reaping the benefits of ABV

    without needing to learn PSL. When the time comes

    for you to create your own PSL-based checkers, IAL

    components will serve as a template to get you started.

INCISIVE DEBUG AND ANALYSIS TIPS

BY RICK MEITZLER, CADENCE DESIGN SYSTEMS

05

Did you know that SimVision enables you to synchronize time ranges between multiple waveform windows? That you can save time ranges and use them in any waveform window? Or that you can customize the zoom toolbar to display larger zoom buttons? All of these features are available in SimVision release IUS5.3 by simply using the zoom toolbar in the waveform window, represented in Figure 1.

Figure 1: Waveform zoom toolbar

    WAVEFORM TIME RANGE LINKING

    Employing the linking feature on the waveform

    window allows you to open multiple waveform

    windows that display different signals, but all at the

    same time range. When you change the time range

on one waveform window (either by zooming or moving to a different time range), the corresponding

    time will display on all other linked waveform windows.

    Just to the left of the time range entry in the toolbar

    is an icon with a broken link. Clicking this icon drops

    down a menu that lists all currently open waveform

    windows. By clicking on the window names in the

    menu, you can link the time ranges in the current

    window to the other windows (see Figure 2).

    Figure 2: Linking waveform time ranges

SAVING TIME RANGES

Another useful feature of the zoom toolbar is the ability to save time ranges and use them in all of the waveform windows. When you drop down the time range menu, you have the option to keep the current time range (see Figure 3).

Figure 3: Time range menu

A dialog box will appear, allowing you to name your time range and, if desired, change the start and end times (see Figure 4). After you save a time range, that time range will be available in all waveform windows in the dropdown menu.

Figure 4: Time range dialog

    The time range menu also gives you the option to

    edit time ranges. When you select this option, a

    properties window will appear and allow you to

    modify the time ranges (see Figure 5). The in-place

    editing option on the properties page enables you

    to rename time ranges and change the start and

end times.

Figure 5: Editing a time range in the properties window

    CHANGING ZOOM BUTTONS

    When the zoom toolbar is updated to include the

    above functionality, the zoom controls automatically

    change to a new format that conserves space and

    displays smaller zoom buttons. If you prefer the

    larger zoom buttons, however, they are still available

    to you through toolbar customization. First, select

    View>Toolbars>Customize to bring up the toolbar

    customization dialog, shown in Figure 6.


    Figure 6: Toolbar customization dialog

    Once this dialog box is open, check the Zoom box

    in the left pane. The right pane will list all the items

    on the toolbar that can be modified (see Figure 7).

    If you want to hide the new compressed zoom

    buttons, simply clear the checkbox labeled

    zoom_controls and then check the box next to

the zoom buttons you want to show. As you check or uncheck boxes, the window will preview how the

    new toolbar will look.

Figure 7: Modified zoom toolbar

As you can see, the zoom toolbar on the waveform window packs a number of handy options into a small space. For more information about the various features of SimVision release IUS5.3, please refer to our online SimVision documentation.


    INCISIVE PLATFORM NEWS AND EVENTS

    PRESS RELEASES

New Palladium II Extends Cadence Acceleration/Emulation Leadership

    October 25, 2004

    Cadence Delivers Unparalleled Speed and Capacity to Tackle

    the Most Complex SoC Verification

    Cadence Announces Comprehensive Assertion-based Verification Solution

    October 18, 2004

    Expanded Support of PSL and SystemVerilog Assertions

    Enables More Efficient Verification

Cadence Incisive Conformal Technology Becomes Standardized Solution for Fujitsu Worldwide

    September 7, 2004

    Deployment Helps Fujitsu Speed Time to Market and

    Maximize First Silicon Success of Highly Complex Chips for

    Multimedia, Consumer and Communications Applications

    NEW COLLATERAL

White paper: Accelerated Hardware/Software Co-verification Speeds First Silicon and First Software
http://www.cadence.com/whitepapers/Coverification_wp.pdf

Demonstration: Cadence Palladium II Accelerator/Emulator
http://www.demosondemand.com/clients/cadence/014/dod_page/previews.asp#4

    WORKSHOPS

This quarter, Cadence is offering the following Incisive platform workshops:

    Developing a SystemC Testbench

    Developing Self-checking Designs Using Assertions

    WEBINAR

    Using Assertions to Check Your Designs

    This webinar focuses on ways to increase the efficiency

    of verifying your complex HDL code.

    http://www.cadence.com/webinars/webinars.aspx?xml=abv


© 2004 Cadence Design Systems, Inc. All rights reserved. Cadence, the Cadence logo, Conformal, NC-Verilog, Palladium, and Verilog are registered trademarks and Incisive is a trademark of Cadence Design Systems, Inc. All others are

    properties of their respective holders.

    5693 12/04

    Cadence Design Systems, Inc.

    Corporate Headquarters

    2655 Seely Avenue

    San Jose, CA 95134

    800.746.6223

    408.943.1234

    www.cadence.com

