Verification of Configurable Processor Cores

Marines Puig-Medina, Gulbin Ezer, Pavlos Konas

Design Automation Conference, 2000

Page(s): 426-431

Presenter: Peter, 2000/11/06

What’s the problem?

Defining a verification methodology for configurable processor cores.

A simulation-based approach that uses directed diagnostics and pseudo-random test-program generators.

A configurable and extensible test-bench for SOC verification.

Coverage analysis is provided.

Introduction

The processor core should contain only the necessary functionality (defined by adding and incorporating new instructions) so that it consumes little power, occupies a small area, and achieves high performance. (Tensilica)

A robust and flexible methodology is needed for the verification of the processor (covering both architectural and micro-architectural testing).

Configurable processor

Xtensa: enables configurability, minimizes code size, reduces power, and maximizes performance.

The processor generator produces: the RTL code and a test-bench; a C compiler, an assembler, a linker, a debugger, a code profiler, and an instruction-set simulator (ISS). (A hypothetical configuration sketch follows.)
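
The paper does not reproduce the configuration format itself; purely as an illustration, a configuration could be captured as option/value pairs like the hypothetical Perl hash below (every option name here is invented and does not reflect Tensilica's actual syntax).

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Hypothetical configuration record; all option names are invented
    # for illustration and do not reflect Tensilica's real format.
    my %config = (
        mul16            => 1,       # include a 16-bit multiplier option
        interrupts       => 4,       # number of interrupt lines
        icache_bytes     => 8192,    # instruction-cache size
        dcache_bytes     => 8192,    # data-cache size
        register_windows => 1,       # windowed register file
    );

    # A generator would read such a record and emit the matching RTL,
    # test-bench, and software tools; here we just print it.
    printf "%-18s => %s\n", $_, $config{$_} for sort keys %config;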

Functional verification

Test program generation

Using Perl scripts (written in an object-oriented style) as the test-generation language.

AVP (architectural verification program): tests the execution of each instruction in the ISA.

MVP (micro-architectural verification program): tests features of the Xtensa implementation.

Random test programs (a minimal generator sketch follows).
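
A minimal sketch of what such a pseudo-random generator could look like in Perl; the opcode subset, register naming (a0..a15), and instruction count are simplifying assumptions, not the paper's actual generator.

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Minimal sketch of a pseudo-random test-program generator in the
    # spirit of the paper's Perl-based generators. The opcode subset
    # and register range are assumptions made for illustration.
    my @opcodes   = qw(add sub and or xor);
    my $NUM_REGS  = 16;   # a0..a15
    my $NUM_INSNS = 20;

    srand(42);   # fixed seed so a failing program can be regenerated

    print "\t.text\n\t.global main\nmain:\n";
    for (1 .. $NUM_INSNS) {
        my $op = $opcodes[ int rand @opcodes ];
        my ($rd, $rs, $rt) = map { int rand $NUM_REGS } 1 .. 3;
        printf "\t%s\ta%d, a%d, a%d\n", $op, $rd, $rs, $rt;
    }
    print "\tret\n";

A real generator would additionally load known operand values and check results, so each diagnostic is either self-checking or comparable against the ISS.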

The examples

Co-simulation (1)

The comparison process is implemented in the Vera HVL (from Synopsys, Inc.).

There are three major advantages:
– It allows fine-grain checking of processor states during simulation.
– It avoids constructing a comprehensive self-checking diagnostic, which is considerably harder.
– It can stop the simulation at, or near, the cycle where the problem appears (see the compare sketch below).
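
The paper implements this comparison in Vera; purely as an illustration in the deck's other language, a per-cycle lockstep compare of architectural state might look like the Perl sketch below, where the snapshot hashes are hypothetical stand-ins for hooks into the two simulators.

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Hypothetical lockstep comparison of architectural state; the
    # snapshot hashes stand in for hooks into the RTL and ISS models.
    sub compare_state {
        my ($cycle, $rtl, $iss) = @_;
        for my $reg (sort keys %$iss) {
            next unless exists $rtl->{$reg};
            if ($rtl->{$reg} != $iss->{$reg}) {
                # Stop at, or near, the cycle where the problem appears.
                die sprintf "Mismatch at cycle %d: %s RTL=0x%x ISS=0x%x\n",
                    $cycle, $reg, $rtl->{$reg}, $iss->{$reg};
            }
        }
    }

    # Made-up snapshots: a1 disagrees, so the run stops at cycle 7.
    compare_state(7,
        { a0 => 0x10, a1 => 0x20, pc => 0x400 },   # RTL state
        { a0 => 0x10, a1 => 0x21, pc => 0x400 });  # ISS state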

Co-simulation (2)

The biggest challenge: finding the appropriate synchronization points between models at different levels.
– In Xtensa, the interrupt latency cannot be reproduced by the ISS model.
– Comparisons are masked off when the processor state is architecturally undefined (see the masking sketch below).
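
One way to picture the masking, again as a hypothetical Perl sketch: carry a per-register "defined" map and skip any compare while that piece of state is architecturally undefined.

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Hypothetical masked compare: the defined map marks state whose
    # architectural value is meaningful; undefined state is skipped.
    sub compare_state_masked {
        my ($cycle, $rtl, $iss, $defined) = @_;
        for my $reg (sort keys %$iss) {
            next unless $defined->{$reg};       # mask off undefined state
            next unless exists $rtl->{$reg};
            die sprintf "Mismatch at cycle %d: %s\n", $cycle, $reg
                if $rtl->{$reg} != $iss->{$reg};
        }
    }

    # a1 differs but is still undefined (e.g. just after reset),
    # so no mismatch is reported.
    compare_state_masked(3,
        { a0 => 0x10, a1 => 0xdead },
        { a0 => 0x10, a1 => 0x0 },
        { a0 => 1,    a1 => 0 });
    print "No mismatch among defined state.\n";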

The test-bench

Coverage

Employing ISS monitors (written in Perl) that check architectural-level coverage (a monitor sketch follows this list).

Using Vera monitors to check RTL state and micro-architectural features.

Using HDLScore (a code-coverage tool) and Vera FSM monitors.
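
As an illustration of an architectural-level monitor (the paper's are in Perl, but its trace format is not shown, so the one-mnemonic-per-line input here is an assumption), the sketch below tallies which opcodes a simulation trace has exercised.

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Sketch of an ISS coverage monitor: count which opcodes a trace
    # exercised. The ISA subset and trace format are assumptions.
    my @isa = qw(add sub and or xor l32i s32i beq);
    my %seen;

    while (my $line = <STDIN>) {
        my ($op) = $line =~ /^\s*(\S+)/ or next;
        $seen{$op}++;
    }

    my @missed = grep { !$seen{$_} } @isa;
    printf "Covered %d of %d opcodes\n",
        scalar(@isa) - scalar(@missed), scalar(@isa);
    print "Not yet exercised: @missed\n" if @missed;

Opcodes reported as unexercised would then drive new directed or random diagnostics.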

The examples (1)

Proc1: uses only part of the available options.

Proc2: represents a maximum configuration.

Proc3: a randomly generated configuration.

The examples (2)

Conclusion (1)

Presents a methodology for generating AVPs and MVPs (via Perl scripts).

Outlines the coverage-analysis methodology (based on Vera).

The authors are working on expanding the coverage-analysis framework and the random diagnostic test-program generator.

Conclusion (2)

Measuring coverage is only useful if the results of the analysis are conveyed back to the verification and design teams and used to improve the verification process.

The coverage tools: Perl, Vera (Synopsys), and Verification Navigator (TransEDA).

