Methods-Tools: Verification
Chip Design • March 2008 • www.chipdesignmag.com

Take Verification to the Next Level with Intelligent Testbench Automation

With intelligent testbench automation, it's possible to bring verification costs back in line while improving productivity.

By Mark Olen and Matt Ballance


As design complexity has increased, design verification has become an increasingly difficult problem. Today, project teams spend more time verifying their designs than they do creating them. To bring verification costs back into line with design costs, teams need testbenches that find more bugs with less engineering investment. The answer lies in intelligent testbench automation. This type of automation generates optimal sets of tests to improve both productivity and effectiveness.

Through a unique, rule-driven test-generation technology, intelligent testbench automation delivers many benefits. For example, it cuts design respins by using rules to generate significantly more unique tests than are possible with conventional methods. In addition, intelligent testbench automation reduces testbench bugs by minimizing the amount of testbench code that engineers need to write. It also avoids the most time-consuming testbench edits through a testbench redirection facility. This facility allows engineers to change their simulation goals without altering their test implementation code.

Intelligent testbench automation also eliminates duplicate testing. Its closed-loop adaptive-coverage algorithms produce only unique tests. By providing dynamic design-resource allocation and facilitating layered testbench modules, this type of automation also retargets module-level testbenches at the system level without changing the testbench environment. Finally, intelligent testbench automation enables testbench-module reuse through functionality subsetting and adaptive morphing technology.

OPTIMAL TESTS FOR ELECTRONIC DESIGNS

For the electronic-design-automation (EDA) industry, the need for better testbenches has resulted in considerable investment in testbench-related research and development. Such R&D has largely focused on improving the coding languages that engineers use to write their testbench programs. Language improvements have made a noticeable difference. Because design complexity has continued to increase, however, the verification problem has been simultaneously getting more difficult. Yet another new language would offer incremental improvements at best. An entirely new type of testbench based on rule sets is a more fundamental advance.


Essentially, a rule set describes how a high-level testing activity can be performed as a series of lower-level testing activities. A modest number of rules—taken together—can describe a very large set of tests. In fact, a well-chosen set of rules can compactly define how to run every test scenario that engineers might want to simulate during a digital design project—no matter how large or complex. A few pages of rules can easily define test sets containing thousands, millions, or even billions of tests. The result is a substantial savings in verification engineering cost.
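To make the multiplication effect concrete, here is a minimal Python sketch of a grammar-style rule set in which each high-level activity expands into steps with legal alternatives. The rule notation, activity names, and alternatives are all hypothetical, invented for illustration; they are not the actual rule language.

    # A hypothetical grammar-style rule set: each high-level activity
    # expands into steps, and each step lists its legal alternatives.
    from itertools import product

    RULES = {
        "bus_write": [["addr_low", "addr_high"],     # address region
                      ["byte", "halfword", "word"],  # transfer size
                      ["ok", "retry", "error"]],     # slave response
        "bus_read":  [["addr_low", "addr_high"],
                      ["byte", "halfword", "word"],
                      ["ok", "retry", "error"]],
    }

    def tests_for(activity):
        """Enumerate every unique test one rule describes."""
        return list(product(*RULES[activity]))

    # Two short rules already describe 36 distinct tests; each added
    # step or activity grows the test set multiplicatively.
    print(sum(len(tests_for(a)) for a in RULES))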

Rule sets also make it possible to employ an entirely new class of algorithms. Intelligent testbench automation provides algorithms that generate optimal sets of tests specifically for engineers verifying electronic designs.

REDIRECTED TESTBENCHES CUT CODE

With rule-based algorithmic test generation, every simulation run has a purpose that's captured in a verification goal. A verification goal references the relevant parts of a set of rules, and those rules specify a particular verification objective. For example, a verification goal might be to test all of the firmware instructions that are intended to execute conditionally. In doing so, one would verify that the instructions react correctly to each possible condition.
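As a rough illustration of how a goal selects a slice of the rule space without touching test-implementation code, consider the Python sketch below; the instruction and outcome names are invented for the example.

    # A hypothetical verification goal: of all the instructions the
    # rules can exercise, target only the conditionally executed ones,
    # under every possible flag outcome.
    from itertools import product

    INSTRUCTIONS = ["add", "sub", "ld", "st", "beq", "bne", "bmi", "bpl"]
    CONDITIONAL  = ["beq", "bne", "bmi", "bpl"]
    OUTCOMES     = ["taken", "not_taken"]

    # The goal references a subset of the full instruction space
    goal = set(product(CONDITIONAL, OUTCOMES))
    print(len(goal))   # 8 targeted tests selected from the full space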

Figure 1: Unlike traditional testbenches, algorithmic testbenches can be used across different design projects. They also can be retargeted at different levels of a design's hierarchy and redirected to generate different tests based on the verification engineer's goals for the next simulation run.

When a simulation starts, intelligent testbench generation initializes along with the simulator and begins generating tests per the verification goal. As simulation proceeds, the algorithms monitor the progress toward the verification goal and, to avoid wasting time on duplicate tests, intelligently adapt their test-generation strategies. As a result, engineers don't need to write constraints to avoid duplicate tests. The self-adapting algorithms automatically make continuous progress toward completion of the verification goal.
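One way to picture this closed loop is a generator that consults the coverage record before emitting each test, so a duplicate can never be produced. The Python sketch below is only a schematic of that feedback loop, under assumed names, not the product's actual algorithm.

    import random
    from itertools import product

    # Hypothetical rule space: transfer size crossed with slave response
    rule_space = list(product(["byte", "halfword", "word"],
                              ["ok", "retry", "error"]))
    covered = set()

    def next_test():
        """Return an as-yet-unsimulated test, or None when done."""
        remaining = [t for t in rule_space if t not in covered]
        if not remaining:
            return None        # verification goal complete
        test = random.choice(remaining)
        covered.add(test)      # feed coverage back into generation
        return test

    while (t := next_test()) is not None:
        pass                   # drive t into the simulator here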

The verification goal enables an engineer to focus the testbench on producing the types of tests that are currently needed. This capability, which is known as testbench redirection, is important because a project team’s engineers need different tests at different times. Sometimes they’re investigating a problem report and they need a specific test sequence. Other times, they’re trying to give quick feedback on a design fix and they need to concentrate testing on a particular functional area. Or they could be running a regression test and need to thoroughly test a wide variety of functionality. Each time the project team wants different tests, it only needs to modify the verification goal to specify the type of tests that are needed. The rest of the testbench doesn’t need to be changed. The algorithms take care of all of the test-customization details. This aspect saves editing time while eliminating a significant source of testbench bugs.
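Redirection can be pictured as swapping a small goal description between runs while the test-implementation code stays frozen. The goal encodings below are hypothetical, chosen only to mirror the three situations just described.

    # Three hypothetical goals for three situations; only this table
    # changes between simulation runs -- the testbench code does not.
    GOALS = {
        "bug_report": {"sequence": ["reset", "write", "write", "irq"]},
        "fix_check":  {"focus": "write", "tests": 500},
        "regression": {"focus": None, "tests": 100_000},
    }

    def run_simulation(goal_name):
        goal = GOALS[goal_name]
        print("generating tests toward goal:", goal)

    run_simulation("fix_check")   # redirect by naming a different goal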

SCALABLE TESTBENCHES

In addition to testbench redirection, testbench retargeting is a capability that's often requested by testbench tool users. Testbench retargeting is important because complex designs need to be tested at the module, subsystem, and system levels. Ideally, a testbench module that's used to drive a particular interface (like USB 2.0) during module testing would be retargeted without changes to drive that interface during subsystem- and system-level testing. In the real world, such retargeting rarely happens. Usually, subsystem- and system-level testing require completely new (or at best heavily edited) testbench code.

Retargeting a module-level testbench is problematic for two reasons. One barrier is the accessibility of the target module. As it’s incorporated into a subsystem, some or all of the target module’s pins recede from the periphery and become internal connections. The original module-level testbench is therefore rendered unusable. Intelligent testbench automation helps project teams solve this problem with layered testbenches.
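A layered testbench can be sketched as a fixed transaction layer over a swappable access layer: when the module's pins become internal, only the bottom layer is replaced. The class names in this Python sketch are illustrative assumptions, not the product's structure.

    class DirectPinAccess:
        """Module level: drive the target module's pins directly."""
        def send(self, txn):
            print("driving pins:", txn)

    class ThroughFabricAccess:
        """System level: reach the now-internal module via the bus."""
        def send(self, txn):
            print("routing through interconnect:", txn)

    class Usb2Testbench:
        """Transaction layer: identical at every hierarchy level."""
        def __init__(self, access):
            self.access = access
        def run(self, txn):
            self.access.send(txn)

    Usb2Testbench(DirectPinAccess()).run("SETUP")      # module testing
    Usb2Testbench(ThroughFabricAccess()).run("SETUP")  # system testing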

The second barrier to retargeting a module-level testbench is the target module's dependency on system-level resources during system testing. For example, some PCI transactions require a direct-memory-access (DMA) channel, which is a system-level resource. A PCI testbench module that's testing a PCI design module in isolation can ignore DMA-channel availability issues. After all, it can run transactions that require a DMA channel anytime. In contrast, a PCI testbench module that's testing a PCI design module in a system doesn't have that luxury. During system testing, the PCI testbench module must avoid DMA transactions when all of the system's DMA channels are busy handling transactions that were initiated by other testbench modules.

Attempts to address system-resource problems by parameterizing testbench modules haven’t provided a general solution. This approach locks specific resources to particular testbench modules for the duration of a simulation. Real systems have more modules that can initiate DMA transactions than there are DMA channels available. As a result, locking schemes cannot be used without sacrificing large swathes of functional coverage.

To solve this problem, intelligent testbench automation provides a dynamic-resource allocation facility. Thus, a PCI testbench module can request a DMA channel for a single test. If one is available, the DMA channel is “checked out” to the PCI testbench module. The DMA transaction then runs and the DMA channel is “checked in” so another testbench module can use it. If no DMA channel is available, the PCI testbench can run non-DMA transactions and make good use of the available simulation time until such a channel is available. The combination of testbench layering and dynamic-resource allocation enables project teams to retarget testbench modules without changing them. In other words, the same testbench modules work for testing a module design, subsystem design, and a complete system design.
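The check-out/check-in discipline might look like the following Python sketch; the pool interface here is an assumption made for illustration, not the facility's actual API.

    class ResourcePool:
        """A shared pool of system resources, checked out per test."""
        def __init__(self, items):
            self.free = set(items)
        def check_out(self):
            return self.free.pop() if self.free else None
        def check_in(self, item):
            self.free.add(item)

    dma = ResourcePool(["dma0", "dma1"])

    def pci_test(pool):
        ch = pool.check_out()
        if ch is None:
            print("no DMA channel free: run a non-DMA transaction")
            return
        try:
            print("running DMA transaction on", ch)
        finally:
            pool.check_in(ch)   # channel is freed after a single test

    pci_test(dma)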

MAKING TESTBENCH REUSE A REALITY

After a project team has invested in a testbench for a particular design, management often wants to reuse testbench modules on other design projects. For example, say a team develops a USB 2.0 testbench module for a cable-modem chip design. Ideally, it would be reusable on a subsequent Global Positioning System (GPS) chip design. Unfortunately, attempts to reuse testbench code often result in substantially more editing and “tailoring” than expected. One common problem is that the first design has a different resource configuration (memory, DMA channels, interrupts, etc.) than the second design. Intelligent testbench automation solves this problem by editing the system resource list to match the second design. No changes to the testbench modules are needed, as the dynamic-resource allocation facility automatically makes the necessary testing adjustments.
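In practice this can be as simple as handing the unchanged testbench a different resource list per design, roughly as in the hypothetical sketch below (the configuration fields are invented).

    # Hypothetical resource lists for two designs; the testbench module
    # itself is identical, and allocation adapts at runtime.
    CABLE_MODEM = {"memory_kb": 512, "dma_channels": 4, "interrupts": 16}
    GPS_CHIP    = {"memory_kb": 128, "dma_channels": 1, "interrupts": 8}

    def build_usb2_testbench(resources):
        return {"module": "usb2", "resources": resources}

    tb_first  = build_usb2_testbench(CABLE_MODEM)
    tb_second = build_usb2_testbench(GPS_CHIP)   # only the list changed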

Another common testbench-reuse problem is that different designs implement different subsets of the underlying specification. The original design, for example, may implement the full range of AMBA AHB functionality, including wrap transfers to optimize performance with the system's cache. Yet the second design may implement an AMBA AHB subset without wrap transfers because there's no cache in the system. Normally, such a change would require multiple edits to the AMBA testbench to remove or disable all wrap-transfer-related code. This task is problematic because the engineers on the second design team aren't the ones who wrote the original testbench code. As a result, they cannot be sure which changes can be made safely.

With intelligent testbench automation, there’s a safe, standard way to test functionality subsets. Users can simply disable portions of the rule set to “knock out” unimplemented functionality, such as wrap transfers in the AMBA AHB example. No knowledge of testbench coding details is required.
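A knockout can be pictured as filtering alternatives out of the rule set rather than editing test code. In the Python sketch below, the AHB burst names follow the specification, but the rule encoding itself is invented for illustration.

    # Hypothetical rule encoding of AMBA AHB burst types
    AHB_RULES = {"burst": ["single", "incr4", "incr8", "wrap4", "wrap8"]}

    def knock_out(rules, prefix):
        """Copy the rule set with matching alternatives disabled."""
        return {step: [alt for alt in alts if not alt.startswith(prefix)]
                for step, alts in rules.items()}

    # The second design has no cache, hence no wrap transfers
    print(knock_out(AHB_RULES, "wrap"))
    # {'burst': ['single', 'incr4', 'incr8']}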

A third common testbench-reuse problem is that two design teams may intentionally implement functionality from a common specification quite differently. This doesn’t mean that one of the design teams is wrong. On the contrary, it usually means that the two teams are addressing different end-use markets. Almost all modern specifications give design teams room to optimize their implementations for specific applications.

For instance, one design team may be optimizing its implementation for low cost (i.e., gate count), while another team may be optimizing its implementation for high speed. Both implementations are designed to conform to the same specification. Yet their interactions with the testbench at the detailed electrical level can be quite different. One AMBA AHB slave implementation may have a very small input buffer and frequently force “split” transactions. Another AMBA AHB slave implementation may have a very large input buffer and never force such transactions.

Engineers want to be able to verify both implementations with a single testbench. To maximize the potential for reuse, such a testbench would ideally be able to adapt itself to work with any legal implementation of the specification. At the same time, it should be able to reject any illegal implementation. It's very difficult for humans to write such a highly flexible testbench by hand while getting all of the details right for each of the many different possibilities.

Intelligent testbench automation solves this problem in two steps. First, it captures all of the different behavioral possibilities allowed by a specification using a non-deterministic rule-set description. Second, a patented “morphing” technology automatically adapts test generation to simultaneously conform to both the rule set and the dynamic responses of the design during simulation. Dynamic-resource allocation, functionality-subset specification, and morphing make it practical to reuse the resulting testbench modules on a wide variety of real-world designs. In addition, project teams aren’t required to incur the expense and risk of modifying unfamiliar testbench code.
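The non-deterministic flavor of such a rule set, and the way generation follows the design's actual responses, can be sketched as below; the state names and the simple table lookup are illustrative assumptions, not the patented morphing algorithm.

    import random

    # Every continuation a (hypothetical) specification allows after
    # each observed response; a legal DUT may take any of these paths.
    LEGAL_AFTER = {
        "request": ["okay", "split", "retry"],
        "okay":    ["request", "idle"],
        "split":   ["resume"],        # a split must later be resumed
        "retry":   ["request"],
        "resume":  ["okay"],
        "idle":    ["request"],
    }

    def next_stimulus(dut_response):
        """Adapt the next test step to what the design actually did."""
        if dut_response not in LEGAL_AFTER:
            raise ValueError(f"illegal response: {dut_response!r}")
        return random.choice(LEGAL_AFTER[dut_response])

    # A small-buffer slave that forces splits and a large-buffer slave
    # that never splits both stay within this table, so one testbench
    # serves both implementations.
    print(next_stimulus("split"))   # always 'resume'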

The advantages of intelligent testbench automation are dramatic. The time needed for testbench programming is reduced while overall coverage is increased. Rule-based testbenches are reusable across multiple designs without requiring significant changes to the rule sets. Furthermore, algorithmic testbenches can be reused across module, subsystem, and full-system design levels. With testbench redirection capabilities, design teams can easily redirect test generation at various times during verification.

Ultimately, intelligent testbench automation finds more bugs more quickly while requiring less engineering time. It cuts down on the respins caused by the functional design errors that escape traditional functional verification. This technology is exactly what companies need if they’re to bring their verification costs in line with their design costs—especially as they face ever-growing and increasingly complex design challenges.

Mark Olen is a product manager at Mentor Graphics for Advanced Functional Verification Technologies. He has worked for Mentor for 10 years and has more than 25 years of experience in the semiconductor design verification and test industries. Olen is a graduate of the Massachusetts Institute of Technology. He can be reached at [email protected].

Matthew Ballance is an engineer at Mentor Graphics for the inFact product. He holds a BSCpE from Oregon State University and has worked in the areas of HW/SW co-verification and transaction-level modeling. Ballance can be reached at [email protected].

Figure 2: By separating the functional specification from design implementation, a testbench module becomes highly reusable across multiple implementations of the same specification.

