1
Biased Random Simulation Guided by Observability-Based Coverage
Serdar Tasiran Compaq Systems Research Center, formerly GSRC, UC Berkeley
Farzan Fallah Fujitsu Labs of America
David G. Chinnery, Scott K. Weber,
Kurt Keutzer UC Berkeley
2
Simulation-based Functional Validation
[Diagram: input stimulus generation drives the design (RTL model) through simulation; the outputs are checked by monitors, assertions, and comparison with a reference model. Together these form functional validation.]
3
Simulation with Coverage Feedback
[Diagram: the functional validation flow, extended with a feedback loop: coverage measurement and analysis, then diagnosis of unverified portions, feeding back into input stimulus generation.]
4
Our Work
[Diagram: simulation of the design (RTL model), checked by monitors, assertions, and comparison with a reference model, with the coverage feedback loop: coverage measurement and analysis, diagnosis of unverified portions, input stimulus generation.]
5
Our work
[Flow diagram: (1) tag coverage analysis (observability-based coverage) of simulation results, (2) biased-random input generation for the design (RTL model), and (3) computing new biases to target non-covered tags; coverage measurement and analysis diagnoses the unverified portions.]
6
Outline
[Flow diagram: biased-random input generation, simulation of the design under test, coverage measurement and analysis; step 1, tag coverage analysis (observability-based coverage), is highlighted.]
7
Observability
Simulation detects a bug only if
– a monitor flags an error, or
– the design and the reference model differ on a variable
Variables checked for functional correctness are called observed variables.
A portion of the design is covered only when
1. it is exercised during simulation (controllability)
2. a discrepancy originating in that portion causes a discrepancy in an observed variable (observability)
Low observability gives a false sense of security:
– Most of the design is exercised, so coverage looks high
– But most bugs are not detected by the monitors or the reference model
– They would have been detected if the inputs had been chosen properly
8
Tag Coverage [Devadas, Keutzer, Ghosh '96]
HDL code coverage metrics + an observability requirement.
Bugs are modeled as errors in HDL assignments.
A buggy assignment may be stimulated, but its effect still missed.
EXAMPLES:
– A wrong value is generated speculatively, but never used.
– A wrong value is computed and stored in memory, read 1M cycles later, but the simulation doesn't run that long.
9
Tag Coverage [Devadas, Keutzer, Ghosh '96]
A generalization of "stuck-at" fault coverage to HDL code; handles multi-valued variables.
Error model: an HDL assignment computes a value
– higher (+), or
– lower (-)
than the intended value.
– Example: 3+ represents values > 3
A = 3
C = F - 2A
D = K * C
[Diagram: a + tag on A propagates as a - tag to C, since A appears negated in F - 2A; whether the tag on D is + or - depends on the sign of K, so it is marked ?.]
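The +/- tag propagation through the example assignments can be sketched in a few lines. This is a toy illustration of the tag calculus only, not the actual OCCOM implementation; the `flip` and `propagate_linear` helper names are our own.

```python
def flip(tag):
    """Negate a tag's sign: a '+' discrepancy becomes '-' and vice versa."""
    return {'+': '-', '-': '+'}[tag]

def propagate_linear(tag, coeff_sign):
    """Propagate a tag through multiplication by a coefficient whose sign
    is '+', '-', or '?' (unknown). An unknown sign yields an unknown tag."""
    if coeff_sign == '?':
        return '?'
    return tag if coeff_sign == '+' else flip(tag)

tag_A = '+'                           # A = 3 carries a '+' tag
tag_C = propagate_linear(tag_A, '-')  # C = F - 2A: A's coefficient is negative
tag_D = propagate_linear(tag_C, '?')  # D = K * C: sign of K is unknown
print(tag_C, tag_D)  # - ?
```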
10
Tag Coverage
Run the simulation vectors, tagging one variable assignment at a time; use the tag calculus.
This confirms that
– the HDL line is activated, and
– its effect is propagated to an observed variable
Tag coverage: the subset of tags that propagate to observed variables
Efficient tool: OCCOM [Fallah et al.]
11
Outline
[Flow diagram: tag coverage analysis (observability-based coverage), diagnosis of unverified portions, computing new biases to target non-covered tags; step 2, biased-random input generation, is highlighted.]
12
Rationale for Biased-Random Vector Generation
Primary inputs are selected according to a probability distribution.
Trade-off between
– time to find "good" vectors
– time to simulate vectors
Typically > 50% of simulation is biased random simulation.
Improved random vectors mean better validation overall: less intelligence in selecting the next step, but many more vectors
– Can explore deeper into the state space
[Bar chart: portion of computation time (0% to 100%) split between finding and simulating vectors.]
13
Biased Random Vector Generation
Primary inputs at each clock cycle are selected according to a probability distribution
– Distributions can be functions of the circuit state
Distributions ("weights") are determined prior to simulation
[Diagram: circuit with inputs i1, i2, i3 and latches s1, s2]
Probability distributions ("weights"):
P(i1 = 0) = 0.7, P(i1 = 1) = 0.3
P(i2 = 0 | s1 = 1) = 0.6, P(i2 = 1 | s1 = 0) = 0.7
P(i3 = 0) = 0.4, P(i3 = 1) = 0.6
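As a sketch, drawing one input vector per cycle from such a weight table might look like the following. Only the weights come from the slide; the `draw` and `next_vector` helpers and the surrounding circuit are hypothetical.

```python
import random

def draw(p_one, rng):
    """Sample a single bit that is 1 with probability p_one."""
    return 1 if rng.random() < p_one else 0

def next_vector(state, rng):
    """One biased-random input vector; i2's weight depends on state s1."""
    i1 = draw(0.3, rng)                               # P(i1 = 1) = 0.3
    i2 = draw(0.4 if state['s1'] == 1 else 0.7, rng)  # conditional weight
    i3 = draw(0.6, rng)                               # P(i3 = 1) = 0.6
    return {'i1': i1, 'i2': i2, 'i3': i3}

rng = random.Random(0)
vectors = [next_vector({'s1': 1}, rng) for _ in range(10000)]
frac_i1 = sum(v['i1'] for v in vectors) / len(vectors)
print(round(frac_i1, 2))  # close to 0.3
```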
14
Why Optimize Biases?
Certain cases are very unlikely to be exercised with uniform random simulation.
[Diagram: 32-input AND gate with output o; with P(i1 = 1) = P(i2 = 1) = … = P(i32 = 1) = 0.5, P(o = 1) = 2^-32]
Wunderlich [DAC '85, Int'l Test Conf. '88]:
– Even for combinational circuits, several sets of biases are required for good fault coverage
Biases must be picked based on the targeted tags.
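To see why biasing matters for the 32-input AND example, compare the expected number of cycles needed to exercise o = 1. The 0.99 bias value is an illustrative choice of ours, not from the slide.

```python
# Uniform biases: o = 1 requires all 32 inputs to be 1 simultaneously.
p_uniform = 0.5 ** 32
cycles_uniform = 1 / p_uniform      # about 4.3 billion cycles on average

# Biased toward 1: the same case becomes easy to hit.
p_biased = 0.99 ** 32               # P(ik = 1) = 0.99 for every input
cycles_biased = 1 / p_biased        # under 2 cycles on average

print(int(cycles_uniform), round(cycles_biased, 2))
```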
15
Outline
[Flow diagram: biased-random input generation, simulation of the design under test, tag coverage analysis (observability-based coverage), diagnosis of unverified portions; step 3, computing new biases to target non-covered tags, is highlighted.]
16
Optimizing Input Biases
The optimization algorithm determines biases based on
– the set of tags targeted
– a structural netlist describing the circuit (BLIF-MV)
Intuitive goal
– Maximize the expected number of tags that will be covered

COVER(Circuit, Tags)
  repeat
    Biases = Optimize_Input_Biases(Circuit, Tags)
    while (tag coverage rate) > (threshold)
      Biased_Random_Simulate(Circuit, Biases)
      Tags = Tags - Tags_Covered
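A runnable sketch of the COVER loop's control flow. `optimize_input_biases` and `biased_random_simulate` are stubs standing in for the real netlist-based routines; they only mimic the shape of the loop.

```python
def optimize_input_biases(tags):
    # Stub: the real routine maximizes the expected number of tags covered.
    return {'p_one': 0.9}

def biased_random_simulate(biases, tags):
    # Stub: pretend each run covers half of the remaining tags.
    return set(list(tags)[: max(1, len(tags) // 2)])

def cover(tags, threshold=0.1, max_rounds=10):
    tags = set(tags)
    for _ in range(max_rounds):
        if not tags:
            break
        biases = optimize_input_biases(tags)
        while tags:
            before = len(tags)
            tags -= biased_random_simulate(biases, tags)
            rate = (before - len(tags)) / before
            if rate <= threshold:   # coverage rate dropped: re-optimize biases
                break
    return tags

remaining = cover({f't{k}' for k in range(16)})
print(len(remaining))  # 0
```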
17
Modeling Biased Random Simulation
Key subroutine for optimizing input biases: estimate coverage for given primary input distributions.
Transition probabilities are fixed at each state, so we model the circuit plus random input generation as a Markov chain.
Long simulation runs: analyze the behavior of the circuit at steady state
– Determine the tag detection probability at steady state
Huge state space, so we use approximate analysis.
[Markov-chain diagram: states s0 .. s4 with transitions labeled i=0 / i=1; with P(i=1) = 0.2 the steady-state distribution is P(s0) = 0, P(s1) = 0, P(s2) = 0.25, P(s3) = 0.25, P(s4) = 0.5]
18
Approximation I: Line Probabilities
Compute probability distributions of state variables instead of states:
Prob((s1,s2) = (a1,a2)) = Prob(s1 = a1) × Prob(s2 = a2)
– Ignores correlations between latches
– Devadas et al. [VLSI '95]: power estimates within 3% for benchmarks; individual node distributions correct within 15%
Refinement: group closely correlated state variables into a single variable.
[Diagram: latches s1, s2 making the transition (s1,s2) = (a1,a2) → (s'1,s'2) = (a'1,a'2)]
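A toy numeric illustration of Approximation I, with a made-up correlated joint distribution over two state bits:

```python
from itertools import product

# Hypothetical correlated joint PD over state bits (s1, s2).
joint = {(0, 0): 0.4, (0, 1): 0.1,
         (1, 0): 0.1, (1, 1): 0.4}

# Marginals Prob(s1 = a1) and Prob(s2 = a2).
p_s1 = {v: sum(p for (a, _), p in joint.items() if a == v) for v in (0, 1)}
p_s2 = {v: sum(p for (_, b), p in joint.items() if b == v) for v in (0, 1)}

# Approximation I: Prob((s1,s2) = (a1,a2)) ~ Prob(s1=a1) * Prob(s2=a2).
approx = {(a, b): p_s1[a] * p_s2[b] for a, b in product((0, 1), repeat=2)}

print(joint[(1, 1)], approx[(1, 1)])  # 0.4 vs 0.25: the correlation is lost
```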
19
Steady-State
Given the input probability distributions, the steady state is a fixed point:
prob(ns1 = v1) = prob( f1(i1, i2, … , im, ps1, ps2, … , psn) = v1 ) = prob(ps1 = v1)
…
prob(nsi = vj) = prob( fi(i1, i2, … , im, ps1, ps2, … , psn) = vj ) = prob(psi = vj)
[Diagram: latches s1, s2 with present-state PDs P(ps1 = v1), P(ps2 = v2) and next-state PDs P(ns1 = v1), P(ns2 = v2)]
20
Computing Latch PDs at Steady-State
Start with an initial guess for the latch PDs.
Given PDs for the inputs and the latch outputs, compute the PDs at the latch inputs
– Substitute the new distributions at the latch outputs
Repeat until convergence
– Guaranteed under minor restrictions
Key computation: propagating PDs
– Given PDs at the inputs of a combinational circuit, determine the PDs at each node.
[Diagram: latches s1, s2 with present-state and next-state PDs, given the input probability distributions]
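For a hypothetical one-latch circuit with next state ns = i XOR ps and P(i = 1) = 0.3, the fixed-point iteration converges quickly. This example is ours, not from the talk.

```python
def step(p_ps1, p_i1=0.3):
    """P(ns = 1) for ns = i XOR ps, given P(ps = 1) and P(i = 1)."""
    return p_i1 * (1 - p_ps1) + (1 - p_i1) * p_ps1

p = 0.0                 # initial guess for P(ps = 1)
for _ in range(200):    # fixed-point iteration: feed the ns PD back into ps
    nxt = step(p)
    if abs(nxt - p) < 1e-12:
        break
    p = nxt

print(round(p, 6))  # converges to 0.5
```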
21
Propagating Probability Distributions
Given probability distributions (PDs) at the inputs and latch outputs, compute the PDs of the circuit nodes.
Represent each node as a function of the primary inputs; inputs are assumed independent.
Use a recursive algorithm on the MDD to compute node PDs.
[Diagram: MDD rooted at i0, with internal nodes labeled i1 and i2, terminals 0 and 1, and branch probabilities P(i0 = 0), P(i0 = 2), …]
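The recursion is just an expectation over branch probabilities. A minimal sketch on a decision tree for f = (i0 AND i1) OR i2 with binary inputs; the slide's MDDs are multi-valued and shared, and the node encoding here is our own.

```python
# Nodes are (var, {value: child}) pairs; leaves are the constants 0 and 1.
DD = ('i0', {0: ('i2', {0: 0, 1: 1}),
             1: ('i1', {0: ('i2', {0: 0, 1: 1}), 1: 1})})

# Independent input PDs (the independence assumption from the slide).
PD = {'i0': {0: 0.5, 1: 0.5},
      'i1': {0: 0.5, 1: 0.5},
      'i2': {0: 0.5, 1: 0.5}}

def prob_one(node):
    """P(node evaluates to 1): weight each branch by its input probability."""
    if node in (0, 1):
        return float(node)
    var, children = node
    return sum(PD[var][v] * prob_one(child) for v, child in children.items())

print(prob_one(DD))  # 0.625, i.e. P((i0 AND i1) OR i2) with fair inputs
```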
22
Approximation II: Clustering
While propagating probabilities forward, impose a limit on the MDD size. When the limit is reached, treat the intermediate node as a primary input.
Correlations outside clusters are ignored.
[Diagram: the MDD rooted at i0, with the i2 nodes replaced by a new "primary input" node n1]
23
Estimating Tag Detectability
Given the PDs of each circuit node, estimate the controllability and observability of tags.
Recall:
– the actual value of node xi is q
– the intended value is p
– q > p
Controllability: probability that xi = p at steady state (done)
Observability: probability that xi = q causes a change in an observed variable
– A function of the PDs of other nodes
– May happen along a multi-cycle path
[Diagram: node xi with intended value p and actual value q, feeding through logic f toward node yj with values r, s]
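Numerically, per-cycle detectability is the product of the controllability and observability probabilities, and repeated cycles compound it. The numbers below are made up, and treating cycles as independent trials is the same kind of approximation the steady-state analysis makes.

```python
p_control = 0.30   # P(xi takes the intended value p at steady state)
p_observe = 0.15   # P(a discrepancy at xi propagates to an observed variable)

p_detect = p_control * p_observe    # per-cycle tag detection probability

# Chance of detecting the tag at least once in N simulation cycles,
# treating cycles as independent trials.
N = 100
p_detect_N = 1 - (1 - p_detect) ** N

print(round(p_detect, 3), round(p_detect_N, 3))
```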
24
Observability Computation
Observability: probability that the discrepancy xi = q (instead of p) causes a change in an observed variable.
Propagate observabilities backward, starting from the observed variables.
Determine the observability of xi based on
– the observability of yj
– the PDs of related circuit nodes
Form an MDD representing the condition for the discrepancy at xi to be observed; compute the probability that the MDD evaluates to 1.
Many paths: pick the best cluster (yj).
[Diagram: node xi with values p, q feeding through logic f toward node yj with values r, s]
25
Observability Propagation
Propagate observabilities backward, starting from the observed variables. Discrepancies may take multiple cycles to reach an observed variable.
Perform several backward passes of observability computation
– Stop at e.g. 10 passes; the analysis is too inaccurate for more passes.
26
Optimization Criteria
Intuitive goal of the weight determination algorithm:
– Maximize the expected number of tags that biased random simulation will cover
Use merit and cost functions to formalize the goal.
For a single tag (latch):
– merit = tag detection probability
– cost = deviation of the latch distribution from uniform
Add the merit (cost) functions over all targeted tags (latches).
27
Optimizing Input Distributions
repeat
  For each primary input i
    Select a set of probability distributions p0, p1, p2, …, pn
    For j = 0, …, n
      Compute the merit function for P(i=1) = pj
    Pick the j that yields the best merit value
until no improvement in the merit function
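The loop above is a coordinate-wise search over per-input biases. A sketch with a hypothetical two-input merit function; the real merit estimates the expected number of tags covered from the steady-state analysis.

```python
CANDIDATES = [0.1, 0.3, 0.5, 0.7, 0.9]   # candidate values for P(i = 1)

def merit(biases):
    """Hypothetical merit: rewards biasing input 'a' high and 'b' low."""
    return biases['a'] * (1 - biases['b'])

def optimize(inputs):
    biases = {i: 0.5 for i in inputs}
    improved = True
    while improved:                       # repeat ... until no improvement
        improved = False
        for i in inputs:                  # for each primary input i
            best = max(CANDIDATES, key=lambda p: merit({**biases, i: p}))
            if merit({**biases, i: best}) > merit(biases):
                biases[i] = best
                improved = True
    return biases

print(optimize(['a', 'b']))  # {'a': 0.9, 'b': 0.1}
```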
28
Experimental Results: s1423
[Plot: % tag coverage (40 to 100) vs. simulation cycles (0 to 200,000), comparing uniform biases and optimized biases]
29
s1423 – early in simulation
[Plot: % tag coverage (30 to 90) vs. simulation cycles (0 to 15,000), uniform biases vs. optimized biases]
30
Experimental Results: s5378
[Plot: % tag coverage (55 to 85) vs. simulation cycles (0 to 24,000), uniform biases vs. optimized biases; an annotation marks where the second round of bias optimization starts]
31
[Table: per-circuit results for dlx, s38584, s1423, s5378, sbc, s1238, s1196, s15850, and s13207: # latches, # inputs, # tags, # iterations, memory (MB), merit-function computation CPU time per iteration (s), and tags not covered under uniform random vs. biased random simulation]
32
Conclusions
On some examples, significant improvement in coverage at reasonable computational cost
– No manual effort required
– Longer uniform random simulations do not achieve the same result
Coverage feedback is a powerful tool in input vector generation.
On other examples, coverage is not improved: hundreds of simulations with different biases show no improvement.
Circuit size is not the limiting factor:
– Good results on some large circuits, bad results on some small ones
– Close to complete coverage on large combinational benchmarks
For examples with bad coverage, most latches show no activity.
Conjecture: ignoring input constraints and initialization sequences causes the circuits not to be driven properly.
33
Conclusions
Biased random patterns do not provide enough control on the simulation run for some circuits.
Not a standalone technique; it must be used in conjunction with more powerful, deterministic methods
– ATPG, approximate reachability, …
Better biased random simulation complements other approaches.
34
Future research directions
The current method is limited to multi-valued variables with small ranges; generalize the method to larger datapaths.
Handle datapath-control interaction.
Experiment with higher-level RTL descriptions, e.g., explore biased random simulation at the instruction level.
Pure biased random simulation is too weak; explore the choice of state-dependent biases.
Aid bias selection in randomized test programs.