UNIVERSITY OF CALIFORNIA

Los Angeles

Error Reduction Techniques in Geometric Programming based

Mixed-Mode Circuit Design Optimization

A thesis submitted in partial satisfaction

of the requirements for the degree

Master of Science in Electrical Engineering

by

Jintae Kim

2004


© Copyright by

Jintae Kim

2004


The thesis of Jintae Kim is approved.

Lieven Vandenberghe

Behzad Razavi

Chih-Kong Ken Yang, Committee Chair

University of California, Los Angeles

2004


To my family


TABLE OF CONTENTS

CHAPTER 1 Introduction
1.1 Geometric programming overview
1.1.1 Convex set, convex function, and convex optimization
1.1.2 Geometric programming
1.1.3 Geometric programming in a convex form
1.1.4 Convexity of the posynomial after the transformation
1.1.5 Generalized geometric programming
1.2 Previous Work
1.3 Organization
CHAPTER 2 Sources of Error in GP-based Design
2.1 Modeling error
2.2 Bias estimation error
2.3 Summary
CHAPTER 3 Device Modeling
3.1 GP-compatible device model
3.2 Review of published approaches
3.2.1 Monomial models from physical law
3.2.2 Monomial models via numerical fitting
3.2.3 Posynomial models from fitting
3.3 Piecewise-Linear (PWL) model
3.3.1 Principle and method
3.3.2 Reduced complexity algorithm
3.3.3 Example
3.4 Outlier classification
3.4.1 Tradeoff between fitting error and variable space
3.4.2 Domain and outlier
3.4.3 Outlier classification
3.4.4 The metric for variable space reduction
3.4.5 Example of outlier classification
3.5 Summary
CHAPTER 4 GP Description Technique
4.1 Bias description as posynomial equality constraints
4.2 GP description style robust to the modeling error
4.3 Summary
CHAPTER 5 Design Applications
5.1 Two stage OP-AMP design
5.1.1 Symmetry and matching
5.1.2 Biasing and circuit topology
5.1.3 Output swing
5.1.4 Quiescent power
5.1.5 Gain specification
5.1.6 Phase margin and unity-gain bandwidth
5.1.7 Model definition
5.1.8 Optimization result
5.2 Integrated inductor design
5.3 Summary
CHAPTER 6 Conclusion
6.1 Contributions
6.2 Future work
6.2.1 Device-level optimization
6.2.2 Predicting the performance in future technology
APPENDIX A Norm approximation in function fitting


LIST OF FIGURES

Figure 1.1: Examples of a convex set and a non-convex set: (a) is a convex set, (b) is a nonconvex set because the line segment lies outside of the set
Figure 1.2: Example of a convex function
Figure 1.3: Posynomial as a smooth approximation of a piecewise-linear function
Figure 2.1: Two-stage op-amp with PMOS input differential stage
Figure 3.1: The idea of PWL function fitting to the convex function
Figure 3.2: Illustration of the reduced complexity method: (a) given data set, (b) PWL function with all planes, (c) PWL function with fewer planes
Figure 3.3: Reduced complexity PWL fitting algorithm
Figure 3.4: Fitting errors vs. # of planes in Table 3.2
Figure 3.5: Examples of data versus error. Filled circles represent the data we trust and the empty ones are outliers: (a) the case we can discriminate outliers, (b) the case we cannot discriminate outliers
Figure 3.6: Minimum volume ellipsoid that contains a finite convex set
Figure 3.7: Model refinement via outlier classification
Figure 3.8: Application example on the VOV monomial model
Figure 4.1: Common-source amplifier with PMOS current-source load
Figure 5.1: Optimal GB product from GP and SPICE
Figure 5.2: D.C. gain from GP and SPICE
Figure 5.3: Quality factor vs. inductance in GP and ASITIC


LIST OF TABLES

Table 2.1: Max/mean percentage errors in monomial device model
Table 2.2: GP prediction and SPICE simulation discrepancy
Table 3.1: GP device model parameters for MOS transistor
Table 3.2: Max/mean percentage errors in design parameter models for TSMC 0.18-um saturated NMOS device
Table 3.3: Max/mean percentage errors in on-chip inductor model for TSMC 0.13-um technology
Table 3.4: Binning map for TSMC 0.18um BSIM3v3 model
Table 3.5: Monomial modeling for VOV in TSMC 0.18-um technology (outliers represent the data points whose modeling error is beyond 10%)
Table 5.1: GP predictions and corresponding SPICE simulations for both PWL and monomial based optimization


ACKNOWLEDGEMENTS

I would like to express my heartfelt gratitude to all the people I have been indebted to.

Having been surrounded by great teachers and fellow students, it was my privilege to be a

part of IC&S field at UCLA during the last two years.

First, I’d like to thank Professor Chih-Kong Ken Yang for his supervision and edits on

this thesis. He has been an infinite source of knowledge and ideas. I truly learned a great

deal throughout the association with him from both classes and discussions.

I also want to thank Professor Lieven Vandenberghe for the numerous discussions I

had with him. His advice always led my research in the right direction whenever I

was lost. I also thank him for being on the committee of this thesis.

I was able to learn a lot about the analog circuit design through my early courses

taught by Professor Behzad Razavi. I'd like to thank him for that and for being on the

committee as well.

I also want to thank Prof. Greg Pottie for the support and generosity he showed

during my brief stay in his group.

Page 13: KIM

xii

I’m also indebted to Mar Hershenson, Sunderarajan Mohan, Igor Kounzenti and

Arash Hassibi for invaluable advice during my stay at Barcelona Design as an intern.

They’ve paved the initial work that this thesis is built upon and I definitely learned a lot

from working with them.

I owe numerous thanks to fellow circuiteers in the 5th-floor cubicle areas. Especially, I would like to thank all the current and former CKY group members.

I was fortunate to be a TA for several undergraduate circuit courses. The interactions

with undergraduate students in my TA recitations always made me reexamine my understanding of basic yet important circuit theory; for that I wish to thank them.

My sincere thanks go to my dear Korean friends in both Korea and United States.

Although I cannot name them all, I am grateful for their support and encouragement.

Last but not least, the group of people who deserve the most credit is without doubt my family. Without their love and support, this work would never have been possible.

Page 14: KIM

xiii

VITA

1975 Born, Seoul, Korea

1997 B.S. in Electrical Engineering

Seoul National University, Seoul, Korea

1997-2001 Digital IC Design Engineer, Xeline Inc, Seoul, Korea

2001-2003 Teaching Assistant, Electrical Engineering Dept., UCLA

2002-2004 Research Assistant, CKY Group, Electrical Engineering Dept., UCLA

7-9/2003 Intern, Barcelona Design Inc, Newark, California

Page 15: KIM

xiv

Abstract of the Thesis

Error Reduction Techniques in Geometric-Programming Based

Mixed-Mode Circuit Optimization

by

Jintae Kim

Master of Science in Electrical Engineering

University of California, Los Angeles, 2004

Professor Chih-Kong Ken Yang, Chair

Recently it has been shown that various mixed-mode circuit design problems can be

optimized with great efficiency via geometric programming, which is a special type of

convex optimization problem. However, the previous literature has not completely described what the limiting factors in this approach are. In particular, the origins of the discrepancy between the results obtained from geometric programming optimization and

traditional circuit simulation have not been identified clearly.


The thesis begins with a brief overview of geometric programming, followed by a

complete description of the error sources between geometric programming based

optimization and traditional circuit simulation. We then suggest error reduction methods

from two different points of view. First, device modeling based on piece-wise linear

function fitting is introduced to create accurate active and passive device models. Also,

device model refinement through outlier classification is presented. Second, several GP

description techniques that help formulate the circuit design more rigorously are provided.

These methods are applied to the design of a two-stage opamp and an on-chip inductor.


CHAPTER 1 Introduction

The recent trend in circuit design is an increasing integration of analog and digital

functions. According to the market research firm IBS Group (www.ibsresearch.com), the

mixed-signal system-on-chip market is expected to grow at a very high rate, on the order

of 40 percent in the next five years. In 2005, it is expected that 75 percent of commercial

chips will contain some analog and mixed-mode circuitry.

In order to integrate analog and digital functions while meeting time-to-market and cost-effectiveness requirements, chip design relies increasingly on computer-aided design automation. As is widely known, the digital design community has well-developed suites of design automation tools that have helped designers significantly enhance productivity;

meanwhile design tools for analog and mixed-signal circuits still lag. To bridge the gap,

there has been extensive research in computer-aided design of analog and mixed-signal

circuits such as [1] and [3].

Analog design synthesis has been a particularly elusive goal. Among many

techniques that have been explored, analog design synthesis based on geometric


programming (GP) has attracted considerable attention by proving its viability in

optimizing CMOS op-amps [8], switched-capacitor filters [7], pipelined ADCs [9], phase-locked loops [4], and CMOS DC-DC buck converter design [14].

While successful in demonstrating their viability, the publications have not

extensively identified or addressed the limitations of this method. In particular, the

reliability of this method depends on the accuracy of the prediction from GP optimization.

In this research work we identify the sources of error in GP and investigate several

methods to reduce the error.

The remainder of this chapter is organized as follows. Section 1.1 provides an

overview of geometric programming, which is the background of this work. Section 1.2 reviews previous work that used geometric programming in mixed-mode circuit design.

Finally, the overall roadmap of the thesis is presented in section 1.3.


1.1 Geometric programming overview

Geometric programming is a special type of optimization problem that can be solved as

convex optimization problem. In this section, we give a brief overview of convex

optimization and geometric programming.

1.1.1 Convex set, convex function, and convex optimization

A set $C \subseteq \mathbf{R}^n$ is convex if the line segment between any two points in $C$ lies in $C$, i.e.,

\[
x_1, x_2 \in C, \quad 0 \le \theta \le 1 \;\Rightarrow\; \theta x_1 + (1-\theta) x_2 \in C. \tag{1.1}
\]

For example, the set (a) in Figure 1.1 is a convex set whereas the set (b) is nonconvex because the line segment lies outside of the set.

A function $f : \mathbf{R}^n \to \mathbf{R}$ is convex if the domain of $f$, $\operatorname{dom} f$, is a convex set and $f$ satisfies

\[
f(\alpha x + \beta y) \le \alpha f(x) + \beta f(y) \tag{1.2}
\]

for all $x, y \in \mathbf{R}^n$ and all $\alpha, \beta \in \mathbf{R}$ with $\alpha + \beta = 1$, $\alpha \ge 0$, $\beta \ge 0$. Geometrically, inequality (1.2) means that the line segment between $(x, f(x))$ and $(y, f(y))$ lies above the graph

of $f$, as shown in Figure 1.2.

Figure 1.1: Examples of a convex set and a non-convex set: (a) is a convex set, (b) is a nonconvex set because the line segment lies outside of the set

Figure 1.2: Example of a convex function.

A convex optimization problem is a special type of mathematical optimization problem

that has the following form.


\[
\begin{aligned}
\text{minimize}\quad & f_0(x) \\
\text{subject to}\quad & f_i(x) \le b_i, \quad i = 1, \ldots, m \\
& a_i^T x = b_i, \quad i = 1, \ldots, p
\end{aligned}
\tag{1.3}
\]

where the functions $f_0, \ldots, f_m : \mathbf{R}^n \to \mathbf{R}$ are convex functions. Compared with a general optimization problem, the convex problem has the following additional requirements:

The objective function must be convex,

The inequality constraint functions must be convex,

The equality constraints $a_i^T x = b_i$ must be linear.

We note an important property: the feasible set of a convex optimization problem is convex, since it is the intersection of the domain of the problem,

\[
\mathcal{D} = \bigcap_{i=0}^{m} \operatorname{dom} f_i,
\]

with the (convex) sublevel sets $\{x \mid f_i(x) \le b_i\}$ and the hyperplanes $\{x \mid a_i^T x = b_i\}$.

Thus, in a convex optimization problem, we minimize a convex objective function over a

convex set, and therefore any locally optimal point is also globally optimal. This is a big

advantage over general nonlinear-nonconvex optimization problems that can potentially

have numerous local optima, often needing computationally-intensive heuristics to

prove global optimality.


1.1.2 Geometric programming

Geometric Programming (GP) is an optimization problem that has the following format

\[
\begin{aligned}
\text{minimize}\quad & f_0(x) \\
\text{subject to}\quad & f_i(x) \le 1, \quad i = 1, \ldots, m \\
& h_i(x) = 1, \quad i = 1, \ldots, p \\
& x_i > 0, \quad i = 1, \ldots, n
\end{aligned}
\tag{1.4}
\]

where $f_0, \ldots, f_m$ are posynomial functions and $h_1, \ldots, h_p$ are monomial functions. We refer to the geometric program of the form (1.4) as a standard-form geometric program.

A posynomial function, or simply a posynomial, is defined as

\[
f(x) = \sum_{k=1}^{K} c_k\, x_1^{a_{1k}} x_2^{a_{2k}} \cdots x_n^{a_{nk}} \tag{1.5}
\]

where $c_k > 0$ and $a_{ik} \in \mathbf{R}$. The exponents $a_{ik}$ of a posynomial can be any real numbers, but the coefficients $c_k$ must be positive. Posynomials are closed under addition, multiplication, and nonnegative scaling. When the posynomial has a single term in the summation, we call it a monomial. In other words,

\[
h(x) = c\, x_1^{a_1} x_2^{a_2} \cdots x_n^{a_n} \tag{1.6}
\]

is a monomial. Note that monomials are closed under division whereas posynomials are not.


The following is a typical example of a geometric program:

\[
\begin{aligned}
\text{maximize}\quad & x/y \\
\text{subject to}\quad & 2 \le x \le 3 \\
& x^2 + 3y/z \le \sqrt{y} \\
& x/y = z^2
\end{aligned}
\]

The variables are $x, y, z \in \mathbf{R}$ (with the implicit constraint $x, y, z > 0$). Using simple transformations, we obtain the equivalent standard-form GP problem as follows:

\[
\begin{aligned}
\text{minimize}\quad & x^{-1} y \\
\text{subject to}\quad & 2x^{-1} \le 1, \quad (1/3)x \le 1 \\
& x^2 y^{-1/2} + 3 y^{1/2} z^{-1} \le 1 \\
& x y^{-1} z^{-2} = 1
\end{aligned}
\]

Clearly, the problem has posynomial inequality constraints and a posynomial objective function as well as a monomial equality constraint, and thus can be called a GP.
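As a minimal sketch of how such a GP is handed to a solver (not from the thesis, whose optimizations run in MATLAB with MOSEK; this assumes the CVXPY package and its geometric-programming mode), the toy problem above can be written almost verbatim. The solver either returns the optimum or unambiguously reports infeasibility of the specification.

```python
# Toy GP above expressed with CVXPY's geometric-programming mode (assumption:
# CVXPY is installed). Variables in a GP are implicitly positive.
import cvxpy as cp

x = cp.Variable(pos=True)
y = cp.Variable(pos=True)
z = cp.Variable(pos=True)

constraints = [
    2 <= x, x <= 3,               # simple monomial bounds on x
    x**2 + 3*y/z <= y**0.5,       # posynomial <= monomial
    x/y == z**2,                  # monomial equality constraint
]
prob = cp.Problem(cp.Maximize(x/y), constraints)
prob.solve(gp=True)               # the log-log (convex-form) transformation is done internally
print(prob.status, prob.value, x.value, y.value, z.value)
```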

1.1.3 Geometric programming in a convex form

Geometric programs are not in general convex optimization problems, but they can be transformed to convex optimization problems by a change of variables and a transformation of the objective and constraint functions. We begin with the change of variables $y_i = \log x_i$, or


equivalently $x_i = e^{y_i}$. If $f$ is the monomial function of $x$ given in (1.6), i.e., $f(x) = c\, x_1^{a_1} x_2^{a_2} \cdots x_n^{a_n}$, then

\[
f(x) = f(e^{y_1}, \ldots, e^{y_n}) = c\,(e^{y_1})^{a_1} \cdots (e^{y_n})^{a_n} = e^{a^T y + b},
\]

where $b = \log c$. Note that the change of variables $y_i = \log x_i$ turns a monomial function into the exponential of a linear function of $y$.

Similarly, if $f$ is the posynomial given by (1.5), i.e., $f(x) = \sum_{k=1}^{K} c_k\, x_1^{a_{1k}} x_2^{a_{2k}} \cdots x_n^{a_{nk}}$, then

\[
f(x) = \sum_{k=1}^{K} e^{a_k^T y + b_k},
\]

where $a_k = (a_{1k}, \ldots, a_{nk})$ and $b_k = \log c_k$. After the change of variables, a posynomial becomes a sum of exponentials of linear functions.

Now, the geometric program (1.4) can be expressed in terms of the new variable $y$ as

\[
\begin{aligned}
\text{minimize}\quad & \sum_{k=1}^{K_0} e^{a_{0k}^T y + b_{0k}} \\
\text{subject to}\quad & \sum_{k=1}^{K_i} e^{a_{ik}^T y + b_{ik}} \le 1, \quad i = 1, \ldots, m \\
& e^{g_i^T y + h_i} = 1, \quad i = 1, \ldots, p,
\end{aligned}
\]


where $a_{ik} \in \mathbf{R}^n$, $i = 0, \ldots, m$, are the exponents of the posynomial objective and inequality constraints, and $g_i \in \mathbf{R}^n$, $i = 1, \ldots, p$, are the exponents of the monomial equality constraints of the original geometric program. The next step is to transform the objective and constraint functions by taking the logarithm. This results in the following problem:

\[
\begin{aligned}
\text{minimize}\quad & \tilde{f}_0(y) = \log\Big(\textstyle\sum_{k=1}^{K_0} e^{a_{0k}^T y + b_{0k}}\Big) \\
\text{subject to}\quad & \tilde{f}_i(y) = \log\Big(\textstyle\sum_{k=1}^{K_i} e^{a_{ik}^T y + b_{ik}}\Big) \le 0, \quad i = 1, \ldots, m \\
& \tilde{h}_i(y) = g_i^T y + h_i = 0, \quad i = 1, \ldots, p.
\end{aligned}
\tag{1.7}
\]

Since the functions $\tilde{f}_i(y)$ are convex (as will be proved in section 1.1.4) and the functions $\tilde{h}_i(y)$ are linear, this problem is a convex optimization problem. We refer to (1.7) as a geometric

program in convex form. To distinguish it from the original geometric program, we

sometimes refer to (1.4) as a geometric program in posynomial form. Because the

transformation between posynomial form and convex form does not involve any

computation, the problem data for the two problems are the same. It simply changes the

form of the objective and constraint functions.

If the posynomial objective and constraint functions all have only one term, i.e., are

monomials, then the convex form geometric program (1.7) reduces to a general linear

program. We can therefore consider geometric programming to be a generalization of


linear programming.

1.1.4 Convexity of the posynomial after the transformation

The convexity of a function can be shown by verifying the inequality condition (1.2) or by checking the first- or second-order conditions for a convex function, i.e.,

\[
f(y) \ge f(x) + \nabla f(x)^T (y - x) \quad \text{for all } x, y \in \operatorname{dom} f \tag{1.8}
\]

or

\[
\nabla^2 f(x) \succeq 0, \quad x \in \operatorname{dom} f. \tag{1.9}
\]

For example, any linear function is convex because its second derivative is zero, therefore satisfying (1.9).

To prove the convexity of the posynomial after the transformation, we first note that the function $\tilde{f}_i(y)$ in (1.7) is in fact the log-sum-exp function $f(x) = \log(e^{x_1} + \cdots + e^{x_n})$. To prove the convexity of the log-sum-exp function, we note that the Hessian of this function is

\[
\nabla^2 f(x) = \frac{1}{(\mathbf{1}^T z)^2}\Big( (\mathbf{1}^T z)\,\mathbf{diag}(z) - z z^T \Big),
\]

where $z = (e^{x_1}, \ldots, e^{x_n})$. To verify $\nabla^2 f(x) \succeq 0$ we must show that for all $v$,


$v^T \nabla^2 f(x)\, v \ge 0$, i.e.,

\[
v^T \nabla^2 f(x)\, v = \frac{1}{(\mathbf{1}^T z)^2}\left( \Big(\sum_{i=1}^{n} z_i\Big)\Big(\sum_{i=1}^{n} v_i^2 z_i\Big) - \Big(\sum_{i=1}^{n} v_i z_i\Big)^{\!2} \right) \ge 0.
\]

But this follows from the Cauchy-Schwarz inequality $(a^T a)(b^T b) \ge (a^T b)^2$ applied to the vectors with components $a_i = v_i \sqrt{z_i}$, $b_i = \sqrt{z_i}$.

We make another interesting and insightful interpretation. The log-sum-exp function

can be interpreted as a differentiable approximation of the max function, since

\[
\max\{x_1, \ldots, x_n\} \le \log(e^{x_1} + \cdots + e^{x_n}) \le \max\{x_1, \ldots, x_n\} + \log n \tag{1.10}
\]

is valid for all $x$. From (1.7), we note that

\[
\tilde{f}_i(x) = \log\big(e^{a_1^T x + b_1} + e^{a_2^T x + b_2} + \cdots + e^{a_K^T x + b_K}\big),
\]

which then satisfies

\[
\max\{a_1^T x + b_1, \ldots, a_K^T x + b_K\} \;\le\; \tilde{f}_i(x) \;\le\; \max\{a_1^T x + b_1, \ldots, a_K^T x + b_K\} + \log K. \tag{1.11}
\]

Interestingly, both the lower and upper bound in (1.11) are convex functions as

well. It is shown as follows. First, by using the definition in (1.2), one can easily show

that if $f_1, \ldots, f_m$ are convex, then their pointwise maximum

\[
f(x) = \max\{f_1(x), \ldots, f_m(x)\}
\]

is also a convex function. Second, the function

\[
f(x) = \max\{a_1^T x + b_1, \ldots, a_K^T x + b_K\} \tag{1.12}
\]


defines a piecewise-linear function with K regions. It is convex since it is the pointwise

maximum of linear functions and linear functions are convex. In (1.11), the lower bound

is exactly this piecewise-linear function and the upper bound is the piecewise-linear function shifted by a constant amount; thus both bounds are convex functions.

What we can infer from this observation is twofold. First, a posynomial (after the transformation) is a smooth and differentiable approximation of a piecewise-linear function (Figure 1.3). Second, a piecewise-linear function can be used as a fitting template for a posynomial and, in fact, for any convex function. The second observation will be the fundamental basis of the suggested device modeling method in chapter 3.
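As a quick numerical illustration of the bounds in (1.10) (a sketch, not part of the thesis; plain NumPy is assumed):

```python
# Numerical check of (1.10): max(x) <= log-sum-exp(x) <= max(x) + log(n).
import numpy as np

x = np.array([0.3, -1.2, 2.5, 0.0, 1.1])         # an arbitrary point, n = 5
lse = np.log(np.sum(np.exp(x)))                   # smooth, differentiable "max"
print(x.max(), lse, x.max() + np.log(x.size))     # lse lies between the two bounds
```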

1.1.5 Generalized geometric programming

In addition to the standard GP, more relaxed versions of optimization problems can be

transformed into standard form GP by introducing dummy (or slack) variables, which

further extends the application of GP. Generalized geometric programming is one of them.


Figure 1.3: Posynomial as a smooth approximation of piece-wise linear function

A generalized posynomial is a function of the form

\[
h(x) = \phi\big(f_1(x), \ldots, f_k(x)\big)
\]

where $\phi : \mathbf{R}^k \to \mathbf{R}$ and $f_i : \mathbf{R}^n \to \mathbf{R}$ are posynomials, and all the exponents of $\phi$ are nonnegative. For example, suppose $\phi(z_1, z_2) = 2 z_1^{0.3} z_2^{1.2} + z_1 z_2^{0.5} + 2$ and $f_1$ and $f_2$ are posynomials. Then the function

\[
h(x) = \phi\big(f_1(x), f_2(x)\big) = 2 f_1(x)^{0.3} f_2(x)^{1.2} + f_1(x)\, f_2(x)^{0.5} + 2
\]

is a generalized posynomial. Note that h is not a posynomial (unless f1 and f2 are


monomials or constants).

A generalized geometric program is an optimization problem of the form

\[
\begin{aligned}
\text{minimize}\quad & h_0(x) \\
\text{subject to}\quad & h_i(x) \le 1, \quad i = 1, \ldots, m \\
& g_i(x) = 1, \quad i = 1, \ldots, p,
\end{aligned}
\tag{1.13}
\]

where $g_1, \ldots, g_p$ are monomials and $h_0, \ldots, h_m$ are generalized posynomials.

(1.13) can be expressed as an equivalent standard form GP by introducing slack variables

as follows. By definition of generalized posynomial, a generalized GP has the form

\[
\begin{aligned}
\text{minimize}\quad & \phi_0\big(f_{01}(x), \ldots, f_{0k_0}(x)\big) \\
\text{subject to}\quad & \phi_i\big(f_{i1}(x), \ldots, f_{ik_i}(x)\big) \le 1, \quad i = 1, \ldots, m \\
& g_i(x) = 1, \quad i = 1, \ldots, p,
\end{aligned}
\]

where the functions $\phi_i$ are posynomials with nonnegative exponents, the functions $f_{ij}$ are posynomials, and the functions $g_i$ are monomials. This is equivalent to the following GP

\[
\begin{aligned}
\text{minimize}\quad & \phi_0\big(t_{01}, \ldots, t_{0k_0}\big) \\
\text{subject to}\quad & \phi_i\big(t_{i1}, \ldots, t_{ik_i}\big) \le 1, \quad i = 1, \ldots, m \\
& f_{ij}(x)/t_{ij} \le 1, \quad i = 0, \ldots, m, \quad j = 1, \ldots, k_i \\
& g_i(x) = 1, \quad i = 1, \ldots, p,
\end{aligned}
\tag{1.14}
\]

with variables $x$ and $t_{ij}$, $i = 0, \ldots, m$, $j = 1, \ldots, k_i$. The equivalence follows from the fact that the exponents of $\phi_i$ are nonnegative, so $\phi_i$ is nondecreasing in each of its arguments. Therefore we will have $t_{ij} = f_{ij}(x)$ at the optimum.

1.2 Previous Work

Geometric programming has been used extensively in VLSI design. The earliest applications were transistor and wire sizing for Elmore delay minimization in digital

circuits as in TILOS [6]. Another traditional application is the placement and routing of

digital cell libraries as in GORDIAN [13], which was based on linear programming (LP)¹ and L1-norm minimization, and [18].

In the area of analog and mixed-mode circuit design, convex optimization has been

used in several papers [15],[16] and most notably by M. Hershenson, S. Boyd, and T. Lee

[10]. The work was followed by a series of applications to various analog circuit design

including on-chip inductor [11], switched-capacitor circuit [7], pipelined analog-to-digital

converter [9], and phase-locked loop [4]. The authors formulate the specific circuit design

problem as a GP, and they obtain optimal sizes and bias currents by solving the GP. Since

GP is solved as a convex optimization problem after the transformation, it inherits all the

¹ Linear programming is a subset of geometric programming.


advantages of convex optimization: globally optimal solutions with great efficiency,

unambiguous detection of infeasibility of a given specification, sensitivity analysis via the

dual solution obtained, etc. While the technique has been shown to be applicable to

various analog designs, the limited accuracy of the device models and limitations in the

problem description can potentially lead to significant deviations from the true optimum.

This thesis follows the same basic approach of using geometric programming for optimizing analog circuit designs but focuses on several extensions that reduce these errors.

1.3 Organization

The following chapters are organized as follows.

Chapter 2 illustrates the sources of error in GP-based analog circuit optimization

using a design example that appeared in a previous publication [10]. We describe two sources

of errors responsible for the discrepancy between GP optimization and SPICE simulation:

modeling error and bias estimation error.

Chapter 3 begins with device modeling in GP and suggests a new modeling method that enables us to approach the utmost limit of modeling accuracy. In addition, we


introduce the concept of outlier classification to further improve model accuracy.

Chapter 4 describes several GP-description techniques that reduce the bias

estimation error while being aware of modeling errors.

Chapter 5 illustrates how the methods presented in chapters 3 and 4 are applied to the design of a two-stage opamp and an on-chip spiral inductor.

We give concluding remarks in Chapter 6 by summarizing the contributions of this work and possible future work.


CHAPTER 2 Sources of Error in GP-based Design

GP-based circuit optimization involves the evaluation of the circuit performances in order

to quantify the optimality. This naturally leads us to obtain the prediction of optimal

performance specifications as by-products, but there always exists certain level of

prediction error. In this chapter, we examine the origin of prediction error in GP

optimization. For practical brevity, we use SPICE simulation as our benchmark for

silicon implementation but same principles are applied to the real silicon. Section 2.1 and

2.2 illustrates the sources of error by revisiting the work in previous publication. More

specifically, the section 2.1 illustrates how the modeling error affects the prediction error

in GP, and section 2.2 describes the bias estimation error and its impact on GP prediction.

Section 2.3 summarizes the chapter.

2.1 Modeling error


Figure 2.1: Two-stage op-amp with PMOS input differential stage

Previous work [10] casts a 2-stage opamp design problem (Figure 2.1) as a GP using

monomial device models. It obtains optimal sizes of all transistors, bias current Ibias and

compensation capacitor Cc as a result of solving GP optimization. Although GP

optimization produced excellent agreement with SPICE simulation for long-channel

transistors, this section will show that the predictions from GP optimization for short-

channel transistors can deviate considerably from the SPICE simulation. In order to

extend the work in [10] to the short-channel regime, we created new device models for

the TSMC 0.18-µm technology while keeping the same problem formulation as [10].


Table 2.1: Max/mean percentage errors in monomial device model

Model parameter | Variables | % error in NMOS | % error in PMOS
1/gm | W, L, IDS, VDS | 24.5/12.8 | 16.7/9.3
gds | W, L, IDS, VDS | 78.1/43.1 | 80.2/37.8
VOV (VGS-VTH) | W, L, IDS, VDS | 19.75/7.7 | 13.2/5.5

Benchmark data are generated by sets of SPICE simulations. The monomial fitting

method, which is described in section 3.2 and Appendix A, is used to create required

device models. Note that the same approach can be applied to measured data.

Since accurate transistor characteristics in models such as BSIM3v3 are in general

complicated nonlinear functions, the approximation using a monomial leads to some

degree of modeling error. Note that the new monomial models are fitted as functions of

the drain-source voltage (VDS) in addition to width (W), length (L) and drain current (IDS)

since the characteristics of short-channel devices are affected considerably by the drain-

source voltage as well.

Table 2.1 shows the modeling error in saturated devices. We listed the max/mean percentage modeling error, $\big(f_{\text{GP model}} - f_{\text{SPICE}}\big)/f_{\text{SPICE}} \times 100$, for selected monomial design parameters.

Errors are reasonably small for all characteristics except for the gds error, where the mean is over 40%². We note that these modeling errors translate into prediction error. For

example, GP evaluates the inverse of the small-signal gain as $1/A_V = (1/g_m) \cdot g_{ds}$; therefore, errors in model parameters such as $1/g_m$ and $g_{ds}$ in Table 2.1 result in prediction error in the small-signal gain. Therefore, it is obvious that the result obtained from GP optimization is only as reliable as the model we use, and thus an accurate device modeling method that works in the short-channel regime is required.

2.2 Bias estimation error

Table 2.2 illustrates another source of error, which we refer to as bias estimation error,

and its impact on the gain specification.

We point out that the way bias points are calculated in GP is quite different from that of SPICE. In SPICE simulations, bias conditions (i.e., drain currents and terminal voltages of transistors) are determined through a bias-point calculation, which essentially solves a set of simultaneous equations originating from KVL and KCL. The small-signal performance specifications (i.e., small-signal gain) depend on the bias conditions.

² gds (or, equivalently, ro) is a very sensitive parameter and is difficult to model even in BSIM3v3.


Quantity | GP | SPICE
VDS4 [V] | 0.79 | 0.61
VDS6 [V] | 0.68 | 0.78
IDS4 [uA] | 23.5 | 23.3
IDS6 [uA] | 162.2 | 175.7
gds4 = f(W4, L4, IDS4, VDS4) [u] | 2.20 | 3.93
gds6 = f(W6, L6, IDS6, VDS6) [u] | 28.6 | 44.2
Small-signal gain [dB] | 70 | 58.11

Table 2.2: GP prediction and SPICE simulation discrepancy

In GP-optimization, we obtain the size (W and L), biasing (IDS and VDS) and small-

signal characteristics of all transistors simultaneously as a result of solving the problem.

Since small signal characteristics such as gds in Table 2.2 are functions of the biasing, the

errors in biasing (IDS and VDS) can result in significant deviation in the small-signal

performance specifications. Table 2.2 shows an example based on the 2-stage opamp

design in Figure 2.1. The prediction errors in the biasing (VDS and IDS in the first four rows of Table 2.2), compounded with the inherent model error of gds, result in the prediction error of


gds, consequently leading to a very large prediction error of small signal gain.

The prediction errors in biasing depend on how completely the bias conditions are

formulated in GP, as well as on the modeling errors of the large-signal device models (i.e., voltage models). Therefore, a complete biasing description as well as an accurate device model is necessary for a reliable GP optimization result.

2.3 Summary

In this chapter, we examined the origins of the errors in GP-based circuit optimization.

Based on the observations, we propose two methods to help minimize the prediction error.

First, in chapter 3, we minimize modeling error using a piecewise-linear modeling

method that achieves the highest fitting accuracy for design parameters. Second, in order

to minimize the bias estimation error, chapter 4 suggests a method to rigorously enforce

KVL and KCL in GP. Also, we describe a GP-formulation technique that can compensate

for the inherent modeling errors and, in turn, helps the GP predictions meet the specifications.


CHAPTER 3 Device Modeling

Active and passive device modeling has been one of the most active research areas in integrated circuit design. As for the MOS transistor, there are many varieties of models ranging from the simple square-law model to the state-of-the-art BSIM4 and EKV models. In this chapter, we describe device modeling from the perspective of GP optimization. Section 3.1 presents the basic idea of a device model that is compatible with GP optimization. Section 3.2 reviews published methods in GP-compatible device modeling. Section 3.3 suggests a piecewise-linear device modeling method that can achieve a very accurate GP-compatible device model. In section 3.4, we explore the concept of outlier classification

to improve the model accuracy even further. Section 3.5 summarizes the chapter.

3.1 GP-compatible device model

In order to cast the circuit design problem as a GP optimization problem, the device model also has to conform to the requirements of GP, since GP can only handle specific types of constraints: posynomial inequalities and monomial equalities. As such, we need to create special device models that are not only compatible with GP optimization but also sufficient to describe the circuit design problem as a GP. We refer to such a device model as a GP-compatible device model. The idea is best illustrated by the following example.

One of the typical design constraints in amplifier design is to give a lower bound on

the small-signal gain. Since gain is the product of transconductance (gm) and output

impedance (ro), the condition is usually of the following form

\[
g_m \cdot r_o = g_m \cdot \frac{1}{g_{ds}} \ge A_{\min} \tag{3.1}
\]

where $g_{ds}$ is the drain-source conductance and the minimum gain is denoted by $A_{\min}$. By simple manipulation, the expression (3.1) becomes the following inequality:

\[
\frac{1}{g_m} \cdot g_{ds} \cdot A_{\min} \le 1. \tag{3.2}
\]

We note that small-signal design parameters such as $1/g_m$ and $g_{ds}$ are functions of numerous physical variables. Given fixed process parameters, geometry variables

such as width (W) and length (L) and bias variables such as terminal voltages (VDS or VGS)

and drain current (IDS) have direct impacts on these parameters. We select W, L, IDS, and


VDS , which are independent, as our design variables such that we have the following

relationship.

\[
\begin{aligned}
1/g_m &= f_1(W, L, I_{DS}, V_{DS}) \\
g_{ds} &= f_2(W, L, I_{DS}, V_{DS}) \\
&\;\,\vdots
\end{aligned}
\tag{3.3}
\]

Returning to the gain specification example, (3.2) remains a posynomial inequality as long as $1/g_m$ and $g_{ds}$ are either monomials or posynomials of the design variables, i.e.,

\[
\begin{aligned}
1/g_m &= \sum_{k=1}^{K_{1/g_m}} c_{g_m,k}\; W^{\alpha_{1k}} L^{\alpha_{2k}} I_{DS}^{\alpha_{3k}} V_{DS}^{\alpha_{4k}} \\
g_{ds} &= \sum_{k=1}^{K_{g_{ds}}} c_{g_{ds},k}\; W^{\beta_{1k}} L^{\beta_{2k}} I_{DS}^{\beta_{3k}} V_{DS}^{\beta_{4k}}
\end{aligned}
\tag{3.4}
\]

Note that $g_m$ and $1/g_{ds}$ being posynomials does not result in a posynomial inequality, because posynomials are not closed under division.

From the above example, we can summarize the device modeling method compatible with GP as follows. First, we need to generate design parameter models such as 1/gm and gds, not just gm and 1/gds, in order to write the design problem as GP-compatible inequalities. Second, the selection and type of design parameters are not arbitrary, but depend on where and how they appear in actual design constraints. In other words, we need to write the design problem as a GP before determining which modeling parameters are needed.


Model | Explanation | Required type | Dependency
1/gm | Inverse of transconductance | Posynomial, monomial | W, L, IDS, VDS
gds | Inverse of output resistance | Posynomial | W, L, IDS, VDS
VGS | Gate-source voltage | Posynomial, monomial | W, L, IDS, VDS
VOV | VGS - VTH | Posynomial, monomial | W, L, IDS, VDS
VDSAT | Min. required VDS for saturation | Posynomial | W, L, IDS, VDS
CGS | Intrinsic gate capacitance | Posynomial, monomial | W, L, IDS
CGD | Gate-drain overlap capacitance | Posynomial, monomial | W
CJD | Drain-bulk junction capacitance | Posynomial, monomial | W, VDS
CJS | Source-bulk junction capacitance | Posynomial, monomial | W

Table 3.1: GP device model parameters for a MOS transistor

Table 3.1 shows some typical device model parameters for a MOS transistor, the required type of the models, and their dependency on the design variables.

3.2 Review of published approaches


A few publications have addressed GP-compatible device modeling, such as [8] and, more recently, [5]. In this section, we review the published approaches to device modeling in GP.

3.2.1 Monomial models from physical law

Monomial functions are very conveniently used in GP formulations because monomials are closed even under division. In some, although rare, cases monomial models can be derived directly from physical laws. For example, let us consider the transconductance from the square-law equation. One can easily derive that the transconductance is given by (3.5).

\[
g_m = \sqrt{2\,\mu_n C_{ox}\,\frac{W}{L}\,I_D} = \sqrt{2\,\mu_n C_{ox}}\; W^{0.5}\, L^{-0.5}\, I_D^{0.5}. \tag{3.5}
\]

The expression (3.5) turns out to be a monomial function of W, L and ID . Note that 1/gm

is also a monomial function.

Although simple and easy to use, these monomial device models are not very useful

because this simple square-law equation does not represent the physical behavior of


today’s transistor accurately, making the result of the optimization unrealistic.

3.2.2 Monomial models via numerical fitting

A potentially better method is to find the best monomial function $f(x_i) \approx y_i$ via numerical fitting for $m$ observed or simulated values $(x_i, y_i)$, $i = 1, \ldots, m$, $y_i \in \mathbf{R}$, $x_i \in \mathbf{R}^k$. In other words, we want to solve the following optimization problem

\[
\begin{aligned}
\text{minimize}\quad & \big\| \big(f(x_1)-y_1,\; \ldots,\; f(x_m)-y_m\big) \big\|_p \\
\text{subject to}\quad & f(x) = c \cdot x_1^{\alpha_1} \cdots x_k^{\alpha_k},
\end{aligned}
\tag{3.6}
\]

with variables $c \in \mathbf{R}_+$ and $\alpha_1, \ldots, \alpha_k \in \mathbf{R}$,

where $\|\cdot\|_p$ is known as the p-norm. The p-norm is a convex function and can be used to measure the size of its argument. The details on norm minimization are described in Appendix A.

A simple change of variables transforms the seemingly complex (3.6) into a linear optimization problem that can be solved very easily. Taking the logarithm of both sides of the monomial function in (3.6) results in (3.7), which is a linear function of $\log x_i$.


\[
\log f(x) = \alpha_1 \log x_1 + \alpha_2 \log x_2 + \cdots + \alpha_k \log x_k + \log c \tag{3.7}
\]

Hence, by using the variables $\tilde{x}_i = \log x_i$ instead of $x_i$, the entire problem (3.6) becomes a linear fitting problem. The details of solving the linear fitting problem are described in Appendix A.

3.2.3 Posynomial models from fitting

Recall that in Table 3.1, some of the design parameters can be posynomial functions.

Since a posynomial function is a sum of several monomial functions, the model can be fitted to actual characteristics with greater accuracy. Unfortunately, to the author's knowledge there has been no easy and obvious convex formulation for posynomial fitting. Only a few publications have addressed posynomial fitting. Most notably, the work by W. Dames [5] proposes the idea of generating posynomial models based on fitting. However, their work is based on "template estimation", which is to estimate the exponents and the number of terms of the posynomial a priori; hence it is hard to prove the optimality of the fitting.

terms of posynomial apriori, hence it is hard to prove the optimality of the fitting.

Another method described in [8] is based on Newton’s method, which is a general

nonlinear optimization method, and cannot guarantee the global optimality of the fitting.


The accuracy can depend significantly on the initial estimates of the exponents, coefficients, and number of terms in the posynomial function.

3.3 Piecewise-Linear (PWL) model

In this section, we propose an alternative fitting algorithm which achieves the smallest

possible device modeling error for a given set of data. Unlike the methods mentioned in

the previous section, this method does not require heuristic interventions, and therefore

can achieve globally-optimal model fitting accuracy. Section 3.3.1 describes the

underlying principle of this method and section 3.3.2 proposes a practical variation of the original method that reduces computational complexity. Section 3.3.3 illustrates the idea by applying the method to the modeling of a 0.18-um CMOS device.

3.3.1 Principle and method

First, the piecewise-linear function $f(x) = \max\{a_1^T x + b_1, \ldots, a_K^T x + b_K\}$ defined in (1.12) is a convex function, because a PWL function is the pointwise maximum of linear functions and linear functions are convex. Second, we note that a data set $(y_i, f_i)$, $i = 1, \ldots, m$, generated by an arbitrary convex function $f_i = f(y_i)$ can be fitted by a PWL function with an arbitrarily small fitting error if we are allowed to tailor the PWL function with an arbitrarily large number of segments. An easy example of this argument is the maximum of the tangential planes defined at every $y_i$. In other words, the PWL function $\hat{f}(y) = \max_i\,(a_i^T y + b_i)$, where $a_i^T y + b_i$ represents the tangential planes, will exactly pass through the original data set $(y_i, f_i)$, $i = 1, \ldots, m$, with no fitting error. Figure 3.1 illustrates this idea.

Figure 3.1: The idea of PWL function fitting to the convex function

Because a posynomial function becomes convex after the logarithmic transformation, as shown in section 1.1.4, we can replace the posynomial function by a PWL function in the logarithmic variable and function-value space. The following describes how such a PWL function can be created for a given data set $(y_i, f_i)$, $i = 1, \ldots, m$. We cast the PWL function fitting problem as the following linear optimization problem [2]

\[
\begin{aligned}
\text{minimize}\quad & \|\hat{f} - f\|_p \\
\text{subject to}\quad & \hat{f}_j \ge \hat{f}_i + g_i^T (y_j - y_i), \quad i, j = 1, \ldots, m,
\end{aligned}
\tag{3.8}
\]

with variables $\hat{f} \in \mathbf{R}^m$ and $g_1, \ldots, g_m \in \mathbf{R}^k$, for the given data set $(y_i, f_i)$, $i = 1, \ldots, m$, where $y_i \in \mathbf{R}^k$, $f_i \in \mathbf{R}$.

In (3.8), for a given $(y_i, f_i)$ we would like to find $\hat{f}_i$ as close as possible to $f_i$, but only under the condition that all $\hat{f}_i$ are interpolated by the PWL function $\hat{f}(y) = \max_i\{\hat{f}_i + g_i^T (y - y_i)\}$. We find $m$ planes that pass through $(y_i, \hat{f}_i)$ with slopes $g_i$ while minimizing the fitting error denoted by $\|\hat{f} - f\|_p$. The resulting PWL function can be recovered by

\[
f(y)_{\text{fitted}} = \max_{i=1,\ldots,m} \big(\hat{f}_i + g_i^T (y - y_i)\big) = \max_{i=1,\ldots,m} (a_i^T y + b_i) \tag{3.9}
\]

where $a_i = g_i$ and $b_i = \hat{f}_i - g_i^T y_i$.

Figure 3.2: Illustration of the reduced complexity method: (a) given data set, (b) PWL function with all planes, (c) PWL function with fewer planes

Since the fitting is performed in log-domain, we need to express (3.9) in terms of

real-domain variables iyix e= in order to use (3.9) in GP. This is shown as follows

1 2

1 21,...,

th

( ), exp( (log ), ) max

with exp( ) and component in

i i ika a ai ki m

ki i ik i

f x fitted f x fitted c x x x

c b a k a

== = ⋅ ⋅ ⋅⋅⋅

= = ∈ R .

(3.10)

Another meaningful interpretation of (3.10) is that the PWL function is in fact the maximum of $m$ monomial functions in terms of the original variable $x$.
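As an illustration of how (3.8) can be set up in practice (a sketch only; the thesis implementation is in MATLAB with MOSEK, whereas this assumes Python with CVXPY and NumPy, and fit_pwl and eval_pwl are hypothetical helper names):

```python
# A minimal sketch of the PWL fitting problem (3.8); y is (m, k) log-domain
# data, f is (m,) log-domain values. Note the m^2 interpolation constraints.
import cvxpy as cp
import numpy as np

def fit_pwl(y, f, norm=np.inf):
    m, k = y.shape
    fhat = cp.Variable(m)          # fitted values f̂_i
    g = cp.Variable((m, k))        # slopes g_i of the m supporting planes
    cons = []
    for i in range(m):
        # f̂_j >= f̂_i + g_i^T (y_j - y_i) for all j  (convex interpolation)
        cons.append(fhat >= fhat[i] + (y - y[i]) @ g[i])
    cp.Problem(cp.Minimize(cp.norm(fhat - f, norm)), cons).solve()
    a = g.value                                    # slopes a_i = g_i
    b = fhat.value - np.sum(g.value * y, axis=1)   # intercepts b_i = f̂_i - g_i^T y_i
    return a, b

def eval_pwl(a, b, y):
    # Recovered PWL model (3.9): max_i (a_i^T y + b_i) for each row of y.
    return np.max(y @ a.T + b, axis=1)
```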


3.3.2 Reduced complexity algorithm

The challenge in (3.8) is the size of the problem. Since the number of inequalities in (3.8) grows as $m^2$, solving this optimization problem becomes impractical when dealing

with a large data set (m > 10,000). Many practical variants can be contrived to reduce the

size of the problem. For instance, one may relax the original problem in (3.8) by

comparing only data points in the vicinity. Another method is to equate some of the $g_i$ so that we end up with fewer plane-defining data points.

In this section, we present one possible variant with a problem size that grows

linearly with $m$. The idea stems from the assumption that we might need significantly fewer planes than the $m$ planes used in the original method. Figure 3.2 illustrates the idea. Figure 3.2(b) is the PWL function fitted with 5 planes, but it is possible that a PWL function with fewer planes achieves an acceptable fitting error, as in (c), where

we used only 2 planes. In essence, we don’t need a plane at the data point when it is

extrapolated by the linear functions that are defined at adjacent data points.

The algorithm to create such a PWL function is illustrated in Figure 3.3. This method

heuristically finds a near-optimal subset of planes. The method begins with a small subset


of planes, defined as the set S1, and solves the PWL fitting problem. This step is denoted by step 1 in Figure 3.3. The next step is to calculate the fitting error with the PWL model obtained in step 1. If the fitting error is not acceptable, we iteratively add more planes to achieve smaller fitting errors. The planes are added at the point $(y_i, f_i)$ that is responsible for the largest fitting error.

1. Solve the following problem:

\[
\begin{aligned}
\text{minimize}\quad & \|\hat{f} - f\|_p \\
\text{subject to}\quad & \hat{f}_j \ge \hat{f}_i + g_i^T (y_j - y_i), \quad i \in S_1,\; j = 1, \ldots, m,
\end{aligned}
\]

with variables $\hat{f} \in \mathbf{R}^m$ and $g_1, \ldots, g_n \in \mathbf{R}^k$, where $n$ is the number of elements of $S_1$, for the data set $(y_i, f_i)$, $i = 1, \ldots, m$, where $y_i \in \mathbf{R}^k$, $f_i \in \mathbf{R}$.

2. Calculate the fitting error. If the errors are acceptable, quit.

3. Otherwise, find the index $i$ among the $m$ data points that causes the maximum fitting error in step 2.

4. Add $i$ to $S_1$ and go to step 1.

Figure 3.3: Reduced-complexity PWL fitting algorithm
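A sketch of the loop in Figure 3.3, under the same Python/CVXPY assumptions as the previous sketch; the starting subset, tolerance, and plane cap below are illustrative choices, not values from the thesis.

```python
# Reduced-complexity PWL fitting: planes only at the indices in S.
import cvxpy as cp
import numpy as np

def fit_pwl_subset(y, f, S, norm=np.inf):
    m, k = y.shape
    fhat = cp.Variable(m)
    g = cp.Variable((len(S), k))          # one slope per plane-defining point
    cons = [fhat >= fhat[i] + (y - y[i]) @ g[s] for s, i in enumerate(S)]
    cp.Problem(cp.Minimize(cp.norm(fhat - f, norm)), cons).solve()
    a = g.value
    b = fhat.value[S] - np.sum(g.value * y[S], axis=1)
    return a, b

def fit_pwl_reduced(y, f, tol=0.01, max_planes=70):
    S = [int(np.argmin(f))]               # start with a small (here: one-point) subset S1
    while True:
        a, b = fit_pwl_subset(y, f, S)
        err = np.abs(np.max(y @ a.T + b, axis=1) - f)   # per-point fitting error
        if err.max() <= tol or len(S) >= max_planes:
            return a, b
        S.append(int(np.argmax(err)))     # add a plane at the worst-fit point
```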


Parameter | Monomial | PWL(I) | PWL(II)
1/gm | 24.5/12.8 | 0.45/0.27 | 3.9/1.1
gds | 78.1/43.1 | 25.5/15.6 | 24.9/10.8
VGS | 8.3/3.5 | 0.38/0.24 | 1.5/0.38
CGS | 9.5/5.9 | 1.05/0.64 | 1.05/0.5

Table 3.2: Max/mean percentage errors in design parameter models for a TSMC 0.18-um saturated NMOS device

Parameter | Monomial | PWL(I) | PWL(II)
1/L | 16.7/4.2 | 0.58/0.36 | 4.03/1.09
1/Q | 41.0/12.6 | 14.8/8.5 | 22.1/8.53

Table 3.3: Max/mean percentage errors in an on-chip inductor model for TSMC 0.13-um technology

3.3.3 Example

The algorithms described in the previous sections are implemented in MATLAB and tested with the NMOS device in the TSMC 0.18-um technology and an on-chip spiral inductor in

the TSMC 0.13-um technology. MOSEK [12] is the optimization engine.


Figure 3.4: Fitting errors vs. # of planes in Table 3.2

The optimizations are performed on a Xeon 2.8-GHz CPU with 2-GB memory

running Linux. The PWL model fitting for 1000 data points is completed in less than 10

minutes. Table 3.2⁴ and Table 3.3 show the improvements in the modeling errors over monomial models. PWL(I) refers to the original method in (3.8) and PWL(II) refers to the reduced-complexity method illustrated in Figure 3.3. SPICE and ASITIC [17] generate the data used in Table 3.2 and Table 3.3, respectively. Figure 3.4 illustrates the tradeoff between the number of planes used in the PWL(II) method and the fitting error. Approximately 70 planes are sufficient in most cases to achieve a near-optimal result.

⁴ The percentage errors using our model fitting for most parameters (e.g., 1/gm) are on the order of the errors of the BSIM model fitting.

As a last note, we point out that PWL functions are equivalent to a set of linear inequalities and thus are readily handled in GP. The details of using PWL device models in GP are described in section 5.1.7.

3.4 Outlier classification

Although the PWL model discussed in the previous section can achieve greater accuracy than the monomial model, there are inevitable cases where we must use monomial models. For example, we describe current-mirrored devices as having the same overdrive voltage, i.e., $V_{OV1} = V_{OV2}$. Because GP only allows monomial functions for equality constraints, we are forced to use a monomial model of $V_{OV}$, whose modeling errors can be quite significant. In this section, we describe a method that can further

minimize the fitting error by selectively eliminating points with large modeling errors. In 3.4.1, we study the tradeoff between variable space and fitting error as an underlying principle. Section 3.4.2 presents how we exploit the tradeoff studied in 3.4.1 in order to enhance the model accuracy. Section 3.4.3 shows an example.

L \ W | 0.22u-0.6u | 0.6u-1.3u | 1.3u-10.1u | 10.1u-101u
0.18u-0.5u | Bin 1 | Bin 4 | Bin 7 | Bin 10
0.5u-1.2u | Bin 2 | Bin 5 | Bin 8 | Bin 11
1.2u-21u | Bin 3 | Bin 6 | Bin 9 | Bin 12

Table 3.4: Binning map for the TSMC 0.18um BSIM3v3 model

3.4.1 Tradeoff between fitting error and variable space

In any device model that has parameters tuned by numerical fitting, there exists an obvious tradeoff between the variable space and the fitting error. For instance, a typical BSIM model depends on the binning of the model, which divides the width and length space into several bins. Table 3.4 shows a typical binning for a BSIM3v3 model. We expect an improvement in model accuracy because binning enables us to fit the model


equation to the smaller ranges of variables, leading to locally-optimal sets of equations.

Model | Variable range | Max/avg modeling error (minimize average error) | Max/avg modeling error (minimize worst-case error)
VOV, CASE I | 2µm ≤ W ≤ 25µm; 0.18µm ≤ L ≤ 5µm; 2µA ≤ IDS ≤ 6.8mA; 0.3V ≤ VDS ≤ 1.2V | 43.7%/5.0% (# of outliers: 95 out of 961) | 18.8%/8.0% (# of outliers: 376 out of 961)
VOV, CASE II | 2µm ≤ W ≤ 25µm; 0.5µm ≤ L ≤ 5µm; 2µA ≤ IDS ≤ 6.8mA; 0.3V ≤ VDS ≤ 1.2V | 10.6%/2.7% (# of outliers: 8 out of 768) | 7.1%/3.9% (# of outliers: 0 out of 768)

Table 3.5: Monomial modeling for VOV in TSMC 0.18-um technology (outliers are the data points whose modeling error is beyond 10%).

We can therefore say that there is a tradeoff between the range of variable space and

fitting error. The same tradeoff is observed in GP-compatible device models as shown in

Table 3.5. The table describes the results of monomial modeling of the gate-overdrive voltage (VOV = VGS - VTH) in the 0.18-um TSMC technology for two different variable ranges. We note that case II, which has a higher lower bound on the length, achieved better accuracy.


It is worthwhile to mention at this point that the idea of binning does not easily apply to GP. Even if we build binned GP-compatible device models just as in BSIM, it is difficult to employ these models in the optimization process. More specifically, since the GP optimizer has to explore the entire design variable space to find the global optimum, it is hard to prove the global optimality of a result obtained from binned models. The method we suggest in section 3.4.3 deals with the entire design space as a

whole, hence is still compatible with GP optimization.

3.4.2 Domain and outlier

It is instructive to clarify two terms associated with the modeling and classification: a

domain and an outlier.

The domain of a function g(x), denoted by dom g, is the range of variables over which the function is defined. Note that the domain of a convex function must be a convex set, which is a requirement for the function to be convex. For example, the domain of the model VOV, CASE I in Table 3.5 can be defined as follows:


\[
\begin{aligned}
X = \{(W, L, I_{DS}, V_{DS}) \mid\; & 2\,\mu\mathrm{m} \le W \le 25\,\mu\mathrm{m},\; 0.18\,\mu\mathrm{m} \le L \le 5\,\mu\mathrm{m}, \\
& 2\,\mu\mathrm{A} \le I_{DS} \le 6.8\,\mathrm{mA},\; 0.3\,\mathrm{V} \le V_{DS} \le 1.2\,\mathrm{V}, \\
& V_{DS} > V_{DSAT} \}.
\end{aligned}
\tag{3.11}
\]

The lower and upper bounds of each variable come from the range of variables used in model fitting. The last condition, $V_{DS} > V_{DSAT}$, corresponds to the condition for the transistor to be in saturation. Since we model VDSAT as a PWL function, the set X is an intersection of convex sets and is therefore convex.

In the model-fitting context, an outlier is a data point for which the fitting error is

larger than our expectation. In the case of a monomial function, this is often associated with the limited expressive power of the monomial function. For instance, PWL fitting for $1/g_m$ shows a very small fitting error whereas the monomial fitting for $1/g_m$ has a relatively larger error. In Table 3.5, we set the threshold to 10% and the total number of outliers is reported for each case. We observe fewer outliers in the case of VOV, CASE II, which translates to better fitting accuracy. Again, this improvement benefits from the reduced design variable

space through possibly not using the entire available design space provided by the

technology. This is possible because bias points for many analog circuits are typically

constrained to a small portion of the entire variable range.


Figure 3.5: Examples of data versus error in a 2-dimensional variable space. Filled circles represent data points we trust and empty circles are outliers: (a) a case in which the outliers can be discriminated, (b) a case in which they cannot.

3.4.3 Outlier classification

From the above observations, exploiting this tradeoff judiciously is a way of improving model accuracy. In other words, we can redefine the domain of the model function in such a way that the worst offending outliers are filtered out of the design variable space without overly constraining the design space.


Let us consider the example in Figure 3.5. The circles are data points: the filled circles represent data points whose modeling error is acceptable, and the empty circles are the outliers. In case (a), it is clear that there exists a convex set that contains no outliers. Case (b), on the other hand, does not admit a convex set that discriminates the outliers completely; the best we can do is a convex set that contains as few outliers as possible.

One way to find such a set is to take advantage of the concept of a sublevel set. The α-sublevel set of a function f: R^n → R is defined as [2]

    C_α = { x ∈ dom f | f(x) ≤ α }.

Sublevel sets of a convex function are convex for any value of α; the proof follows directly from the definition of convexity. Consequently, we can discriminate the outliers if we can find a convex function f(x) whose sublevel set contains no outlier. Finding such a convex function can be cast as the following feasibility problem for given outlier information

    minimize    0
    subject to  f(x) is convex
                f(x) ≥ 1,   x ∈ outliers
                f(x) ≤ −1,  otherwise.    (3.12)


Any f(x) satisfying the constraints in (3.12) is able to filter out the outliers. The nonstrict inequalities f(x) ≥ 1 and f(x) ≤ −1 in (3.12) come from transforming the strict inequalities into nonstrict ones. In section 3.3 we noted that the condition "f(x) is convex" can be replaced by a set of linear inequalities as in (3.8); therefore the feasibility problem (3.12) can be cast as a linear optimization problem.

As pointed out earlier, (3.12) may be infeasible in practice because outliers can reside in the interior of the domain, as in Figure 3.5(b). In this case, we slightly modify (3.12) so that the domain defined by the sublevel set may contain outliers, and we minimize the number of such outliers in the optimization. This is accomplished by adding slack terms to the inequalities, a technique known as support vector classification [2]. The resulting problem is

    minimize    Σ_i s_i
    subject to  f_j ≥ f_i + g_i^T (x_j − x_i),   i, j = 1, . . . , m
                f_i ≥ 1 − s_i,   x_i ∈ outliers
                f_i ≤ −1 + s_i,  x_i ∉ outliers
                s_i ≥ 0.    (3.13)

The interpretation is as follows. Our goal is to find f_i, g_i, and hopefully sparse nonnegative s_i that satisfy the inequalities in (3.13). We minimize the sum of the s_i because any outlier with s_i > 2 can reside inside the domain defined by f(x) = max_i { f_i + g_i^T (x − x_i) } ≤ −1; minimizing the sum of the s_i therefore yields the best convex set { x | f(x) ≤ −1 }, the one containing as few outliers as possible.

Figure 3.6: Minimum-volume ellipsoid that contains a finite set of points.
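To make the classification step concrete, the following is a minimal sketch of problem (3.13) in Python using cvxpy (the thesis implementation is in MATLAB; the function and variable names here are illustrative only). It searches for values f_i, subgradients g_i, and slack variables s_i.

```python
import numpy as np
import cvxpy as cp

def classify_outliers(X, is_outlier):
    """Solve (3.13): find a max-affine convex f whose -1 sublevel set excludes
    as many outliers as possible. X is m x n; is_outlier is boolean, length m."""
    m, n = X.shape
    out_idx = np.where(is_outlier)[0]
    in_idx = np.where(~is_outlier)[0]

    f = cp.Variable(m)                # function values at the data points
    g = cp.Variable((m, n))           # subgradients at the data points
    s = cp.Variable(m, nonneg=True)   # slack variables

    cons = []
    # Convexity (interpolation) conditions: f_j >= f_i + g_i^T (x_j - x_i)
    for i in range(m):
        cons.append(f >= f[i] + (X - X[i]) @ g[i])
    # Sign conditions with slack
    cons.append(f[out_idx] >= 1 - s[out_idx])
    cons.append(f[in_idx] <= -1 + s[in_idx])

    cp.Problem(cp.Minimize(cp.sum(s)), cons).solve()
    return f.value, g.value, s.value
```

The resulting filter is evaluated at any point x as f(x) = max_i { f_i + g_i^T (x − x_i) }, and the trusted domain is { x | f(x) ≤ −1 }.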

3.4.4 The metric for variable space reduction

In discriminating outliers, we are concerned about the amount of variable-space reduction because we do not want to overly constrain the design space. In order to quantify this reduction, we define a numerical metric that captures the volume of the design space. Since the design space can have an arbitrary shape in the n-dimensional variable space, it is not easy to calculate its exact volume. Instead, our metric is the volume of an ellipsoid that contains the data set, as shown in Figure 3.6. Finding the minimum-volume ellipsoid that contains the finite set C = {x_1, . . . , x_m} ⊆ R^n can be formulated as

    minimize    log det A^(−1)
    subject to  ‖A x_i + b‖_2 ≤ 1,   i = 1, . . . , m    (3.14)

where the variables are A ∈ S^n and b ∈ R^n, and A ≻ 0 is an implicit constraint [2]. Problem (3.14) can be solved as a semidefinite program (SDP), which is also a convex optimization problem. From the solution, the volume of the ellipsoidal approximation of the design space is proportional to det A^(−1), so we use V = log det A^(−1) as the metric.
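As a rough sketch (again in Python with cvxpy rather than the MATLAB used in this work), problem (3.14) and the resulting metric can be computed as follows, assuming the data points are the rows of X.

```python
import numpy as np
import cvxpy as cp

def min_volume_ellipsoid(X):
    """Solve (3.14): minimum-volume ellipsoid {x : ||A x + b||_2 <= 1}
    covering the rows of X."""
    m, n = X.shape
    A = cp.Variable((n, n), PSD=True)   # symmetric, positive (semi)definite
    b = cp.Variable(n)
    cons = [cp.norm(A @ X[i] + b, 2) <= 1 for i in range(m)]
    # minimizing log det A^{-1} is equivalent to maximizing log det A
    cp.Problem(cp.Maximize(cp.log_det(A)), cons).solve()
    V = -np.log(np.linalg.det(A.value))  # metric V = log det A^{-1}
    return A.value, b.value, V
```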

3.4.5 Example of outlier classification

The algorithms described above are implemented in MATLAB and tested on the VOV monomial model of the TSMC 0.18-um NMOS device. Figure 3.7 illustrates the procedure of the suggested method. First, we fit the model equation via L1 optimization in order to minimize the average error; the resulting error histogram has a thin outlier tail.


Figure 3.7: Model refinement via outlier classification. Simulated or measured data are first fit via L1 minimization; the resulting outlier information drives the outlier classification process, which produces a convex filter function f(x); the model is then re-optimized via Linf minimization to yield the final model equation.

Figure 3.8: Application example on the VOV monomial model. From 961 simulated data points, the L1-minimization fit gives a maximum fitting error of 43.7% and an average error of 5.0%, with 95 outliers; the outlier classification process produces a convex filter function with only one misclassified data point; re-optimization via Linf minimization over the filtered domain reduces the maximum error to 9.4% and the average error to 4.15%.


We then proceed to outlier classification, which produces the best convex function that filters the data, as described in (3.13). After obtaining the filtering function, we re-optimize the model via Linf minimization to reduce the worst-case modeling error. The method has been applied to the VOV monomial model of a saturated NMOS device in TSMC 0.18-um technology. The result in Figure 3.8 shows that, given 95 outliers out of 961 data points, we were able to produce a discriminating convex function with only one misclassified data point.
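The two fitting passes in Figure 3.7 reduce, in the log domain, to linear L1 and Linf approximation problems (see Appendix A). The sketch below, in Python with cvxpy and hypothetical array names, illustrates both passes; log-domain error is used here as a proxy for percentage error.

```python
import numpy as np
import cvxpy as cp

def fit_monomial(W, L, Ids, Vds, y, p):
    """Fit y ~ c * W^a1 * L^a2 * Ids^a3 * Vds^a4 with an Lp fit in the log domain
    (p = 1 for the average-error pass, p = np.inf for the worst-case re-fit)."""
    A = np.column_stack([np.ones_like(W), np.log(W), np.log(L),
                         np.log(Ids), np.log(Vds)])
    theta = cp.Variable(5)                       # [log c, a1, a2, a3, a4]
    cp.Problem(cp.Minimize(cp.norm(A @ theta - np.log(y), p))).solve()
    return theta.value

# Pass 1: L1 fit over all data, then run the outlier classification of (3.13).
# Pass 2: Linf re-fit using only the points kept by the convex filter f(x) <= -1.
```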

3.5 Summary

This chapter has described device modeling for GP-based circuit optimization in a comprehensive manner. We proposed piecewise-linear function fitting to improve device modeling accuracy. For functions with large remaining errors, we introduced an outlier classification method that optimally constrains the design space to minimize the error.


CHAPTER 4 GP Description Technique

As mentioned in chapter 2, GP prediction error also stems from bias estimation errors. This chapter presents several GP description techniques that help reduce GP prediction errors. Specifically, section 4.1 illustrates the need for posynomial equality constraints and demonstrates a proposed solution. Section 4.2 presents a GP description style that is more robust to the inherent modeling errors. Section 4.3 summarizes the chapter.

4.1 Bias description as posynomial equality constraints

Describing bias conditions often requires equalities. One of the most common bias conditions is KVL, i.e., ΣV = 0, which is an equality condition. For instance, the bias conditions for Figure 4.1 require that the sum of the drain-source voltages equal VDD.


Figure 4.1: Common-source amplifier with a PMOS current-source load.

Because VDS is a design variable in the GP device model, this constraint becomes the following posynomial equality

    (1/VDD) · (VDS1 + VDS2) = 1.    (4.1)

In general, GP cannot deal with posynomial equality constraints. In some very special cases, however, a constraint of the form h(x) = 1, where h is a posynomial function of the variable x, can be handled. We explore the idea as follows. Consider the optimization problem

    minimize    f_0(x)
    subject to  f_i(x) ≤ 1,   i = 1, . . . , m
                h(x) = 1    (4.2)

where the f_i(x) and h(x) are posynomial functions of the variable x. Unless h is a monomial, this is not a GP. Now consider the related problem


    minimize    f_0(x)
    subject to  f_i(x) ≤ 1,   i = 1, . . . , m
                h(x) ≤ 1    (4.3)

where the posynomial equality has been relaxed to a posynomial inequality. This problem is of course a GP. Now suppose we can guarantee that at any optimal solution x* of problem (4.3) we have h(x*) = 1, i.e., the inequality h(x) ≤ 1 is active at the solution. Then by solving the GP (4.3) we essentially solve the non-GP problem (4.2). One can show that a sufficient condition is the existence of a variable x_r such that

f0 is monotonically increasing in xr

f1, . . . , fm are nonincreasing in xr

h is monotonically decreasing in xr

The proof is as follows. Because we want to minimize f_0, the optimizer pushes x_r as small as possible, since f_0 decreases as x_r decreases. The functions f_1, . . . , f_m are nonincreasing in x_r, so making x_r smaller does not degrade the inequalities f_i(x) ≤ 1. What limits the decrease of x_r is h(x) ≤ 1, because reducing x_r increases h; at some point we cannot push x_r any lower without violating h(x) ≤ 1. Therefore h(x) ≤ 1 is always active at the optimum. Likewise, one can show that the


exact same principle applies to the following case.

f0 is monotonically decreasing in xr

f1, . . . , fm are nondecreasing in xr

h is monotonically increasing in xr

We refer to this special property as monotonicity. By utilizing monotonicity, we can activate KVL equalities in most cases.

Using the example in Figure 4.1, we consider the following problem described in

(4.4)~(4.6), which is a part of the CS-amplifier design problem written as GP.

    minimize    (1/gm1) · (gds1 + gds2)    (4.4)

    subject to  (1/VDD) · (VDS1 + |VDS2|) ≤ 1    (4.5)

                VDSAT1 / Vout,min ≤ 1    (4.6)

. . .

Constraint (4.5) is always active as long as the other inequalities and the objective, such as (4.4) and (4.6), improve or do not degrade as VDS,M1 or |VDS,M2| grow. In this example, if gds1, 1/gm1, and VDSAT1 decrease as VDS,M1 grows, then (4.4) and (4.6) only improve as VDS1 grows. Therefore, at the optimum, (4.5) is always active.
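The sketch below illustrates the argument on a toy version of the problem in cvxpy's geometric-programming mode. The device models here are hypothetical monomials chosen only so that the monotonicity conditions hold; they are not the fitted models used elsewhere in this work.

```python
import cvxpy as cp

VDD, Vout_min = 1.2, 0.3
Vds1 = cp.Variable(pos=True)          # VDS of M1
Vds2 = cp.Variable(pos=True)          # |VDS| of the PMOS load M2

# Hypothetical monomial models, all decreasing in their own VDS:
inv_gm1 = 0.5 * Vds1**-0.2            # 1/gm1
gds1    = 1e-3 * Vds1**-0.5
gds2    = 1e-3 * Vds2**-0.5
Vdsat1  = 0.2 * Vds1**-0.1

prob = cp.Problem(
    cp.Minimize(inv_gm1 * (gds1 + gds2)),          # (4.4): inverse of the gain
    [(Vds1 + Vds2) / VDD <= 1,                     # relaxed KVL (4.5)
     Vdsat1 / Vout_min <= 1])                      # (4.6)-style constraint
prob.solve(gp=True)
print(Vds1.value + Vds2.value)   # approximately VDD: the relaxed equality is active
```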

This idea can be easily extended to activate more general KVL "equalities". A more detailed example is discussed in chapter 5 with the two-stage opamp circuit optimization.

Note that this implies that we must constrain the values of the exponents of the fitted model. In the previous example, the exponents of VDS in gds1, 1/gm1, and VDSAT1 must be nonpositive. However, this limitation does not degrade the accuracy of the model in the designs we have studied.

4.2 GP description style robust to the modeling error

In previous sections, we explored various methods to reduce prediction errors by building accurate device models and better problem formulations. In select cases, there are device parameters (most notably gds) for which modeling errors are relatively large even with PWL fitting. Also, in analog circuit design problems, there are unavoidable cases where monomials must be used for the equalities. We propose a GP description style that copes with these errors by expressing the design problem in a more robust fashion. The following gives two examples.

First, consider again the common-source amplifier design problem in Figure 4.1.


Suppose we would like to impose a lower bound on the small-signal gain. This requires the following GP constraint:

    Amin · (1/gm1) · (gds1 + gds2) ≤ 1    (4.7)

where Amin denotes the minimum small-signal gain.

Assume that the models of 1/gm1, gds1, and gds2 have maximum percentage modeling errors of α, β, and γ, respectively. This uncertainty in the model functions can be embedded as in (4.8), which remains a posynomial inequality:

    Amin · (1+α) · (1/gm1) · ((1+β)·gds1 + (1+γ)·gds2) ≤ 1    (4.8)

A similar method can also be applied to the bias conditions. As will be illustrated in chapter 5, current-mirroring devices have equal gate-overdrive voltages. For instance,

    VOV1 / VOV2 = 1.    (4.9)

With a maximum percentage modeling error δ in VOV, (4.9) can be modified to

    VOV1 / VOV2 = γ    (4.10)

where γ ∈ [(1−δ)/(1+δ), (1+δ)/(1−δ)]. We then solve a GP problem that simultaneously checks all cases of this equality: the two extreme cases of (4.10) and the nominal case.


    VOV1 / VOV2 = (1−δ)/(1+δ),
    VOV1 / VOV2 = 1,
    VOV1 / VOV2 = (1+δ)/(1−δ).    (4.11)

The solution guarantees that the minimum performance specifications are satisfied as long as the modeling error of VOV stays within ±δ.
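Schematically, and with hypothetical helper names, the enumeration of (4.11) looks like the following; each value of γ contributes its own copy of the bias equality and the associated performance constraints.

```python
def mirror_ratio_scenarios(delta):
    """Nominal and extreme values of gamma in (4.10)/(4.11) for a
    fractional VOV modeling error of +/- delta."""
    return [(1.0 - delta) / (1.0 + delta), 1.0, (1.0 + delta) / (1.0 - delta)]

# Hypothetical usage: each scenario adds the monomial equality Vov1/Vov2 == gamma
# together with the spec constraints, so the design meets the specs in every case.
# for gamma in mirror_ratio_scenarios(0.05):
#     constraints += build_scenario_constraints(mirror_ratio=gamma)
```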

It is noteworthy that this technique results in a bigger problem size, and potentially

causes over-design. The optimum is some (often slight) distance away from the true

optimum without modeling errors. To avoid significant over-design, the strategy should

be used judiciously for specifications that are more sensitive to modeling errors. By using

this method, modeling errors are addressed in a more predictable way.

4.3 Summary

In this chapter, we explored useful GP description techniques that help formulate GPs in a more complete and robust manner. Bias-related posynomial equalities are activated through the concept of monotonicity. We also showed a GP description style that embeds the inherent modeling errors so that performance requirements are still satisfied.


CHAPTER 5 Design Applications

Previous chapters have investigated several error-reduction methods in GP-based circuit optimization. This chapter applies those methods to real design examples. Sections 5.1 and 5.2 illustrate the design of a two-stage opamp and an integrated inductor via geometric programming, respectively.

5.1 Two stage OP-AMP design

The two-stage op-amp in Figure 2.1 is one of the most commonly used general-purpose opamps, and its design was formulated as a GP in [10]. In this work, it is rewritten and modified using the proposed error-reduction techniques.

We use two classes of transistor models. The first class covers devices whose bulk and source are connected to the same potential (M3, M4, M5, M6, M7, M8). These transistors have four design variables: width (W), length (L), drain current (IDS), and drain-source voltage (VDS). Device models are created as described in Table 3.1. The second class covers the transistors with VSB ≠ 0 (M1, M2). These transistors have the source-bulk voltage (VSB) as an additional design variable on top of the aforementioned four.

Overall, there are 36 design variables:

- W, L, IDS, VDS of M3 ~ M8. They are denoted as W3, L3, IDS3, VDS3, etc.

- W, L, IDS, VDS ,VSB of M1 and M2.

- The bias current Ibias

- Compensation capacitor Cc

Note that the compensation resistor Rc can be chosen such that the zero in the transfer function is canceled, and it is therefore not an independent design variable.

In describing the design constraints, we denote the type of model by a subscript; e.g., the piecewise-linear model of 1/gm in transistor M1 is denoted by 1/gm1,PWL and the monomial model of VOV in transistor M3 is denoted by VOV3,MON. Also, for PMOS devices we write |VDS|, |IDS|, and |VSB| as simply VDS, IDS, and VSB for notational simplicity.

5.1.1 Symmetry and matching


Differential circuits have identical devices. In this opamp, M1-M2 and M3-M4 pairs must

be identical. These are translated into

    W1/W2 = 1,  L1/L2 = 1,  IDS1/IDS2 = 1,  VDS1/VDS2 = 1,  VSB1/VSB2 = 1
    W3/W4 = 1,  L3/L4 = 1,  IDS3/IDS4 = 1,  VDS3/VDS4 = 1    (5.1)

Current mirrors usually employ the same lengths in order to avoid mirroring errors.

This translates into

    L5 = L7 = L8.    (5.2)

Equations (5.1) and (5.2) are monomial equalities and are handled directly by GP.

5.1.2 Biasing and circuit topology

The constraints below maintain all the devices in saturation.

    (VDSAT1,PWL + ΔVDSAT) / VDS1 ≤ 1
        . . .
    (VDSAT8,PWL + ΔVDSAT) / VDS8 ≤ 1    (5.3)

where ΔVDSAT is a predetermined margin in VDS that keeps the device away from the linear region; in this work it is set to 100mV. Since VDS is a design variable and posynomials are closed under multiplication, the inequalities in (5.3) are posynomial inequalities provided that VDSAT is a PWL function.

Diode-connected devices (M8,M3, and M4) have an additional constraint for biasing.

    VGS8,MON / VDS8 = 1
    VGS3,MON / VDS3 = 1
    VGS4,MON / VDS4 = 1    (5.4)

These equations are monomial equalities when a monomial model of VGS is used.

The drain-source voltages (VDS) in all transistors should satisfy KVLs which are

given by the circuit topology. There are three KVL loops that are associated with VDS of

transistors in the design.

    (|VDS5| + |VGS1,PWL|) / (VDD − Vin,cm) = 1,
    (VDS3 + |VDS1| + |VDS5|) / VDD = 1,
    (VDS6 + |VDS7|) / VDD = 1    (5.5)

Vin,cm is the common-mode voltage at the input. Because the equations in (5.5) are

posynomial equalities, we activate them by utilizing monotonicity as described in section

4.1. (5.5) can be rewritten as follows:


    (|VDS5| + |VGS1,PWL|) / (VDD − Vin,cm) ≤ 1,
    (VDS3 + |VDS1| + |VDS5|) / VDD ≤ 1,
    (VDS6 + |VDS7|) / VDD ≤ 1    (5.6)

To ensure that (5.6) is active at the optimum, we make sure that all the other inequalities and the objective function improve as the VDS terms in (5.6) grow.

Current-mirrored devices share the same gate-source voltage, or equivalently the same overdrive voltage. We represent this with the monomial equalities

    VOV8,MON = VOV5,MON = VOV7,MON.    (5.7)

KCLs are also specified as monomial equalities

    IDS1 = IDS2 = IDS3 = IDS4 = 0.5 · IDS5,
    IDS6 = IDS7.    (5.8)

Finally, the connection between the 1st and 2nd stages translates into

    VDS4 = VGS6,MON    (5.9)

which is a monomial equality.

5.1.3 Output swing


At the output, the voltage-swing specifications apply to M6 and M7. They keep these transistors in saturation under a large voltage swing at the output. The GP expressions are

    (VDSAT6,PWL + ΔVDSAT) / Vout,min ≤ 1
    (VDSAT7,PWL + ΔVDSAT) / (VDD − Vout,max) ≤ 1    (5.10)

which are posynomial inequalities.

5.1.4 Quiescent power

The quiescent power of this op-amp is given by

    P = VDD · (Ibias + IDS5 + IDS7)

which is a posynomial function of the design variables. We can therefore impose an upper limit on the power.

5.1.5 Gain specification

The gain of the opamp is given by

    A = (gm1 / (gds2 + gds4)) · (gm6 / (gds6 + gds7))    (5.11)


We specify our minimum gain specification as

    A ≥ Amin    (5.12)

which translates into

    Amin/A = Amin · (1/gm1) · (gds2 + gds4) · (1/gm6) · (gds6 + gds7) ≤ 1.    (5.13)

Since gds has a relatively large modeling error, we use the technique that takes into

account the modeling error uncertainty in this constraint as

    Amin · (1+αp) · (1/gm1,PWL) · ((1+βp)·gds2,PWL + (1+βn)·gds4,PWL)
         · (1+αn) · (1/gm6,PWL) · ((1+βp)·gds7,PWL + (1+βn)·gds6,PWL) ≤ 1    (5.14)

where αp, αn, βp, and βn are the maximum modeling errors (in %) of 1/gm and gds, respectively (the subscripts p and n denote PMOS and NMOS).

5.1.6 Phase margin and unity-gain bandwidth

The description of phase margin and unity-gain bandwidth specification as a posynomial

inequality is found in [10] and repeated here for completeness. Phase-margin is in general

associated with the poles in the system transfer function. Hence, we need to identify the

poles in the design before describing the phase margin specification.


We note that there are four poles in the signal path. The dominant pole p1 is given by

    p1 = (1/Av) · (gm1,MON / Cc).    (5.15)

Since the inverse of the small-signal gain, 1/Av, is a posynomial as shown in (5.13), p1 is a posynomial because we use a monomial model of gm1 and Cc is a design variable.

The first nondominant pole p2 stems from the capacitances at the output node. It is given by

    p2 = gm6,MON · Cc / (C1·Cc + C1·CLtot + Cc·CLtot)

where C1 and CLtot are defined as

    C1 = CGS6,MON + CJD2,MON + CJD4,MON + CGD2,MON + CGD4,MON

and

    CLtot = CL + CJD6,MON + CJD7,MON + CGD6,MON + CGD7,MON,

respectively. Note that p2 is not a posynomial, but its inverse 1/p2 is; in other words, p2 is an inverse-posynomial.

The mirror pole p3 is given by

    p3 = gm3,MON / (CGS3,MON + CGS4,MON + CJD1,MON + CJD3,MON + CGD1,MON)

which is an inverse-posynomial, like p2.


Lastly, the compensation pole is given by

    p4 = gm6,MON / C1

which is an inverse-posynomial.

Given the pole descriptions, the phase margin of the opamp is given as

    PM = π − ∠H(jωc) = π − Σ_{i=1..4} arctan(ωc / pi)

(5.16)

where the unity-gain bandwidth ωc is the frequency at which |H(jωc)| = 1. In most cases we would like to place a lower bound on the phase margin, typically between 30° and 60°, to ensure stability for the intended feedback configuration. The actual constraint takes the following form:

    PM = π − Σ_{i=1..4} arctan(ωc / pi) ≥ PMmin
    ⇒ Σ_{i=1..4} arctan(ωc / pi) ≤ π − PMmin
    ⇒ Σ_{i=2..4} arctan(ωc / pi) ≤ π/2 − PMmin
    ⇒ Σ_{i=2..4} ωc / pi ≤ π/2 − PMmin.    (5.17)

In (5.17), we used two reasonable approximations. First, we used arctan x ≈ x, although it is possible to develop a more accurate convex approximation of arctan x [10]; this approximation is quite accurate when the angle is below roughly 25°. Second, we assumed arctan(ωc/p1) ≈ π/2, because the phase contribution of the dominant pole at ω = ωc is nearly 90° for a reasonably large open-loop gain. Given these approximations, and noting that the unity-gain bandwidth of the two-stage opamp can be approximated by the monomial

    ωc = gm1,MON / Cc,

one can verify that the phase-margin specification in (5.17) is a posynomial inequality and thus compatible with GP.
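For reference, the final inequality in (5.17) can be packaged as a single helper such as the sketch below, assuming ωc is the monomial gm1,MON/Cc and the inverse nondominant poles are available as posynomial expressions (the names here are illustrative, not from the thesis code).

```python
import numpy as np

def phase_margin_constraint(wc, inv_p2, inv_p3, inv_p4, pm_min_deg=60.0):
    """Posynomial form of (5.17): sum_{i=2..4} wc/p_i <= pi/2 - PM_min."""
    margin = np.pi / 2 - np.deg2rad(pm_min_deg)   # positive for PM_min < 90 degrees
    return wc * (inv_p2 + inv_p3 + inv_p4) <= margin
```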

5.1.7 Model definition

The monomial and PWL models used in formulating the problem must also be defined within the GP. Monomial models can simply be written as monomial equalities. For instance, the VOV8 monomial model used in (5.7) can be defined by the following monomial equality:

    VOV8 = 1.0354×10^(−3) · W8^(−0.6254) · L8^(0.6589) · IDS8^(0.6064) · VDS8^(0.0417).    (5.18)

For PWL models, since a PWL function is the maximum of m monomial functions, we need an equivalent GP-compatible formulation of the "max" operation. For instance, in the gain specification (5.14) we employ a PWL model for gds of the form


    gds = max_{i=1,...,m} { c_i · W^(α_i1) · L^(α_i2) · IDS^(α_i3) · VDS^(α_i4) }.    (5.19)

The equivalent formulation of (5.19) is as follows

    gds ≥ c_1 · W^(α_11) · L^(α_12) · IDS^(α_13) · VDS^(α_14)
    gds ≥ c_2 · W^(α_21) · L^(α_22) · IDS^(α_23) · VDS^(α_24)
        . . .
    gds ≥ c_m · W^(α_m1) · L^(α_m2) · IDS^(α_m3) · VDS^(α_m4).    (5.20)

Clearly (5.20) is compatible with GP because it is a set of monomial inequalities (the equivalence of (5.19) and (5.20) follows from the concept of generalized GP described in section 1.1.5, specifically (1.14)). Note that m inequality constraints are needed in (5.20) to write the equivalent formulation of (5.19).
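In a tool that accepts GPs directly (the sketch below uses cvxpy's geometric-programming mode; the coefficient and exponent arrays are hypothetical fitted values, not those of this work), (5.20) corresponds to introducing an auxiliary variable bounded below by each monomial term.

```python
import cvxpy as cp

def pwl_model_expression(c, alpha, W, L, Ids, Vds):
    """Auxiliary variable gds_hat with the m monomial lower bounds of (5.20).
    c: length-m coefficient array; alpha: m x 4 exponent array."""
    gds_hat = cp.Variable(pos=True)
    cons = [gds_hat >= c[i] * W**alpha[i, 0] * L**alpha[i, 1]
                        * Ids**alpha[i, 2] * Vds**alpha[i, 3]
            for i in range(len(c))]
    # When the problem improves as gds_hat decreases (as in the gain constraint
    # (5.14)), the largest monomial term is active at the optimum and gds_hat
    # equals the PWL model value of (5.19).
    return gds_hat, cons
```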

5.1.8 Optimization result

Table 5.1 shows the result of an optimization for maximum gain-bandwidth (GB) product. The GP-PWL column is the result of a GP optimization using PWL models wherever possible, and the SPICE-PWL column is the SPICE simulation based on the GP-PWL predicted variables. To compare with the prediction errors caused by less-accurate monomial device models, GP-MON is a GP optimization that uses the same design constraints as GP-PWL except that monomial device models are used everywhere, and SPICE-MON is the corresponding SPICE simulation. Note that all of the specifications are met in the GP-PWL case, whereas there is a large violation of the gain specification in the GP-MON case.

  Performance measure       Spec.       GP-PWL    SPICE-PWL    GP-MON    SPICE-MON
  Output swing [V]          ≥ 1.4       1.4       1.46         1.55      1.51
  Quiescent power [mW]      ≤ 0.3       0.3       0.294        0.3       0.3
  Open-loop gain [dB]       ≥ 70        73.55     71.23        73.74     56.88
  GB product [MHz]          Maximize    81.66     77.92        141.2     145.7
  Phase margin              ≥ 60°       60°       65.47°       60°       62.73°

Table 5.1: GP predictions and corresponding SPICE simulations for both PWL-based and monomial-based optimization.

Figure 5.1 illustrates the discrepancy between GP prediction and SPICE simulation for both the GP-PWL and GP-MON cases over different DC power constraints. GP-PWL achieves significantly better modeling accuracy. Interestingly, GP-MON and its corresponding SPICE simulation show a larger GB product than their GP-PWL counterparts. However, Figure 5.2 reveals that the higher GB product comes at the cost of significantly violating the gain specification.

Figure 5.1: Optimal GB product from GP and SPICE.

Figure 5.2: DC gain from GP and SPICE.

5.2 Integrated inductor design

A second example uses an integrated inductor as the design problem. A common goal in

inductor design is to maximize the quality factor for a given inductance with a lower

bound on the self-resonance frequency. The design problem can be cast as the following

,min

    maximize    QL
    subject to  QL ≥ QL,min
                L = Lreq
                ωsr ≥ ωsr,min.    (5.21)

(5.21)

We create a new PWL model of 1/QL, 1/L, and 1/wres as a function of dout(outer

diameter),w(turn width),s(turn spacing),n(number of turns) and f(frequency). This

extends the models in [11] by including the frequency dependency which has a

significant impact on inductor behavior for circuits. The problem (5.21) can be rewritten

as the following GP


Figure 5.3: Quality factor vs Inductance in GP and ASITIC

    minimize    1/QL
    subject to  QL,min · (1/QL) ≤ 1
                Lreq · (1/L) ≤ 1
                ωsr,min · (1/ωsr) ≤ 1.    (5.22)

Because we use a PWL inductance model, we must impose an inequality in place of the equality L = Lreq. Since the optimizer tries to find the smallest possible inductance in order to maximize the quality factor, L is driven to its lower bound Lreq, and the inequality is therefore active.

Figure 5.3 shows the GP prediction and the ASITIC simulation of the maximum quality factor versus inductance at f = 1GHz. Again, significantly improved prediction accuracy is observed for the GP that utilizes the PWL model.

5.3 Summary

In this chapter, we applied the PWL modeling method and several GP description techniques to the optimization of a two-stage op-amp and an integrated inductor. Compared with designs based on monomial device models, the PWL device models and the new description techniques enable significantly better agreement with SPICE and ASITIC simulations.


CHAPTER 6 Conclusion

This research work has presented several techniques to reduce errors in GP-based analog

circuit design optimization.

6.1 Contributions

First, this research work introduced the piecewise-linear function as an alternative to the posynomial function in geometric-programming-based analog circuit optimization and presented a method that creates and refines device models compatible with geometric programming. This method enables us to create accurate active and passive device models in today's technologies.

Second, several improvements to the circuit formulation in GP are described. The

suggested GP description techniques enable us to bound prediction errors originating

from both bias estimation errors and modeling errors.


6.2 Future work

Because this work provides a practical and complete methodology for circuit optimization, several interesting extensions can be made.

6.2.1 Device-level optimization

One possible area is to investigate the possibility of optimizing the devices themselves for a specific analog circuit cell. The device design community has reported that transistor performance can be improved significantly by modulating process parameters such as doping concentration (NSUB) and oxide thickness (TOX), yet the analog and mixed-mode circuit design community tends to use only a few types of transistors, i.e., core and/or I/O devices, which are mostly fixed by the requirements of digital integrated circuits.

If we are able to develop a good GP-compatible device model that depends not only on circuit design variables (i.e., W, L, IDS, and VDS) but also on process-parameter variables (i.e., NSUB and TOX), we will truly be able to push the performance of analog and mixed-mode circuits to the extremes of a given technology by optimizing all the variables that affect performance. Because the previous literature has already shown that many varieties of circuits can be optimized as GPs, we can easily explore the possible improvements as soon as such a device model is created. If the improvement from this approach, which is not yet known, turns out to be significant, it might revolutionize the way analog and mixed-mode circuits are currently designed.

6.2.2 Predicting the performance in future technology

This extension is also based on the process-dependent GP device model. Since the model has scalable process parameters as variables, we can easily extrapolate the performance of various mixed-mode circuits according to a future scaling scenario. Such prediction is also possible with the current BSIM-model-based approach, but only for device characteristics, not for more complex circuit characteristics.

More concretely, with BSIM models we may be able to predict how fast a ring oscillator would run in the next technology, but it is much more difficult to predict how much faster a pipelined A/D converter would operate while meeting all of its current design constraints. GP-based optimization combined with a process-dependent model can give a much more reliable prediction of the performance of various analog and mixed-mode circuits in future technologies.


APPENDIX A Norm approximation in function fitting

In this thesis, norms have been used to measure fitting errors. This appendix gives a brief overview of the norm approximation problem in function fitting. A more complete description can be found in [2].

The norm approximation problem is the unconstrained problem

    minimize    ‖Ax − b‖

where A ∈ R^(m×n) and b ∈ R^m are given data that we want to fit, x ∈ R^n is the variable, and ‖·‖ is a norm on R^m. A solution of this problem gives an approximation Ax ≈ b. The norm approximation problem is a convex problem and is solvable [2].

A.1 L2-norm approximation

The most common norm approximation problem involves the Euclidean or L2 norm. By squaring the objective, we obtain an equivalent problem called the least-squares approximation problem,

    minimize    ‖Ax − b‖2² = r1² + r2² + ··· + rm²

where the objective is the sum of the squares of the residuals. This problem can be solved analytically by expressing the objective as the convex quadratic function

    f(x) = xᵀAᵀAx − 2bᵀAx + bᵀb.

A point x minimizes f(x) if and only if

    ∇f(x) = 2AᵀAx − 2Aᵀb = 0,

which has the solution x = (AᵀA)^(−1)Aᵀb when A has full column rank.
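A small numerical check of the closed-form solution (a sketch assuming A has full column rank):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 3))
b = rng.standard_normal(20)

x_closed_form = np.linalg.solve(A.T @ A, A.T @ b)   # x = (A^T A)^{-1} A^T b
x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)     # library least-squares solver
assert np.allclose(x_closed_form, x_lstsq)
```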

A.2 Linf and L1 norm approximation

The Linf-norm approximation is the minimization problem

    minimize    ‖Ax − b‖∞ = max{ |r1|, |r2|, ..., |rm| }

which is also called the Chebyshev approximation problem, or the minimax approximation problem, since we minimize the maximum absolute residual. The Chebyshev approximation problem can be cast as the following equivalent linear program,

approximation problem can be cast as a following equivalent linear program,

Page 97: KIM

81

minimize subject to ,

tt Ax b t− ≤ − ≤1 1

with variables nx ∈ R and t ∈ R .

The L1-norm approximation is the following minimization problem

    minimize    ‖Ax − b‖1 = |r1| + |r2| + ··· + |rm|

which is called the sum of (absolute) residuals approximation problem. Like the

Chebyshev approximation problem, the L1-norm approximation problem can be cast as a

linear program,

    minimize    1ᵀt
    subject to  −t ⪯ Ax − b ⪯ t

with variables x ∈ R^n and t ∈ R^m.
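Both linear programs translate directly into a few lines of Python with cvxpy (a sketch with illustrative function names, not the implementation used in this work):

```python
import cvxpy as cp

def chebyshev_fit(A, b):
    """Linf approximation: minimize t subject to -t*1 <= Ax - b <= t*1."""
    x, t = cp.Variable(A.shape[1]), cp.Variable()
    cons = [A @ x - b <= t, A @ x - b >= -t]
    cp.Problem(cp.Minimize(t), cons).solve()
    return x.value

def l1_fit(A, b):
    """L1 approximation: minimize 1^T t subject to -t <= Ax - b <= t."""
    m, n = A.shape
    x, t = cp.Variable(n), cp.Variable(m)
    cons = [A @ x - b <= t, A @ x - b >= -t]
    cp.Problem(cp.Minimize(cp.sum(t)), cons).solve()
    return x.value
```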

The difference among these three norms lies in where they place emphasis when fitting the data. The L1-norm penalty places relatively more weight on small residuals, the Linf norm places all of its weight on the worst-case residual, and the L2 norm lies between the two.

Besides the three norms shown above, one can develop a particular penalty function according to the requirements of the fit. The penalty function approximation problem has the form

    minimize    φ(r1) + ··· + φ(rm)
    subject to  r = Ax − b


where φ: R → R is called the (residual) penalty function. Assuming that φ is convex, the penalty function approximation problem can be cast as a convex optimization problem.


REFERENCES

[1] B. Antao, “Trends in CAD of analog ICs,” IEEE Circuits Devices Magazine, vol. 12,

no. 5, pp. 31-41, Sept. 1996.

[2] S. Boyd and L. Vandenberghe, Convex Optimization, Cambridge University Press,

2003.

[3] L. Carley, G. Gielen, R. Rutenbar, and W. Sansen, “Synthesis tools for mixed signal

ICs: Progress on front-end and back-end strategies,” in Proceedings of 33rd Annual

Design Automation Conference, 1996, pp. 298-303.

[4] D. Colleran, C. Portmann, A. Hassibi, C. Crusius, S. Mohan, S. Boyd, T. Lee and M.

Hershenson, “Optimization of Phase-Locked Loop Circuits via Geometric

Programming,” in Proceedings of IEEE Custom Integrated Circuit Conference,

2003, pp. 377 – 380.

[5] W. Daems, G. Gielen, and W. Sansen, “Simulation-Based Generation of

Posynomial Performance Models for the Sizing of Analog Integrated Circuits,”


IEEE Transactions on Computer-Aided Design, vol. 22, No. 5, pp.517-534, May

2003.

[6] J. Fishburn and A. Dunlop, “TILOS: A posynomial programming approach to

transistor sizing,” in Proceedings of IEEE International Conference on Computer-

Aided Design, 1985, pp. 326-328.

[7] A. Hassibi and M. Hershenson, “Automated Optimal Design of Switched-Capacitor

Filters”, in Proceedings of Design, Automation and Test in Europe Conference and

Exhibition, 2002, pp. 1111.

[8] M. Hershenson, “CMOS Analog Circuit Design via Geometric Programming,“ Ph.D.

dissertation, Stanford University, 1999.

[9] M. Hershenson, “Design of pipeline analog-to-digital converters via geometric

programming”, in Proceedings of IEEE International Conference on Computer-

Aided Design, Nov. 2003, pp. 317-324.

[10] M. Hershenson, S. Boyd and T. Lee, “Optimal Design of a CMOS OpAmp via

Geometric Programming,” IEEE Transactions on Computer-Aided Design, vol. 20,

No. 1, pp. 1-21, Jan. 2001.


[11] M. Hershenson, S. Mohan, S. Boyd, and T. Lee, “Optimization of Inductor Circuits

via Geometric Programming”, in Proceedings of 1999 Design Automation

Conference, Jun. 1999, pp.994-998.

[12] MOSEK, http://www.mosek.com.

[13] J. Kleinhans, G. Sigl, F. Johannes and K. Antreich, “GORDIAN: VLSI placement

by quadratic programming and slicing optimization,” IEEE Transactions on

Computer-Aided Design, vol. 10, pp. 356-365, March 1991.

[14] J. Lee, J. Hatchter, and C. K. K. Yang, “Evaluation of Fully-Integrated Switching

Regulator for CMOS process technologies,” in Proceedings of 2003 International

Symposium on SOC, to appear.

[15] P. Maulik, L. R. Carley, and R. A. Rutenbar, “Integer programming based topology

selection of cell-level analog circuits,” IEEE Transactions on Computer-Aided

Design, vol. 14, pp. 401-412, Apr. 1995.

[16] P. Maulik, L. R. Carley, and D. J. Allstot, “Sizing of cell-level analog circuits using

constrained optimization techniques,” IEEE Journal of Solid-State Circuits, vol. 28,

pp. 233-241, Mar. 1993.


[17] A. Niknejad, R. G. Meyer, “Analysis, Design, and Optimization of Spiral Inductor

and Transformers for Si RF IC’s,” IEEE Journal of Solid-State Circuits, vol. 33, No.

10, pp. 1470-1481, Oct. 1998.

[18] F. Young, C. Chu, W. Luk, and Y. Wong, “Handling soft modules in general

nonslicing floorplan using lagrangian relaxation,” IEEE Transactions on Computer-

Aided Design, vol. 20, pp. 687-692, May 2001.

