Computing system roots and their sensitivities
Narendra K. Jain, Ph.D., and Prof. Kishore Singhal, Eng.Sc.D., Sen. Mem. I.E.E.E.
Indexing terms: Algorithms, Digital computers and computation, Sensitivity, Poles and zeros
Abstract: The problem of determining system roots and their sensitivities is considered. Simple, general, practical and reliable algorithms are proposed for their accurate and efficient computation. It is shown that root sensitivities are computed virtually free of cost as a byproduct of the algorithms. Examples are provided to demonstrate applications.
1 Introduction
The behaviour of a linear time-invariant system is completely determined by the location of its poles and zeros, or the roots [1]. Variations in system parameter values lead to changes in the location of the roots and, consequently, the system response. In many analysis and design optimisation problems [2, 3] it is of interest to know the location of the roots and their variations with respect to changes in parameter values. A commonly used quantitative measure for root variations is the sensitivity [4]. Whereas the sensitivity determines the effect of small parameter variations, the root locus shows the effect of large perturbations in a parameter value. It also gives us the approximate values of a parameter at which the system is in a critical state.
The problem of determining roots is mathematically equivalent to the generalised eigenvalue problem [5]. One approach for solving an eigenvalue problem is to find the zeros of the characteristic equation, i.e. the zeros of the determinant of the 'system-matrix' [6-9]. Although computation of the determinant is easy for a specified complex frequency, evaluation of its derivative is an expensive process, rendering use of quadratically convergent methods impractical. Another equally accepted approach [10] is to use the QR-algorithm [11] to extract eigenvalues. This implicitly requires that the generalised eigenproblem be first transformed to an equivalent standard eigenproblem [12]. Similar reasoning applies to the QZ-algorithm [13], except that the linear eigenproblem can be solved without transformation. Moreover, the QR and QZ algorithms cannot be used for continuum systems, for large sparse problems, and for system analysis and design problems where root computation is not the sole objective.
Papoulis [14] considered the problem of determining variations in simple and multiple system roots caused by small perturbations in parameter values, using the characteristic equation and Taylor's expansion formula. His approach, although simple, is computationally expensive, since derivatives of the determinant are required. Lancaster [6] gave a method for computing sensitivities of the root or the eigenvalue from the knowledge of the eigenvalue in question and the corresponding right and left eigenvectors. Fox and Kapoor [15] and Plaut and Huseyin [16] described methods to compute higher-order sensitivities for the linear eigenvalue problem. Recently, Cardani and Mantegazza [17] described a fairly general technique for computing the first and higher-order
Paper 2668G, first received 20th May 1982 and in revised form 13th June 1983
The authors are with the Department of Systems Design, University of Waterloo, Waterloo, Ontario, Canada N2L 3G1
sensitivities without the knowledge of the adjoint system. Pan and Chao [18] suggested a computer-aided technique for plotting the root loci for control-system problems. The technique was based on the concept of continuation [19], in which the solution of an equivalent initial value problem is sought.
An integrated methodology for computing roots and their sensitivity was proposed recently by Singhal [20]. The method appears to have superior performance compared to previous techniques and is applicable to discrete as well as continuum systems. It solves the root-finding problem ingeniously by defining, and reducing to zero, a scalar function whose derivative is easily computed. It was shown that the roots of the scalar function are eigenvalues of the generalised eigenproblem. Singhal also outlined possible applications of his method to many design problems, such as root-locus, stability boundaries and minimum-sensitivity design with specified roots.
In this contribution, we present further work done in this area. In Section 2, the equivalence of the root location and eigenvalue problems is established. The fundamental concept which gives rise to a class of decomposition methods is outlined in Section 3. The triangular, unitary and singular value decomposition methods are discussed in Sections 4 through 6. An analytic example illustrating the application of the triangular decomposition method is included in Section 7. An important design application, root-locus plotting, is described and demonstrated through an example in Section 8.
2 Problem formulation
Consider the linear system of equations
F(s, h)w(h) = b(s, h)    (1)
where h is a vector of parameter values, F the 'system coefficient matrix', w the vector of unknowns, b the vector of excitations and s the Laplace transform variable. Let the 'output' t be a linear combination of the components of w, i.e.
t = c'w (2)
where c is a vector of constants and the prime denotes transposition. From eqns. 1 and 2, we write
t = c'F⁻¹b    (3)

and use the reciprocity theorem [21] to obtain

t = det [F  b; −c'  0] / det F    (4)
IEE PROCEEDINGS, Vol. 130, Pt. B, No. 5, OCTOBER 1983
Then it follows that the zeros, s = z_i, of t must satisfy

det [F  b; −c'  0] = 0    (5)

and the poles, s = p_i, must satisfy

det F = 0    (6)
Observe that eqns. 5 and 6 are equivalent to solving thegeneralised eigenvalue problem
A(s, h)x(h) = 0 (7)
where the 'system-matrix' A is the matrix F for pole computations and the augmented matrix of eqn. 5 for zero computations. The vector x is known as the right eigenvector of A. There also exists the left eigenvector y, such that
y*(h)A(s, h) = 0    (8)
where * denotes conjugate transposition. To determine roots, we can equivalently solve the characteristic equation

det A(s, h) = 0    (9)

3 Theoretical development
3.1 Basic concept
It is undesirable to solve eqn. 9 iteratively, as it is expensive to compute derivatives of det A. The proposed decomposition methods are arrived at by solving a related scalar equation
φ(s, h) = 0    (10)

whose zeros are the same as those of eqn. 9, and the derivative of φ is easily computed. In order to find such a function, let
B(s, h) = T(s, h)A(s, h)P(s, h)    (11)
where T and P are permutation matrices; their role will be defined later. Now, from the definition of the inverse of a matrix, we know that
B⁻¹ = adjoint (B)/det B    (12)
Let us consider the last diagonal element of the above matrix equality:
e_n* B⁻¹ e_n = [e_n* adjoint (B) e_n]/det B    (13)

Here, e_n is the nth column of the identity matrix, where n is the dimension of B (or A). From eqn. 13,
det B = pφ    (14)

where

φ ≜ 1/(e_n* B⁻¹ e_n)    (15)

p ≜ e_n* adjoint (B) e_n    (16)
It follows from eqn. 14 that, at any eigenvalue of B (and therefore that of A), either φ or p must vanish. However, p represents the cofactor of the last diagonal element of B formed by the first n − 1 rows and columns, and, therefore, at simple roots, proper pivoting can ensure a nonzero p. Let us assume that T and P provide such permutations. The eigenvalues of A can thus be obtained by solving
φ(s, h) = 0    (17)
It may be noted in passing that φ, being a function of the elements of A, represents an analytic function, and is therefore differentiable.
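The defining property of φ can be checked numerically. The following sketch (a hypothetical 2 × 2 standard eigenproblem, B = M − sI with T = P = I, not taken from the paper) shows φ of eqn. 15 vanishing at an eigenvalue where the cofactor p is nonzero; for the other eigenvalue of this M the cofactor itself vanishes, which is exactly the case the permutations T and P are meant to avoid.

```python
import numpy as np

def phi(M, s):
    # eqn. 15: phi = 1/(e_n* B^{-1} e_n) with B = M - s*I (T = P = I)
    n = M.shape[0]
    en = np.zeros(n); en[-1] = 1.0
    return 1.0 / (en @ np.linalg.solve(M - s * np.eye(n), en))

M = np.array([[3.0, 1.0],
              [0.0, 2.0]])   # hypothetical; eigenvalues 2 and 3, and the
                             # cofactor p = 3 - s is nonzero at s = 2

print(abs(phi(M, 2.0 + 1e-9)))   # ~1e-9: phi -> 0 at the eigenvalue s = 2
print(abs(phi(M, 0.0)))          # 2.0: nonzero away from the eigenvalues
```

Note that for s near the other eigenvalue, s = 3, both det B and the cofactor p vanish, so φ stays finite there; a row/column permutation restores a nonzero p.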
3.2 Root computation
Our objective is to use eqn. 17 instead of eqn. 9 for computing the eigenvalues. At first sight, it might look inappropriate, as the expression for φ involves A⁻¹. However, as will be seen in the sequel, explicit determination of A⁻¹ is not required. Quadratic and higher-order convergent formulas require derivative(s) of φ with respect to s. For computing sensitivities of the eigenvalue, derivatives are also required with respect to parameters. We present our results for computing the 1st-order derivatives of φ in the form of the following theorem:
Theorem 3.1: Let f and g be two nonzero vectors, such that

Bf = φ e_n    (18)

B*g = φ̄ e_n    (19)

where the overbar indicates the complex conjugate. Also, let φ,s and φ,j represent the partial derivatives of φ with respect to s and the jth component of h, h_j, respectively. Then

φ,θ = g* B,θ f    (20)

where θ represents either s or the subscript j of h_j.
Proof: We know that

B B⁻¹ = I    (21)

Differentiate it with respect to θ, and rearrange:

B,θ = −B(B⁻¹),θ B    (22)

Pre- and post-multiply by g* and f to obtain

g* B,θ f = −g* B (B⁻¹),θ B f    (23)

Using eqns. 18 and 19, this simplifies to

g* B,θ f = −φ² e_n* (B⁻¹),θ e_n = φ,θ    (24)
which is the required result. Our proof will be complete, provided we show that the vectors f and g exist, even at the solution. Note from eqns. 15, 18 and 19 that, at the solution, f and g are, respectively, the right and left eigenvectors of B and are normalised to have their last component unity. Also observe from eqns. 15 and 18 that

f = φ B⁻¹ e_n    (25)

or, from the reciprocity theorem [21],

f = adjoint (B) e_n / det [B  e_n; −e_n*  0]    (26)
We now show that, at simple roots, the augmented matrix in the denominator of the right-hand side is always nonsingular, even if B is singular. For this, let d be a vector and α a scalar, such that

[B  e_n; −e_n*  0] [d; α] = 0    (27)

and

[d'  α]' ≠ 0    (28)

Consider three different cases of inequality 28:
(i) d ≠ 0 but α = 0
(ii) d = 0 but α ≠ 0
(iii) d ≠ 0 and α ≠ 0
(i) From eqn. 27,

Bd = 0,   e_n* d = 0    (29)

which shows d to be a right eigenvector of B with its last element zero. However, as stated earlier, at convergence, f represents the right eigenvector with f_n = 1. Therefore, d cannot be a multiple of f, unless it is the null vector. But this contradicts the assumption.
(ii) Again, from eqn. 27,
α e_n = 0    (30)

or α = 0. However, this also contradicts the assumption.
(iii) Once again, from eqn. 27,
Bd + α e_n = 0    (31)

e_n* d = 0    (32)

Premultiply eqn. 31 by g*, and note that the calculations are carried out at convergence (so that g*B = 0 and g_n = 1); then

α = 0    (33)
which violates our assumption. The same steps can be repeated for the vector g.
These arguments, therefore, prove that the vectors f and g are defined even at the solution. Eqn. 17 can now be solved by any appropriate method for solving nonlinear scalar equations. For simplicity, we consider only the Newton-Raphson (NR) method. For a given h and an initial estimate s_0 for the root s (pole or zero), the successive approximations to s can be obtained as
s_{i+1} = s_i − φ(s_i, h)/φ,s(s_i, h)    (34)
The sequence {s_i} is said to have converged if

|s_{i+1} − s_i| < ε|s_{i+1}|   if s_{i+1} ≠ 0, and
|s_{i+1} − s_i| < ε            otherwise    (35)

where ε is a small positive number. Typically, if ε is expressed as a negative power of ten, then the power indicates approximately the number of correct significant digits at convergence.
3.3 Sensitivity computation
At the root, eqn. 17 holds true. Differentiate it with respect to the jth component of h:

φ,j + φ,s s,j = 0    (36)
Hence, the differential sensitivity is given by
s,j = −φ,j/φ,s    (37)
and the normalised sensitivity S^s_{h_j} can be obtained from the following definitions [3]:

S^s_{h_j} = (h_j/s) s,j   if h_j ≠ 0 and s ≠ 0, or
          = (1/s) s,j     if h_j = 0 but s ≠ 0, or
          = h_j s,j       if h_j ≠ 0 but s = 0, or
          = s,j           if h_j = 0 and s = 0    (38)
Note that the partial derivatives in eqns. 37 and 38 are to be evaluated at the solution s. Therefore, the only unknown is φ,j; φ,s is available at convergence of the NR iteration 34. In most cases, φ,j can be computed virtually for free. The matrix A,j is very sparse and, for problems that arise from physical systems, generally has at most four nonzero entries, equal in magnitude and located in a ± pattern in the rows and columns associated with the element h_j (shown here for a two-terminal element):

A,j = [+a,j  −a,j; −a,j  +a,j]    (39)

Eqn. 20 for θ = j then simplifies to the short sum

φ,j = Σ_{k,l} g_{σ(k)} (A,j)_{kl} f_{π(l)}    (40)

where σ and π are vector equivalents of the permutation matrices T and P, respectively, and the sum runs only over the nonzero entries of A,j.
Returning to exprs. 15, 18 and 19, it appears that the computation of the function φ and the vectors f and g requires the inversion of the matrix B. This can, however, be avoided by the use of matrix decompositions, which not only reduce the overall cost but also provide extra insight into the techniques involved.
4 Triangular decomposition method
4.1 Root computation
Consider the well known triangular decomposition of the matrix B (or A) as follows:

B(s, h) = T(s, h)A(s, h)P(s, h) = L(s, h)U(s, h)    (41)

where L is the unit lower triangular and U the upper triangular matrix. Here T and P are commonly known as row and column permutation matrices, respectively. Eqn. 15, with the help of eqn. 41, yields
φ = 1/(e_n* U⁻¹ L⁻¹ e_n)    (42)

Upon simplification (L is unit lower triangular, so L⁻¹e_n = e_n, and U is upper triangular), we obtain simply

φ = u_nn    (43)
Hence, from eqn. 17, the eigenvalues are obtained by solving

φ(s, h) ≜ u_nn(s, h) = 0    (44)

From eqn. 41, it is easy to see that det A necessarily vanishes whenever u_nn reduces to zero, as

det A = (−1)^ξ ∏_{i=1}^{n} u_ii    (45)
where ξ is an integer representing the contribution due to T and P. Eqns. 18 and 19 can also be simplified. For this, from eqns. 18 and 41, we obtain

LUf = φ e_n

or

Uf = φ e_n    (46)

Also, eqns. 19 and 41 yield

U*L*g = φ̄ e_n

or, finally,

L*g = e_n    (47)
The above results can be summarised in the following theorem:
Theorem 4.1: Let f and g be two nonzero vectors, such that

Uf = φ e_n    (48)

L*g = e_n    (49)

then

φ,θ = g* T A,θ P f    (50)
4.2 Computational algorithm
The triangular decomposition algorithm can be summarised as:
(a) write the system equations 1, and obtain matrix A
(b) select an initial estimate s_0 of the desired root s. For i = 0, 1, 2, ... do the following:
(c) form A(s_i)
(d) decompose A into its triangular factors, eqn. 41, and obtain φ from eqn. 43
(e) find vectors f and g by back solving eqns. 48 and 49
(f) find φ,s from eqn. 50
(g) improve the estimate using eqn. 34
(h) repeat steps (c) through (g) until eqn. 35 is satisfied for a given ε
(i) compute φ,j from eqn. 40
(j) find differential and normalised sensitivities using eqns. 37 and 38.
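Steps (a) through (j) above can be sketched numerically. The following Python/NumPy fragment is a minimal illustration, not the authors' code: it assumes T comes from partial pivoting (via scipy.linalg.lu) and P = I, and uses a hypothetical test problem A(s, k) = kK0 − sI whose roots are s = k and s = 3k, so that the exact sensitivity s,k at the smaller root equals 1.

```python
import numpy as np
from scipy.linalg import lu, solve_triangular

K0 = np.array([[2.0, -1.0],
               [-1.0, 2.0]])
k = 1.0
A   = lambda s: k * K0 - s * np.eye(2)    # hypothetical system matrix, eqn. 1
A_s = lambda s: -np.eye(2)                # dA/ds
A_k = lambda s: K0                        # dA/dk

def phi_and_deriv(s, A_theta):
    p, L, U = lu(A(s))                    # A = p L U, so B = p' A = L U (P = I)
    n = U.shape[0]
    phi = U[-1, -1]                       # eqn. 43: phi = u_nn
    # Uf = phi*e_n (eqn. 48): the last row gives f_n = 1 exactly, so only the
    # leading (n-1) x (n-1) triangular system has to be back solved; this stays
    # well defined even when u_nn = 0 at the exact root.
    f = np.ones(n)
    f[:-1] = solve_triangular(U[:-1, :-1], -U[:-1, -1])
    en = np.zeros(n); en[-1] = 1.0
    g = solve_triangular(L.conj().T, en, lower=False)   # L*g = e_n, eqn. 49
    return phi, g.conj() @ (p.T @ A_theta(s) @ f)       # eqn. 50

s = 0.8                                   # initial estimate near the root s = 1
for _ in range(50):                       # Newton-Raphson, eqn. 34
    phi, phi_s = phi_and_deriv(s, A_s)
    step = phi / phi_s
    s -= step
    if abs(step) < 1e-12 * max(abs(s), 1.0):   # convergence test, eqn. 35
        break

_, phi_s = phi_and_deriv(s, A_s)
_, phi_k = phi_and_deriv(s, A_k)
print(round(s, 6), round(-phi_k / phi_s, 6))   # root and s,k from eqn. 37
```

Only triangular solves appear after the factorisation, consistent with the 2n² operation count discussed below.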
It is of interest to compare the computational cost of evaluating the derivatives of φ and of det A. Each of the steps (e) and (f) requires n² operations. Hence, for a dense A, a total of 2n² operations is required to compute the derivative of φ. Recall that the 1st-order derivative of the determinant requires at least n³ operations [6]. Also, here the vectors f and g can be reused for computing derivatives with respect to other arguments. An additional favourable feature of the algorithm is that the matrix systems in eqns. 48 and 49 are relatively well conditioned and remain valid, even at the exact solution.
4.3 Second derivatives of φ
We give here the following theorem to compute the 2nd-order derivatives of φ, sometimes required in design optimisation. The proof is omitted for brevity.
Theorem 4.2: Let b^θ and c^θ be two vectors, such that

L b^θ = (T A,θ P) f    (51)

U* c^θ = (T A,θ P)* g    (52)

Let b^η and c^η be another set of vectors satisfying eqns. 51 and 52 with θ replaced by η everywhere. Here θ and η represent s or the subscript j of h_j. Then

φ,θη = g*(T A,θη P) f − b^θ* c^η − b^η* c^θ    (53)

where b^θ* indicates the conjugate transpose of b^θ.
With the help of the above theorem, the 2nd-order root sensitivity can be determined by differentiating eqn. 36 with respect to h_k:

φ,jk + φ,js s,k + φ,s s,jk + (φ,sk + φ,ss s,k) s,j = 0

and arranging the result as

s,jk = −[φ,jk + φ,js s,k + (φ,sk + φ,ss s,k) s,j]/φ,s    (54)
Every derivative on the right-hand side is evaluated at theexact root location.
5 Unitary decomposition method
When the roots are required to very high accuracy, the triangular decomposition method may not be satisfactory. In such a case, the method of this Section can be used. At any specified complex frequency s, the matrix B (or A) can be decomposed into its unitary factors as

B = AP = QR    (55)

where P is the column-permutation matrix, Q the unitary and R the right (upper) triangular matrix.
Eqn. 15, with the help of eqn. 55, yields

φ = 1/(e_n* R⁻¹ Q* e_n)    (56)

On simplification, we obtain

φ = r_nn/(e_n* Q* e_n)

or

φ = r_nn/q̄_nn    (57)

Hence the eigenvalues are obtained by solving

φ(s, h) = r_nn(s, h)/q̄_nn(s, h) = 0    (58)
Observe that, as Q is a unitary matrix, |q_nn| ≤ 1, and, assuming q_nn ≠ 0, the zeros of the scalar function φ are the zeros of r_nn and are, in turn, the roots of the characteristic equation 9.
Using the NR formula and performing the same analysis as in Section 4, roots and their sensitivities can be determined. The derivatives of φ are computed from the following theorem, which is analogous to theorem 4.1:
Theorem 5.1: Let f and g be two nonzero vectors, such that

Rf = φ Q* e_n    (59)

Q*g = e_n/q_nn    (60)

then

φ,θ = g* A,θ P f    (61)
We omit the proof of the theorem for brevity. It is easy to see that eqns. 59 and 60 follow from eqns. 18 and 19. In fact, from eqns. 18 and 55, we obtain

QRf = φ e_n

On premultiplying by Q*, this reduces to eqn. 59. Also, from eqns. 19 and 55, we get

R*Q*g = φ̄ e_n

Further simplification, using eqn. 57, reduces this to eqn. 60.
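As a quick numeric sanity check of eqn. 58 (a sketch, not from the paper): with column-pivoted QR as provided by scipy.linalg.qr, φ = r_nn/q̄_nn vanishes at the eigenvalues of an arbitrarily chosen symmetric test matrix and remains well away from zero at an ordinary frequency.

```python
import numpy as np
from scipy.linalg import qr

M = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])     # hypothetical test matrix

def phi(s):
    # B = A*P = Q*R with A = M - s*I; eqn. 58 gives phi = r_nn / conj(q_nn)
    Q, R, piv = qr(M - s * np.eye(3), pivoting=True)
    return R[-1, -1] / np.conj(Q[-1, -1])

lam = np.linalg.eigvalsh(M)          # reference eigenvalues
print(abs(phi(lam[0])) < 1e-8, abs(phi(0.5)) > 1e-3)
```

Column pivoting keeps the (near-)zero pivot in the last diagonal position of R, which is what makes r_nn a reliable indicator of singularity.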
6 Singular-value decomposition method
Another method for very high accuracy solution requires B (or A) to be decomposed as

B = A = U D V*    (62)
where U and V are unitary matrices and D is a diagonal matrix. The diagonal elements of D, d_ii, are the singular values of A. Eqns. 62 and 15 yield

φ = 1/(e_n* V D⁻¹ U* e_n)    (63)
Since both U and V are unitary, we get

1/φ = Σ_{i=1}^{n} v_ni ū_ni / d_ii

The function φ is computed by writing the above equation as

φ = d_nn/α    (64)

where

α = v_nn ū_nn + d_nn Σ_{i=1}^{n−1} v_ni ū_ni / d_ii    (65)

Assuming

d_ii ≠ 0,   i = 1, 2, ..., n − 1    (66)

at simple roots, as |u_ni| ≤ 1 and |v_ni| ≤ 1, α will never be zero or infinite, and therefore zeros of φ are zeros of d_nn
which, in turn, are zeros of det A, i.e. the eigenvalues of A. The following theorem summarises the results for the derivative of φ:
Theorem 6.1: Let f and g be two nonzero vectors, such that

f = Vb    (67)

g = Uc    (68)

where b and c are found from the following equations:

Db = φ U* e_n    (69)

Dc = φ̄ V* e_n    (70)

Then,

φ,θ = g* A,θ f    (71)
Eqns. 67-71 can be derived from theorem 3.1 and eqns. 62-66. We omit the derivation for brevity.
Although the unitary and singular-value decompositions are relatively stable and accurate, they are at least twice as costly as triangular factorisation. Consequently, methods employing these factorisations are appropriate only for small or intermediate size problems.
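A corresponding check for the singular-value route (again a sketch on a hypothetical matrix): the zeros of φ coincide with the points where the smallest singular value d_nn of A(s) vanishes.

```python
import numpy as np

M = np.array([[4.0, 1.0],
              [1.0, 3.0]])            # hypothetical test matrix

def d_nn(s):
    # A = U D V* (eqn. 62); numpy returns the singular values in
    # descending order, so d[-1] is the smallest one, d_nn
    d = np.linalg.svd(M - s * np.eye(2), compute_uv=False)
    return d[-1]

lam = np.linalg.eigvalsh(M)           # eigenvalues (7 +/- sqrt(5))/2
print(d_nn(lam[0]) < 1e-10, d_nn(0.0) > 1.0)
```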
7 An analytic example
We illustrate the triangular decomposition method by considering the network shown in Fig. 1. With node voltages v_1 and v_2, and the output taken as V_out = v_2, the circuit equations are

[sC1 + G   −G; −G   sC2 + G] [v_1; v_2] = [I_in; 0]    (72)
Consider the pole

s = −G(C1 + C2)/(C1 C2)    (73)

Clearly, its normalised sensitivities with respect to G, C1 and C2 are 1, −C2/(C1 + C2) and −C1/(C1 + C2), respectively.
We now apply our method to find this pole and its sensitivities. The same analysis can be repeated for the other roots.
(a) A = s [C1  0; 0  C2] + [G  −G; −G  G]    (74)

      = [sC1 + G   −G; −G   sC2 + G]    (75)

(b) Let h = (G, C1, C2)'
(c) With T and P as identity matrices, the triangular factors are, eqn. 41,

L = [1  0; −G/(sC1 + G)  1]

U = [sC1 + G   −G; 0   [s²C1C2 + sG(C1 + C2)]/(sC1 + G)]    (76)

(d) Eqn. 43 yields

φ ≜ u_22 = [s²C1C2 + sG(C1 + C2)]/(sC1 + G)    (77)

Clearly, s in eqn. 73 is a zero of φ.
(e) Eqns. 48 and 49 yield

g = f = [G/(sC1 + G)   1]'    (78)

(f) Eqn. 50 yields

φ,s = [C2(sC1 + G)² + C1G²]/(sC1 + G)²    (79)

(g) Use eqn. 40 to obtain

φ,1 = s²C1²/(sC1 + G)²
φ,2 = sG²/(sC1 + G)²
φ,3 = s    (80)

where the subscripts 1, 2 and 3 refer to G, C1 and C2, respectively.
(h) Evaluate the derivatives in eqns. 79 and 80 at the root s of eqn. 73:

φ,s = C2(C1 + C2)/C1
φ,1 = (C1 + C2)²/C1²
φ,2 = −G(C1 + C2)C2/C1³
φ,3 = −G(C1 + C2)/(C1 C2)
(i) Eqns. 37 and 38, after simple manipulations, yield

S^s_G = 1    (81)

S^s_{C1} = −C2/(C1 + C2),   S^s_{C2} = −C1/(C1 + C2)    (82)

which are the required results.
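The analytic results above can be cross-checked numerically. The sketch below (element values G = 1, C1 = 1, C2 = 2 are hypothetical) applies steps (c) to (j) of Section 4.2 to the 2 × 2 matrix of eqn. 75: the expected pole is −G(C1 + C2)/(C1 C2) = −1.5, and the expected normalised sensitivities are 1, −2/3 and −1/3.

```python
import numpy as np

G, C1, C2 = 1.0, 1.0, 2.0                       # hypothetical element values

A = lambda s: np.array([[s*C1 + G, -G],
                        [-G, s*C2 + G]])        # eqn. 75
dA = {'G':  lambda s: np.array([[1.0, -1.0], [-1.0, 1.0]]),
      'C1': lambda s: np.array([[s, 0.0], [0.0, 0.0]]),
      'C2': lambda s: np.array([[0.0, 0.0], [0.0, s]]),
      's':  lambda s: np.diag([C1, C2])}

def phi_f_g(s):                                 # one elimination step, T = P = I
    a = A(s)
    m = a[1, 0] / a[0, 0]                       # multiplier l_21
    phi = a[1, 1] - m * a[0, 1]                 # phi = u_22, eqn. 77
    f = np.array([-a[0, 1] / a[0, 0], 1.0])     # Uf = phi*e_2, eqn. 48
    g = np.array([-m, 1.0])                     # L*g = e_2,    eqn. 49
    return phi, f, g

s = -1.2                                        # initial estimate
for _ in range(50):                             # Newton-Raphson, eqn. 34
    phi, f, g = phi_f_g(s)
    s -= phi / (g @ dA['s'](s) @ f)             # phi_,s from eqn. 50

phi, f, g = phi_f_g(s)
phi_s = g @ dA['s'](s) @ f
nominal = {'G': G, 'C1': C1, 'C2': C2}
sens = {p: (nominal[p] / s) * (-(g @ dA[p](s) @ f) / phi_s)
        for p in ('G', 'C1', 'C2')}             # eqns. 37 and 38
print(round(s, 6), {p: round(v, 6) for p, v in sens.items()})
```

As the section claims, the three sensitivities come out of the converged iteration at essentially no extra cost: only the three sparse products g A,j f are added.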
8 Root-locus plotting
Let a parameter a (from h) be allowed to vary from its nominal value. As a result, the roots are perturbed from their original locations. The trajectory of the roots obtained as a result of variations in a parameter value is commonly known as the root locus. Hence, the locus consists of a set ℒ defined as

ℒ ≜ {s | det A(s, a) = 0}    (83)
Fig. 1 Simple RC circuit
Although the above equation represents a path in the 3-dimensional (s, a) space, the plotting is usually carried out in the s-plane. The root locus consists of a set of branches, with each branch representing the locus of a
simple root. The points where two or more branches meet are known as multiple root points.
If A is decomposed into its triangular factors using appropriate pivoting, then it follows from Section 4 that eqn. 83 represents the set ℒ defined as

ℒ ≜ {s | u_nn(s, a) = 0}    (84)
Clearly, in addition to eqn. 84, some other conditions must be satisfied at multiple-root points. These conditions can be derived from the fact that, for a root with multiplicity ν, det A and its ν − 1 derivatives must reduce to zero. For example, at the double-root points, in addition to eqn. 84, the following condition must also be satisfied:

u_nn,s(s, a) = 0    (85a)

or

u_{n−1,n−1}(s, a) = 0    (85b)

Thus, at double-root points, φ may only have a simple root. Hence, the problem is to trace the locus defined by eqns. 84 and 85 for the double roots etc., starting from the given simple root points (s_i0, a_0) satisfying
φ(s_i0, a_0) = 0   ∀i    (86)
Use of the continuation method requires that this problem be first converted into an equivalent initial value problem. Dropping the subscript i for simplicity and using eqn. 37, we write (at simple-root points)

s,a = −φ,a/φ,s    (87)
Eqn. 87, along with eqn. 86, constitutes an initial value problem. Note that the derivatives of φ are available from theorem 4.1.
A predictor-corrector method is the most appropriate for integrating the above initial value problem, since it can automatically control the local truncation error of the calculation and reduce the step size accordingly [18]. Singhal [20] suggested that the NR method be used to refine the approximations supplied by the predictor, so as to eliminate the local truncation error. This improves the performance of the method significantly. The step size is then controlled by the number of corrector iterations.
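The predictor-corrector scheme can be sketched as follows on a hypothetical 2 × 2 system A(s, a) = [s  −1; a  s+2], whose characteristic equation is s² + 2s + a = 0, with roots s = −1 ± √(1 − a) and a double root at a = 1. For this small example φ and its derivatives are written out in closed form; theorem 4.1 would supply them in general.

```python
import numpy as np

def phi_derivs(s, a):
    # One elimination step on A(s, a) with pivot s gives
    # phi = u_22 = (s^2 + 2s + a)/s and its partial derivatives
    phi   = (s + 2.0) + a / s
    phi_s = 1.0 - a / s**2
    phi_a = 1.0 / s
    return phi, phi_s, phi_a

a, da = 0.1, 0.05
s = -1.0 - np.sqrt(1.0 - a)          # exact simple-root starting point, eqn. 86
while a < 0.9 - 1e-9:                # stop before the double root at a = 1
    _, phi_s, phi_a = phi_derivs(s, a)
    s += da * (-phi_a / phi_s)       # forward-Euler predictor, eqn. 87
    a += da
    for _ in range(10):              # Newton-Raphson corrector, eqn. 34
        phi, phi_s, _ = phi_derivs(s, a)
        s -= phi / phi_s

print(round(s, 8), round(-1.0 - np.sqrt(1.0 - a), 8))   # traced vs exact root
```

The corrector pulls each Euler prediction back onto the branch u_nn(s, a) = 0, so the truncation error does not accumulate along the locus.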
Using the above technique, the double roots can be localised precisely with the help of eqns. 84 and 85. If φ has a simple root at these points, the sensitivity can still be calculated from eqn. 37; otherwise, alternative expressions can be derived, following the work of Papoulis [14].
We demonstrate the application of this technique to obtain the locus of the poles of the output V_2 of the network shown in Fig. 2. Observe that the poles of V_2 satisfy

det A(s, a) = 0    (88)

where A is the system matrix of the network. This equation is similar to eqn. 6 and can be considered as
Fig. 2 Active circuit
an eigen problem. The forward Euler and Newton formulas are used as predictor and corrector, respectively. The computer plot of the locus for 0 ≤ a ≤ 6 is shown in Fig. 3.
Fig. 3 Root locus for network of Fig. 2 (real part of s on the horizontal axis)
The double roots are located at a = 1 and a = 5. The execution time (including plotting) was 0.8 s on an IBM 4341.
9 Conclusions
Methods based on matrix decomposition were proposed for efficient and accurate computation of system roots and their sensitivities. The triangular decomposition method is appropriate for large problems, since matrix sparsity can be exploited. An example demonstrating application to root-locus plotting was considered.
10 Acknowledgment
This work was supported through grants from the Natural Sciences and Engineering Research Council of Canada.
11 References
1 FRAZER, R.A., DUNCAN, W.J., and COLLAR, A.R.: 'Elementary matrices and some applications to dynamics and differential equations' (Cambridge University Press, 1963)
2 TOMOVIC, R.: 'Sensitivity analysis of dynamic systems' (McGraw-Hill, New York, 1963)
3 VLACH, J., and SINGHAL, K.: 'Sensitivity minimization of networks with operational amplifiers and parasitics', IEEE Trans., 1980, CAS-27, pp. 688-697
4 MITRA, S.K.: 'Analysis and synthesis of linear active networks' (John Wiley, New York, 1969)
5 KAUFMAN, I.: 'On poles and zeros of linear systems', IEEE Trans., 1973, CT-20, pp. 93-101
6 LANCASTER, P.: 'Lambda-matrices and vibrating systems' (Pergamon Press, Oxford, 1966)
7 CALAHAN, D.A., and McCALLA, W.J.: 'Eigenvalue methods for sparse matrices', in ROSE, D.J., and WILLOUGHBY, R.A. (Eds.): 'Sparse matrices and their applications' (Plenum, New York, 1972), pp. 25-30
8 WONG, Y.M., and POTTLE, C.: 'On the sparse matrix computation of critical frequencies', IEEE Trans., 1976, CAS-23, pp. 92-95
9 PAPATHOMAS, G.V., and WING, O.: 'Sparse Hessenberg reduction and the eigenvalue problem for large sparse matrices', ibid., 1976, CAS-23, pp. 739-744
10 KAUFMAN, I.: 'On poles and zeros of very large and sparse systems'. Proceedings IEEE ISCAS, San Francisco, CA, 1974, pp. 510-514
11 FRANCIS, J.G.F.: 'The QR transformation—Pt. I', Comput. J., 1961,4, pp. 265-271
12 WILKINSON, J.H.: 'The algebraic eigenvalue problem' (Oxford University Press, 1965)
13 MOLER, C.B., and STEWART, G.W.: 'An algorithm for generalized matrix eigenvalue problems', SIAM J. Numer. Anal., 1973, 10, pp. 241-256
14 PAPOULIS, A.: 'Perturbations of the natural frequencies and eigenvectors of a network', IEEE Trans., 1966, CT-13, pp. 188-195
15 FOX, R.L., and KAPOOR, U.P.: 'Rates of change of eigenvalues and eigenvectors', AIAA J., 1968, 6, pp. 2426-2429
16 PLAUT, R.H., and HUSEYIN, K.: 'Derivatives of eigenvalues and eigenvectors in non-self-adjoint systems', ibid., 1973, 11, pp. 250-251
17 CARDANI, C., and MANTEGAZZA, P.: 'Calculation of eigenvalue and eigenvector derivatives for algebraic flutter and divergence eigenproblems', ibid., 1979, 17, pp. 408-412
18 PAN, C.T., and CHAO, K.S.: 'A computer-aided root-locus method', IEEE Trans., 1978, AC-23, pp. 856-860
19 WASSERSTROM, E.: 'Numerical solutions by the continuation method', SIAM Rev., 1973, 15, pp. 89-119
20 SINGHAL, K.: 'Generalized eigenvalue problem for large sparse systems'. Int. symposium on large engineering systems, Waterloo, Canada, 1978, pp. 315-319
21 TURNBULL, H.W., and AITKEN, A.C.: 'An introduction to the theory of canonical matrices' (London, England, 1950), pp. 161-162
K. Singhal received the B.Tech (Hons.) degree from the Indian Institute of Technology, Kharagpur, India, in 1966, and the M.S. and Eng.Sc.D. degrees from Columbia University, New York, in 1967 and 1970, respectively. Since 1970, he has been with the University of Waterloo, and is currently Professor and Associate Chairman for graduate studies in the Department of Systems Design. He is the acting director of the Pattern Analysis and Machine Intelligence group of the Institute for Computer Research. His research interests include computer-aided circuit analysis and design, numerical methods, tolerance analysis and design, and VLSI design aids. He is co-author of the recent text 'Computer methods for circuit analysis and design', published by Van Nostrand Reinhold. Dr. Singhal is a member of Sigma Xi and the Association of Professional Engineers of Ontario, and is a Senior Member of the IEEE.
Narendra K. Jain was born in Jodhpur, India, in 1954. He received the B.E. (Hons.) degree in electronics and communication engineering from the University of Jodhpur in 1976, and the Ph.D. degree in systems-design engineering from the University of Waterloo, Waterloo, Ontario, in 1982.
From 1976 to 1978 he worked as a faculty member of the Electrical Engineering Department of the University of Jodhpur. From 1979 to 1981 he worked as a teaching assistant in the Systems Design Department of the University of Waterloo. In May 1982, he joined Bell Northern Research, Ottawa, and subsequently transferred to Northern Telecom, Ottawa. Since then he has been working as a member of scientific staff in the Semiconductor Component Group of Northern Telecom Electronics Limited, Ottawa.
Dr. Jain has research interests in all areas of computer-aided design and numerical methods.