
Inverse Problems 14 (1998) 1513–1525. Printed in the UK. PII: S0266-5611(98)91222-9

A variational algorithm for electrical impedance tomography

Ian Knowles
Department of Mathematics, University of Alabama at Birmingham, Birmingham, AL 35294, USA

Received 2 February 1998

Abstract. The problem of computing the coefficient function p in the elliptic differential equation ∇·(p(x)∇u) = 0, x ∈ Ω ⊂ R^n, n ≥ 2, over a bounded region Ω, from a knowledge of the Dirichlet–Neumann map for this equation, is of interest in electrical impedance tomography. A new approach to the computation of p involving the minimization of an associated functional is presented. The algorithm is simple to implement and robust in the presence of noise in the Dirichlet–Neumann data.

1. Introduction

Let Ω ⊂ R^n be an open simply-connected bounded set with a C^{1,1} boundary and let p ∈ L^∞(Ω) satisfy, for some constant ν, the uniform ellipticity condition

p(x) ≥ ν > 0, x ∈ Ω. (1.1)

Consider for n ≥ 2 the elliptic equation

A_p u = −∇·(p(x)∇u) = 0, x ∈ Ω ⊂ R^n. (1.2)

For φ ∈ H^{1/2}(∂Ω) there is a unique u ∈ H^1(Ω) satisfying (1.2) and the Dirichlet boundary condition

u|_{∂Ω} = φ. (1.3)

For each p satisfying the conditions described above, we can define a Dirichlet–Neumann map, Λ_p : H^{1/2}(∂Ω) → H^{−1/2}(∂Ω), by

Λ_p(φ) = p ∂u/∂n |_{∂Ω} (1.4)

where p ∂u/∂n|_{∂Ω} denotes the conormal derivative of u at the boundary. This map is self-adjoint, and both bounded and invertible, at least for p smooth enough [18, p 559]. We are interested in the corresponding inverse problem: given Λ_p for some p, find p. It is known that for p smooth enough (see [19] and the references therein) p is uniquely determined by Λ_p; for general p ∈ L^∞(Ω) satisfying (1.1) uniqueness is as yet unproven, but widely believed to be true nonetheless.

This inverse problem is of considerable practical interest in the general area of non-invasive imaging and non-destructive testing. In electrical impedance tomography, for example, impedance imaging systems (see, for example, [2, 21]) apply currents to the surface of a body, measure the resulting voltages on the surface, and from this information attempt


to reconstruct the electrical impedance in the interior of the body. In the above notation, the function p corresponds to conductivity inside the body, the conormal derivative of u corresponds to currents applied at the surface, and the function φ corresponds to measured surface voltages, so the measured data correspond to knowing (some approximation to) the inverse mapping Λ_p^{−1}. In the case of the human body, as the various organs have different conductivities (see, for example, the table in [3, p 152]), one can in theory construct an image of the interior.

While the problem has generated much interest (and a sizeable literature, see, with no claim as to completeness, [3, 6–8, 10–12, 14–17, 22, 27–30], and references therein), to date the search for a stable, accurate and efficient reconstruction algorithm is still ongoing [5, p 209]. Given the ill-posed nature of the problem of recovering interior information from boundary data, one expects that there will be severe resolution limitations; the goal is to determine algorithms that are close to optimal under these circumstances.

Our objective here is to present a new approach to this reconstruction that shows promise. The algorithm is stable both in the presence of noise and at high iteration counts.

The basic idea is as follows. Let P ∈ L^∞(Ω) be the function to be reconstructed, and assume that Λ_P is known. Let {φ_i : i = 1, 2, …} be a basis of H^{1/2}(∂Ω), and for p ∈ L^∞(Ω) let u_{p,i} be the solution of (1.2) and (1.3) with φ = φ_i; also, let û_{p,i} be the solution of (1.2) satisfying the Neumann condition

p ∂u/∂n |_{∂Ω} = Λ_P(φ_i) (1.5)

and such that

û_{p,i}(x_0) = u_{p,i}(x_0) (1.6)

at some fixed x_0 ∈ Ω; the latter condition ensures that û_{p,i} is defined uniquely. For p in

D_G = {p ∈ L^∞(Ω) : p satisfies (1.1) and (P − p)|_{∂Ω} = 0}

define

G(p) = ∑_{i=1}^∞ γ_i ∫_Ω p|∇(u_{p,i} − û_{p,i})|² (1.7)

where the γ_i > 0 are chosen so that the series converges. We show that for h ∈ L^∞(Ω) with h|_{∂Ω} = 0,

G′(p)[h] = ∫_Ω h ∑_{i=1}^∞ γ_i (|∇u_{p,i}|² − |∇û_{p,i}|²) (1.8)

and, furthermore, that G(p) ≥ 0, and G(p) = 0 if and only if p = P. So, in theory at least, one can recover P by minimizing G. We present a preconditioned conjugate gradient approach to effect this reconstruction.
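To make the structure of (1.7) and (1.8) concrete, the following Python sketch evaluates a discretized G (with γ_i = 1) and the integrand of (1.8) on a uniform grid. The two solver callables are assumed helpers, stand-ins for whatever elliptic solver is available; they are not part of the paper.

```python
import numpy as np

def functional_and_gradient_density(p, phis, psis, solve_dirichlet, solve_neumann, hx):
    """Discrete analogue of G(p) in (1.7) (gamma_i = 1) and of the integrand
    in (1.8).  `p` lives on an (n x n) grid with spacing hx; `phis` are the
    Dirichlet data phi_i and `psis` the measured Neumann data Lambda_P(phi_i).
    The callables are assumed helpers: solve_dirichlet(p, phi) and
    solve_neumann(p, psi) should return grid solutions of (1.2) with the
    stated boundary data, the Neumann one normalized as in (1.6)."""
    G = 0.0
    density = np.zeros_like(p, dtype=float)
    for phi, psi in zip(phis, psis):
        u = solve_dirichlet(p, phi)        # u_{p,i}: Dirichlet solution
        u_hat = solve_neumann(p, psi)      # u-hat_{p,i}: Neumann solution
        ux, uy = np.gradient(u, hx)
        vx, vy = np.gradient(u_hat, hx)
        G += np.sum(p * ((ux - vx) ** 2 + (uy - vy) ** 2)) * hx * hx
        density += (ux ** 2 + uy ** 2) - (vx ** 2 + vy ** 2)   # (1.8) integrand
    return G, density
```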

We note in passing that the functional (1.7) has essentially the same form (modulo notation changes) as that used in Kohn and McKenney [14], while Wexler et al [28] replace the factor p in the integrand of (1.7) by p². The major difference between the approach used in these papers and that of the present work is that we treat the dependence in (1.7) of the terms u_{p,i} and û_{p,i} on p directly, while in [14, 28] these terms are regarded as separate variables in the functional for the purposes of iteration or minimization. As will be seen later, these two approaches, while theoretically equivalent, appear to have quite different numerical behaviour.


2. Properties of G

Consider first the following lemma.

Lemma 2.1. For φ ∈ H^{1/2}(∂Ω) and p ∈ D_G let u_p denote the solution of (1.2) and (1.3) and let û_p denote the solution of (1.2) satisfying the Neumann condition p ∂u/∂n|_{∂Ω} = Λ_P(φ), and set

G_φ(p) = ∫_Ω p|∇(u_p − û_p)|².

Then G_φ is Gâteaux differentiable on D_G and for h ∈ L^∞(Ω) with h|_{∂Ω} = 0,

G′_φ(p)[h] = ∫_Ω h(|∇u_p|² − |∇û_p|²). (2.1)

Proof. By subtracting

−∇·(p∇u_p) = 0 (2.2)
−∇·((p + εh)∇u_{p+εh}) = 0 (2.3)

we obtain

A_p(u_{p+εh} − u_p) = ε∇·(h∇u_{p+εh}) (2.4)

and hence

(u_{p+εh} − u_p)(x) = ε ∫_Ω G_D(x, y) ∇·(h(y)∇u_{p+εh}(y)) dy

where G_D denotes the Green function for the (positive) homogeneous Dirichlet operator A_p on Ω. Consequently

∇(u_{p+εh} − u_p)(x) = ε ∫_Ω ∇_x G_D(x, y) ∇·(h(y)∇u_{p+εh}(y)) dy. (2.5)

By a similar argument

∇(û_{p+εh} − û_p)(x) = ε ∫_Ω ∇_x G_N(x, y) ∇·(h(y)∇û_{p+εh}(y)) dy (2.6)

where G_N denotes the Green function for the analogous homogeneous Neumann operator, A_p, on Ω. Then

(G_φ(p + εh) − G_φ(p))/ε = ∫_Ω h|∇(u_{p+εh} − û_{p+εh})|²
  + (1/ε) ∫_Ω p∇(u_{p+εh} − u_p) · ∇(u_{p+εh} + u_p − û_{p+εh} − û_p)
  − (1/ε) ∫_Ω p∇(û_{p+εh} − û_p) · ∇(u_{p+εh} + u_p − û_{p+εh} − û_p)

= ∫_Ω h|∇(u_{p+εh} − û_{p+εh})|²
  + ∫_Ω ∇·(h(y)∇u_{p+εh}) ∫_Ω p(x)∇_x G_D(x, y) · ∇(u_{p+εh} + u_p − û_{p+εh} − û_p) dx dy
  − ∫_Ω ∇·(h(y)∇û_{p+εh}) ∫_Ω p(x)∇_x G_N(x, y) · ∇(u_{p+εh} + u_p − û_{p+εh} − û_p) dx dy

from (2.5) and (2.6). Now for fixed y, and integrating by parts,

∫_Ω p(x)∇_x G_D(x, y) · ∇(u_{p+εh} + u_p − û_{p+εh} − û_p) dx
  = −∫_Ω ∇·(p(x)∇(u_{p+εh} + u_p − û_{p+εh} − û_p)) G_D(x, y) dx
    + ∫_{∂Ω} p(x) G_D(x, y) ∂/∂n (u_{p+εh} + u_p − û_{p+εh} − û_p) dS(x)
  = ε ∫_Ω G_D(x, y) ∇·(h∇(u_{p+εh} − û_{p+εh})) dx

from (2.2) and (2.3), and the fact that G_D(x, y) = 0 for x ∈ ∂Ω. Also, as ∂G_N(x, y)/∂n_x = 0 for x ∈ ∂Ω,

∫_Ω p(x)∇_x G_N(x, y) · ∇(u_{p+εh} + u_p − û_{p+εh} − û_p) dx
  = −∫_Ω (u_{p+εh} + u_p − û_{p+εh} − û_p)(x) ∇·(p(x)∇_x G_N(x, y)) dx
  = (u_{p+εh} + u_p − û_{p+εh} − û_p)(y)

so that if we let ε → 0, and note that h|_{∂Ω} = 0,

G′_φ(p)[h] = ∫_Ω h(y)|∇(u_p − û_p)|² dy − 2 ∫_Ω (u_p − û_p) ∇·(h(y)∇û_p) dy
  = ∫_Ω h(|∇u_p|² − |∇û_p|²)

as required. □

Some of the more useful properties of the functional G are summarized in the following theorem.

Theorem 2.2. Let G be as defined in (1.7).
(a) For p ∈ D_G, G(p) ≥ 0, and, assuming that uniqueness holds for the inverse problem, G(p) = 0 if and only if p = P.
(b) The functional G is Gâteaux differentiable on D_G and for h ∈ L^∞(Ω) with h|_{∂Ω} = 0,

G′(p)[h] = ∫_Ω h ∑_{i=1}^∞ γ_i (|∇u_{p,i}|² − |∇û_{p,i}|²). (2.7)

Proof. That G(p) ≥ 0 is clear, and if G(p) = 0 then ∇(u_{p,i} − û_{p,i}) = 0 on Ω for all i, so that each u_{p,i} − û_{p,i} is constant on Ω; as û_{p,i} must be equal to u_{p,i} at one point, these constants must all be zero. It follows from this that Λ_p = Λ_P, and hence by uniqueness that p = P. Part (b) follows directly from lemma 2.1. □

We note in passing that the Gâteaux derivatives computed above are also Fréchet derivatives, but we omit the details. One can also show that the second derivative of G_φ is given by

G″_φ(p)[h, k] = 2{(Â_p^{−1}(∇·(h∇û_p)), ∇·(k∇û_p)) − (A_p^{−1}(∇·(h∇u_p)), ∇·(k∇u_p))} (2.8)

for h, k ∈ L^∞(Ω) with h|_{∂Ω} = k|_{∂Ω} = 0, where (·, ·) denotes the usual inner product in L²(Ω), A_p^{−1} and Â_p^{−1} denote the solution operators for the Dirichlet and Neumann problems respectively, and there is an analogous expression for G″(p).

The following convergence result is also of interest. For sequences γ = (γ_1, γ_2, …) and β = (β_1, β_2, …) satisfying |β_i| ≤ γ_i we write |β| ≤ γ.


Theorem 2.3. Assume that Ω is a bounded connected extension domain and let {p_n} be a bounded sequence in D_G, the elements of which each satisfy (1.1) uniformly. Assume also that γ_i > 0 for all i. If G(p_n) → 0 as n → ∞ then

‖Λ_P − Λ_{p_n}‖_γ → 0 as n → ∞ (2.9)

where

‖Λ_P − Λ_{p_n}‖_γ = sup_{φ = ∑ β_i φ_i, |β| ≤ γ} ‖(Λ_{p_n} − Λ_P)φ‖_{H^{−1/2}(∂Ω)}.

Proof. From the fact that p_n ≥ ν > 0 for all n we have that

∑_{i=1}^∞ γ_i ∫_Ω |∇(u_{p_n,i} − û_{p_n,i})|² → 0 as n → ∞. (2.10)

Next, if x_0 is the special point in condition (1.6), define T ∈ H^1(Ω)* by T(u) = u(x_0), and an associated projection L from H^1(Ω) to the set of constant functions by taking Lu to be the constant function with value u(x_0). Then from [31, theorem 4.2.1] we have the Poincaré-type inequality for functions v in H^1(Ω):

‖v − L(v)‖_0 ≤ C‖T‖ T(1) ‖∇v‖_0 (2.11)

where C = C(Ω). If we set v = u_{p_n,i} − û_{p_n,i}, then it follows from (1.6) that L(v) = 0 and hence from (2.11) that for some constant K and each n, i,

‖u_{p_n,i} − û_{p_n,i}‖_0 ≤ K‖∇(u_{p_n,i} − û_{p_n,i})‖_0.

It follows from (2.10) that

∑_{i=1}^∞ γ_i ‖u_{p_n,i} − û_{p_n,i}‖²_1 → 0 as n → ∞. (2.12)

From standard theory on Sobolev embeddings (see, for example, [9, theorem 1.5.1.3]) we know that there is a unique continuous (and invertible) trace operator t from H^1(Ω) onto H^{1/2}(∂Ω). Consequently, for some constant M,

‖t(u_{p_n,i} − û_{p_n,i})‖_{H^{1/2}(∂Ω)} ≤ M‖u_{p_n,i} − û_{p_n,i}‖_1

where

t(u_{p_n,i} − û_{p_n,i}) = (I − Λ_{p_n}^{−1}Λ_P)φ_i.

From (2.12) we then have

∑_{i=1}^∞ γ_i ‖(I − Λ_{p_n}^{−1}Λ_P)φ_i‖²_{H^{1/2}(∂Ω)} → 0 as n → ∞. (2.13)

As the sequence {p_n} is bounded, the norms ‖Λ_{p_n}‖_γ are uniformly bounded, and from Λ_{p_n} − Λ_P = Λ_{p_n}(I − Λ_{p_n}^{−1}Λ_P) we have that

∑_{i=1}^∞ γ_i ‖(Λ_{p_n} − Λ_P)φ_i‖²_{H^{−1/2}(∂Ω)} → 0 as n → ∞. (2.14)

The result (2.9) now follows. □


It is known [1] that for coefficients p, P smooth enough

‖P − p‖_{L^∞(Ω)} ≤ C w(‖Λ_p − Λ_P‖_{1/2,−1/2})

where

w(t) = (1/(−log t))^δ, 0 < t < 1,

and 0 < δ < 1. While this result is unlikely to be true for general L^∞ coefficients, it may be possible to prove a result like this for the ‖·‖_γ norm with ‖P − p‖_{L^1(Ω)} replacing ‖P − p‖_{L^∞(Ω)}; in this event we would then have an L^1 convergence theorem for the above method.

3. Implementation and results

Practical data consist of a finite collection of approximate voltage–current pairs {(φ_i, ψ_i) : 1 ≤ i ≤ m}, so we minimize

G_m(p) = ∑_{i=1}^m ∫_Ω p|∇(u_{p,i} − û_{p,i})|².

Here we have set γ_i = 1 for 1 ≤ i ≤ m, and γ_i = 0 for i > m.

If one intends to minimize G_m by some descent method, it is important to observe that the L² gradient,

∇G_m(p) = ∑_{i=1}^m (|∇u_{p,i}|² − |∇û_{p,i}|²),

in general is non-zero on the boundary of Ω, and is thus unsuitable as an update direction for p because we require that the values of p on the boundary remain unchanged so that the updated p still lies in D_G. One can, however, use a Neuberger gradient (see [20]), ∇_N G_m(p), defined in this situation by

(∇_N G_m(p), h)_1 = G′_m(p)[h]

for h ∈ H^1_0(Ω), where (·, ·)_1 denotes the usual inner product in H^1(Ω). If we set g = −∇_N G_m(p) it is not hard to see via an integration by parts that g can be computed by solving the following Dirichlet problem:

−Δg + g = −∇G_m(p) (3.1)
g|_{∂Ω} = 0. (3.2)

Thus, not only is this gradient zero on the boundary of the given region Ω, but g = (Δ − I)^{−1}∇G_m(p), so that it is a preconditioned version of the original L² gradient. It can be shown, using an elliptic regularity estimate of De Giorgi, that for p ∈ D_G, ∇_N G_m(p) ∈ L^∞(Ω) (see [13]), so that when p ∈ D_G, the descent updates of p also lie in D_G.
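As an illustration, one way to carry out the preconditioning step (3.1)–(3.2) on a uniform grid over the square region used below is sketched here; the five-point discretization and the sparse direct solve are choices of this sketch (the paper itself used ELLPACK).

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def neuberger_gradient(r, hx):
    """Solve -Δg + g = r on the interior of a square grid, g = 0 on the
    boundary (equations (3.1)-(3.2)), by the standard 5-point scheme.
    `r` holds -∇G_m(p) sampled on the full (n x n) grid; only interior
    values enter the solve."""
    n = r.shape[0]
    m = n - 2                           # interior nodes per direction
    # 1D second-difference matrix (Dirichlet boundary conditions)
    T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(m, m)) / hx**2
    I = sp.identity(m)
    A = sp.kron(I, T) + sp.kron(T, I) + sp.identity(m * m)   # -Δ + I
    rhs = r[1:-1, 1:-1].ravel()
    g = np.zeros_like(r, dtype=float)
    g[1:-1, 1:-1] = spla.spsolve(A.tocsr(), rhs).reshape(m, m)
    return g                            # zero on the boundary by construction

if __name__ == "__main__":
    n = 49                              # 49 x 49 grid on [-1,1] x [-1,1]
    x = np.linspace(-1.0, 1.0, n)
    hx = x[1] - x[0]
    X, Y = np.meshgrid(x, x, indexing="ij")
    r = np.cos(np.pi * X) * np.cos(np.pi * Y)    # a stand-in for -∇G_m(p)
    g = neuberger_gradient(r, hx)
    print("g vanishes on the boundary:",
          np.allclose(g[0, :], 0.0) and np.allclose(g[:, 0], 0.0))
    print("max |g|:", np.abs(g).max())
```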

For a given choice of the initial p_0 one could now use steepest descent, beginning with the direction −∇_N G_m(p_0), together with a one-dimensional line search routine, to minimize G. In practice, one gets faster (by, approximately, a factor of two) convergence with the following adaptation of the standard Polak–Ribière conjugate gradient scheme [24, p 304]. The initial search direction is h_0 = g_0 = −∇_N G_m(p_0). At p_i one uses the approximate line search routine to minimize G_m(p) in the direction of h_i, resulting in G_m(p_{i+1}). Then g_{i+1} = −∇_N G_m(p_{i+1}) and h_{i+1} = g_{i+1} + γ_i h_i where

γ_i = (g_{i+1} − g_i, g_{i+1})_1 / (g_i, g_i)_1 = (g_{i+1} − g_i, ∇G_m(p_{i+1})) / (g_i, ∇G_m(p_i))

by (3.1) and an integration by parts, where (·, ·) and (·, ·)_1 denote the usual inner products in, respectively, L²(Ω) and H^1(Ω). The approximate line search procedure is implemented using bracketing and Brent minimization (see [24]) with all two-dimensional integrals computed as iterated one-dimensional integrals using Simpson's rule.
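The iterated one-dimensional Simpson quadrature mentioned above amounts to the following (a small sketch for function values on a tensor grid):

```python
import numpy as np
from scipy.integrate import simpson

def integrate_2d(F, x, y):
    """Iterated 1D Simpson's rule for the double integral of F over the grid:
    integrate over x for each fixed y, then integrate the result over y."""
    inner = simpson(F, x=x, axis=0)
    return simpson(inner, x=y)

if __name__ == "__main__":
    x = np.linspace(-1.0, 1.0, 49)
    y = np.linspace(-1.0, 1.0, 49)
    X, Y = np.meshgrid(x, y, indexing="ij")
    print(integrate_2d(X**2 * Y**2, x, y), "vs exact", 4.0 / 9.0)
```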

This problem is known to be seriously ill-posed. If one attempts a direct minimization, the ill-posedness manifests itself in the computed p tending to become negative in places, causing the elliptic solvers to become unstable. We control this by simply truncating, at the end of each descent step, all computed p-values below a certain predetermined cut-off value. One is often justified in doing this on physical grounds by the presence of a 'background' value for p which is known accurately. One is thus making the algorithm conditionally well-posed by imposing an additional restriction, a time-honoured method of stabilizing an ill-posed problem [23]. While this modification means that we have an iterative, rather than a descent, method, the overall effect is that one still descends, but more slowly, and with great stability, as the examples in figures 3 and 4 below indicate. In the examples, the cut-off value was always chosen to be 0.5.
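Putting the pieces together, the descent loop with the Polak–Ribière update and the cut-off truncation can be sketched as follows. Gm, l2_grad and neuberger are assumed helpers (they evaluate the discretized functional, the field ∑_i(|∇u_{p,i}|² − |∇û_{p,i}|²), and the solution of (3.1)–(3.2), respectively), and the bracketing interval for the line search is an arbitrary choice of the sketch.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def reconstruct(p0, Gm, l2_grad, neuberger, hx, n_iter=200, cutoff=0.5):
    """Sketch of the descent loop of section 3.  The Polak-Ribiere weight is
    computed with the L2 pairings (g_{i+1}-g_i, grad G_m(p_{i+1})) and
    (g_i, grad G_m(p_i)), which by (3.1) give the same ratio as the H1
    inner products used in the text."""
    def l2(a, b):                                # discrete L2 inner product
        return float(np.sum(a * b)) * hx * hx

    def clamp(q):                                # cut-off truncation step
        return np.maximum(q, cutoff)

    p = clamp(p0.copy())
    r = l2_grad(p)                               # L2 gradient field
    g = -neuberger(r)                            # negative Neuberger gradient
    h = g.copy()                                 # initial search direction
    for _ in range(n_iter):
        # approximate line search along h: bracketing + Brent minimization
        t = minimize_scalar(lambda s: Gm(clamp(p + s * h)),
                            bracket=(0.0, 1.0)).x
        p = clamp(p + t * h)
        r_new = l2_grad(p)
        g_new = -neuberger(r_new)
        gamma = l2(g_new - g, r_new) / l2(g, r)  # Polak-Ribiere weight
        h = g_new + gamma * h
        r, g = r_new, g_new
    return p
```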

For the test region we chose Ω = [−1, 1] × [−1, 1]. The algorithm was tested on synthetic data obtained by using for P the following functions defined on [−1, 1] × [−1, 1]:

P1(x, y) = 2 if |x| < 0.5 and |y| < 0.5; 0.5 otherwise.

P2(x, y) =
  −6y + 5 if |x| ≤ y and 0.5 ≤ y ≤ 0.75
  6x + 5 if x ≤ −|y| and −0.75 ≤ x ≤ −0.5
  6y + 5 if y ≤ −|x| and −0.75 ≤ y ≤ −0.5
  −6x + 5 if |y| ≤ x and 0.5 ≤ x ≤ 0.75
  2.0 if |x| < 0.5 and |y| < 0.5
  0.5 otherwise.

P3(x, y) =
  2 if |x + 0.5| < 0.25 and |y − 0.5| < 0.25
  1.5 if |x − 0.5| < 0.25 and |y − 0.5| < 0.25
  1.0 if |x| < 0.25 and |y + 0.5| < 0.25
  0.5 otherwise.

P4(x, y) = 2.0 if |y| < x + 0.95 and |x − y| < 0.95; 0.5 otherwise.
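Transcribed directly, the test conductivities can be written, for example, as the following Python functions (the names and the use of np.select are illustrative only):

```python
import numpy as np

def P1(x, y):
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    return np.where((np.abs(x) < 0.5) & (np.abs(y) < 0.5), 2.0, 0.5)

def P2(x, y):
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    conds = [
        (np.abs(x) <= y) & (0.5 <= y) & (y <= 0.75),      # top ramp
        (x <= -np.abs(y)) & (-0.75 <= x) & (x <= -0.5),   # left ramp
        (y <= -np.abs(x)) & (-0.75 <= y) & (y <= -0.5),   # bottom ramp
        (np.abs(y) <= x) & (0.5 <= x) & (x <= 0.75),      # right ramp
        (np.abs(x) < 0.5) & (np.abs(y) < 0.5),            # flat top
    ]
    vals = [-6 * y + 5, 6 * x + 5, 6 * y + 5, -6 * x + 5, np.full_like(x, 2.0)]
    return np.select(conds, vals, default=0.5)

def P3(x, y):
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    conds = [
        (np.abs(x + 0.5) < 0.25) & (np.abs(y - 0.5) < 0.25),
        (np.abs(x - 0.5) < 0.25) & (np.abs(y - 0.5) < 0.25),
        (np.abs(x) < 0.25) & (np.abs(y + 0.5) < 0.25),
    ]
    return np.select(conds, [2.0, 1.5, 1.0], default=0.5)

def P4(x, y):
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    return np.where((np.abs(y) < x + 0.95) & (np.abs(x - y) < 0.95), 2.0, 0.5)

if __name__ == "__main__":
    xs = np.linspace(-1.0, 1.0, 5)
    X, Y = np.meshgrid(xs, xs, indexing="ij")
    print(P2(X, Y))
```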

The test data (Λ_{P_i}^{−1}ψ_j, ψ_j), i = 1, 2, 3, 4, 1 ≤ j ≤ m, were constructed by solving the equation (1.2) with p = P_i and using as Neumann data basis functions formed from the functions ψ_{rs}(x, y) = (x^r y^s)|_{∂Ω} for r, s ≤ N, for some N (generally N = 2 in the computations below). As the typical p is assumed to be constant in a neighbourhood of the boundary, the usual necessary condition on the corresponding Neumann data is that they must integrate to zero over ∂Ω; consequently, the basis set was adjusted to make this so.


For example, when N = 2, the basis set was chosen to be the restrictions to the boundary of the functions x, y, x² − 2/3, xy, y² − 2/3. Admittedly, it is almost certain that this choice of basis functions is not optimal; it is, however, convenient, and the possibility of finding optimal choices is an interesting open question (see [4] for related work on adaptive change in the prescribed boundary currents to ensure 'maximal sensitivity').
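The zero-mean adjustment can be checked directly: the boundary average of x² over ∂Ω, with Ω = [−1, 1]², is (2/3 + 2/3 + 2 + 2)/8 = 2/3, which is the constant subtracted above. A small sketch (parametrizing the four sides and using Simpson's rule) is:

```python
import numpy as np
from scipy.integrate import simpson

def boundary_average(f, n=2001):
    """Average of f(x, y) over the boundary of the square [-1,1]^2,
    computed side by side with Simpson's rule."""
    t = np.linspace(-1.0, 1.0, n)
    sides = [(t, -np.ones_like(t)), (t, np.ones_like(t)),
             (-np.ones_like(t), t), (np.ones_like(t), t)]
    total = sum(simpson(f(x, y), x=t) for x, y in sides)
    return total / 8.0                       # perimeter of the square is 8

if __name__ == "__main__":
    print(boundary_average(lambda x, y: x**2))   # -> 0.666..., hence x^2 - 2/3
    print(boundary_average(lambda x, y: x * y))  # -> 0.0, xy needs no shift
```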

The various elliptic problems were solved by using the FIVE POINT STAR (finite-difference discretization) and LINPACK SPD BAND packages within the double precision version of the ELLPACK system (see [25]). One bonus from this arrangement was that ELLPACK automatically calculates the derivatives of any solution by means of built-in quadratic interpolation routines; this was used to compute the gradients ∇u_{p,i} and ∇û_{p,i} needed in the evaluation of ∇G_m(p).
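For readers without access to ELLPACK, a minimal stand-in for a five-point-star discretization of (1.2) with Dirichlet data might look as follows; the arithmetic averaging of p at cell edges, the direct sparse solve and the constant-coefficient check in the demo are choices of this sketch, not of the paper.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def solve_dirichlet(p, phi, hx):
    """Five-point finite-difference solve of -div(p grad u) = 0 on a square
    grid with u = phi on the boundary.  p and phi are (n x n) grid arrays;
    only the boundary values of phi are used."""
    n = p.shape[0]
    interior = np.zeros((n, n), dtype=bool)
    interior[1:-1, 1:-1] = True
    idx = -np.ones((n, n), dtype=int)
    idx[interior] = np.arange(interior.sum())

    rows, cols, vals = [], [], []
    rhs = np.zeros(interior.sum())
    for i in range(1, n - 1):
        for j in range(1, n - 1):
            k = idx[i, j]
            diag = 0.0
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                w = 0.5 * (p[i, j] + p[i + di, j + dj]) / hx**2  # edge coefficient
                diag += w
                if interior[i + di, j + dj]:
                    rows.append(k)
                    cols.append(idx[i + di, j + dj])
                    vals.append(-w)
                else:
                    rhs[k] += w * phi[i + di, j + dj]            # Dirichlet data
            rows.append(k)
            cols.append(k)
            vals.append(diag)
    A = sp.csr_matrix((vals, (rows, cols)), shape=(rhs.size, rhs.size))
    u = phi.astype(float).copy()
    u[interior] = spla.spsolve(A, rhs)
    return u

if __name__ == "__main__":
    n = 49
    x = np.linspace(-1.0, 1.0, n)
    hx = x[1] - x[0]
    X, Y = np.meshgrid(x, x, indexing="ij")
    p = np.ones((n, n))              # constant conductivity for a quick check
    phi = X * Y                      # xy solves (1.2) exactly when p == 1
    u = solve_dirichlet(p, phi, hx)
    print("max error:", np.abs(u - X * Y).max())
```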

Figure 1. 25 × 25 grid. (a) True P2. (b) Computed P2; 200 iterations. (c) True P4. (d) Computed P4; 200 iterations.


Figure 2. z = P1(x, y); 49 × 49 grid. (a) True P1. (b) N = 1; 200 iterations. (c) N = 2; 200 iterations. (d) N = 3; 200 iterations.

The sensitivity of the algorithm to a change in the number of basis functions is illustrated in figure 2. It seems that N = 2 (corresponding to five basis functions) suffices in these examples. In figure 3 one can see both the iterative stability and accuracy of the method. Here one is trying to image inclusions of different conductivities. It is clear that it is advantageous to let the code run to convergence. This is in marked distinction to some of the previous methods [14, 28, 30] wherein instabilities would take over after a modest number of iterations, especially with noise present. In most situations it is imperative that one be able to iterate an algorithm in order that one can squeeze out all the information in the data; methods that do not allow such iteration invariably perform less than optimally. With these other methods it was also difficult to evaluate inclusion size and impossible to evaluate magnitude [14, p 404 and figure 9]; as can be seen in figure 3(d) the present method fares well on both counts. As in [14] the convergence of the iterates is certainly not pointwise, but appears to be more like L^1 convergence or something similar.

Figure 3. z = P2(x, y); 25 × 25 grid. (a) True P2. (b) 10 iterations; L^1-error 0.18. (c) 200 iterations; L^1-error 0.14. (d) 3500 iterations; L^1-error 0.078.

As always, behaviour in the presence of noise is an important part of the testing process. In the present case synthetic noise was added to the synthetic voltages (it was assumed that the applied currents would be known fairly accurately, and the measurement error would concentrate mostly in the voltage part of the data). This noise, which was added to each node independently, was generated by a standard random number generator giving numbers uniformly distributed over the interval [−1, 1]. Some of the results may be seen in figure 4. Once again, the computed image showed a strong resilience in the presence of high iteration counts; as one should expect, there was degradation at higher noise levels, but even with noise as high as 10% and a fine grid, the image was still recognizable. We note that in practical EIT systems the noise level in the data can be as low as 1%.
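A sketch of this noise model (independent uniform samples on [−1, 1], added at each node and scaled to a given percentage of the voltage data) is given below; the particular scaling convention is an assumption of the sketch, since the text only specifies the uniform distribution.

```python
import numpy as np

def add_voltage_noise(voltages, level, rng=None):
    """Add synthetic measurement noise to boundary voltage data: independent
    samples uniform on [-1, 1], scaled so that `level` (e.g. 0.01 for 1%)
    is taken relative to the overall size of the data."""
    rng = np.random.default_rng() if rng is None else rng
    scale = level * np.max(np.abs(voltages))
    return voltages + scale * rng.uniform(-1.0, 1.0, size=np.shape(voltages))

if __name__ == "__main__":
    phi = np.sin(np.linspace(0.0, 2.0 * np.pi, 64))   # stand-in voltage trace
    noisy = add_voltage_noise(phi, 0.05)              # 5% noise
    print(np.max(np.abs(noisy - phi)) <= 0.05 * np.max(np.abs(phi)))
```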


Figure 4. z = P2(x, y); 25 × 25 grid; 2500 iterations. (a) 1% noise; L^1-error 0.094. (b) 5% noise; L^1-error 0.12. (c) 10% noise; L^1-error 0.18. (d) 10% noise; 49 × 49 grid.

Some representative run times (with N = 2, i.e. five basis functions) for the code running on a Sparc 1000 are shown in table 1. It is worth noting that the bulk of the computational load is taken up in computing the solutions u_{p,i} and û_{p,i}. The method is parallelizable to the extent that if one used a number of processors equal to the number, m, of basis functions employed, then the execution time of the code is essentially independent of m. This parallelization would considerably reduce the above run times. It is also true that as one is solving the nonlinear inverse problem directly (i.e. this is a 'fully nonlinear' method) the execution times are considerably longer than most methods that involve 'linearization' of the inverse problem (for example, methods involving use of the Born or Rytov approximation). The advantage of the fully nonlinear methods is generally greater accuracy and the absence of 'phantoms' and other artifacts; as computers become faster it is expected that the speed disadvantage should become less significant.
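The parallelization remark can be realized, for instance, by farming the m independent pairs of forward solves out to separate processes; solve_pair below is an assumed helper returning (u_{p,i}, û_{p,i}) for one voltage–current pair.

```python
from concurrent.futures import ProcessPoolExecutor

def forward_solves_parallel(p, data_pairs, solve_pair, max_workers=None):
    """Run the m independent Dirichlet/Neumann forward solves concurrently.
    Since the solves do not interact, the wall-clock time with m workers is
    roughly that of a single pair, as noted in the text.  The callable and
    its arguments must be picklable for process-based execution."""
    with ProcessPoolExecutor(max_workers=max_workers) as pool:
        futures = [pool.submit(solve_pair, p, phi, psi) for phi, psi in data_pairs]
        return [f.result() for f in futures]
```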


Table 1. Run times.

Grid size    Average time for one descent step (s)
11           7.4
21           23.7
35           81.2
49           191.9

As a final observation, in the present situation the coefficient function p in (1.2) is considered to be a real scalar function. This corresponds to assuming that the conductivity is 'isotropic', i.e., roughly, that the conductivity is the same in all directions. It is more realistic (though considerably more difficult) to consider the anisotropic case in which p becomes a positive definite symmetric matrix of functions (see [26]). One reason for the greater difficulty in this case centres on the fact that the Dirichlet–Neumann map Λ_p no longer uniquely determines the coefficient p. For n = 2 it is known [19] that a sufficiently smooth p is determined up to a diffeomorphic change of variables (equal to the identity on ∂Ω), and the result is widely believed to be true for n > 2, though not proven as yet. While there are no successful algorithms available that can handle this case, it is intriguing to note that one can formulate an analogue of the functional G in (1.7) for the anisotropic case, and more importantly, that the values G(p) are then invariant under the diffeomorphic change of variable mentioned above. Thus, one is essentially minimizing over equivalence classes of matrix functions modulo the aforementioned variable change. It is possible that such an algorithm might facilitate stable anisotropic reconstruction.

Acknowledgment

The author was supported in part by US National Science Foundation grant DMS-9505047.

References

[1] Alessandrini G 1988 Stable determination of conductivity by boundary measurements Appl. Anal. 27 153–72
[2] Barber D C and Brown B H 1984 Review article: applied potential tomography J. Phys. E: Sci. Instrum. 17 723–32
[3] Barber D C and Brown B H 1990 Progress in electrical impedance tomography Inverse Problems in Partial Differential Equations ed D Colton, R Ewing and W Rundell (Philadelphia, PA: SIAM) pp 151–64
[4] Bryan K and Vogelius M 1994 A computational algorithm to determine crack locations from electrostatic boundary measurements: the case of multiple cracks Int. J. Eng. Sci. 32 579–603
[5] Budinger T et al 1996 Mathematics and Physics of Emerging Biomedical Imaging (National Research Council, Institute of Medicine) (Washington, DC: National Academic Press)
[6] Cheney M and Isaacson D 1992 Invariant embedding, layer-stripping, and impedance Invariant Imbedding and Inverse Problems ed J P Corones et al (Philadelphia, PA: SIAM) pp 42–50
[7] Cohen-Bacrie C, Goussard Y and Guardo R 1997 Regularized reconstruction in electrical impedance tomography using a variance uniformization constraint IEEE Trans. Biomed. Imaging 6 562
[8] Friedman A and Isakov V 1989 On the uniqueness in the inverse conductivity problem with one measurement Indiana Univ. Math. J. 38 563–79
[9] Grisvard P 1985 Elliptic Problems in Nonsmooth Domains (Boston, MA: Pitman)
[10] Guardo R, Boulay C, Murray B and Bertrand M 1991 An experimental study in electrical impedance tomography using backprojection reconstruction IEEE Trans. Biomed. Eng. 38 617–26
[11] Hua P, Woo E J, Webster J G and Tompkins W J 1991 Iterative reconstruction methods using regularization and optimal current patterns in electrical impedance tomography IEEE Trans. Med. Imaging 10 621–8
[12] Jain H, Isaacson D and Newell J C 1997 Electrical impedance tomography of complex conductivity distributions with non-circular boundary IEEE Trans. Biomed. Imaging 44 1051
[13] Knowles I 1997 Parameter identification for elliptic problems with discontinuous principal coefficients, submitted
[14] Kohn R V and McKenney A 1990 Numerical implementation of a variational method for electrical impedance tomography Inverse Problems 6 389–414
[15] Kyriacou G A, Kourkourlis C S and Sahalos J N 1993 A reconstruction algorithm of electrical impedance tomography with optimal configuration of the driven electrodes IEEE Trans. Med. Imaging 12 430–8
[16] Loh W W and Dickin F J 1996 Improved modified Newton–Raphson algorithm for electrical impedance tomography Electron. Lett. 32 206
[17] Metherall P, Barber D C and Brown B H 1996 Three dimensional electrical impedance tomography Nature 380 509
[18] Nachman A I 1988 Reconstructions from boundary measurements Ann. Math. 128 531–76
[19] Nachman A I 1995 Global uniqueness for a two-dimensional inverse boundary value problem Ann. Math. 142 71–96
[20] Neuberger J W 1997 Sobolev Gradients in Differential Equations (Lecture Notes in Mathematics 1670) (New York: Springer)
[21] Newell J C, Gisser D G and Isaacson D 1989 An electric current tomograph IEEE Trans. Biomed. Eng. 36 828–33
[22] Paulson K, Lionheart W and Pidcock M 1993 Optimal experiments in electrical impedance tomography IEEE Trans. Med. Imaging 12 681–6
[23] Payne L E 1975 Improperly Posed Problems in Partial Differential Equations (Philadelphia, PA: SIAM)
[24] Press W H, Flannery B P, Teukolsky S A and Vetterling W T 1989 Numerical Recipes: The Art of Scientific Computing (Cambridge: Cambridge University Press)
[25] Rice J R and Boisvert R F 1985 Solving Elliptic Problems Using ELLPACK (Berlin: Springer)
[26] Sylvester J 1990 An anisotropic inverse boundary value problem Commun. Pure Appl. Math. 38 201–32
[27] Vauhkonen M, Kaipio J P and Karjalaonen P A 1997 Electrical impedance tomography with basis constraints Inverse Problems 13 523
[28] Wexler A, Fry B and Neumann M 1985 Impedance-computed tomography algorithm and system Appl. Opt. 24 3985–92
[29] Woo E J, Hua P, Webster J G and Tompkins W J 1993 A robust image reconstruction algorithm and its parallel implementation in electrical impedance tomography IEEE Trans. Med. Imaging 12 137–46
[30] Yorkey T J, Webster J G and Tompkins W J 1987 Comparing reconstruction algorithms for electrical impedance tomography IEEE Trans. Biomed. Eng. 34 843–52
[31] Ziemer W P 1989 Weakly Differentiable Functions (Berlin: Springer)

