Date post: | 07-Jan-2016 |
Category: |
Documents |
Upload: | frank-vega-ortega |
View: | 11 times |
Download: | 0 times |
of 63
4Inverse Problemsin Potential Scattering
Khosrow Chadan
4.1. Introduction
Since the pioneering work of Gardner, Greene, Kruskal, and Miura in 1968,
Zakharov and Shabat in 1972, and others, who showed that inverse problem
techniques of potential scattering can be used to solve several nonlinear partial
dierential equations of mathematical physics (Kortewegde Vries equation for
the motion of shallow water waves, nonlinear Schrodinger equation for solitons
in optical bers, SineGordon equation, Boussinesq equation, etc.) and the
universality of the concept of solitons, a great amount of work has been devoted
to these problems. One is now able to solve a great number of these equations
with applications to various branches of physics where nonlinear equations are
derived by modeling important physical problems.
In other domains of physics also, such as electromagnetism (radar detec-
tion, medical imaging by electric elds) or acoustics (applications in geophysics,
tomography), other techniques have also been developed during recent years,
and are now used currently with great success.
The purpose of this chapter is to give an account of the inverse prob-
lem techniques in potential scattering. There are now many good books on
scattering theory, inverse problems, and solvable nonlinear partial dierential
equations. For those who wish to learn more about these problems and related
subjects, a list of references is given at the end.
4.2. Physical Background and Formulation of the Inverse Scattering
Problem
We consider here the scattering of a nonrelativistic particle by a xed center
of force, or what amounts to the same, the scattering of two such particles
by each other, after the motion of the center of mass has been removed. In
appropriate units of length, time, and mass, and when the interaction between
131
Dow
nloa
ded
11/2
2/13
to 1
90.1
44.1
71.7
0. R
edist
ribut
ion
subje
ct to
SIAM
licen
se or
copy
right;
see h
ttp://w
ww.si
am.or
g/jou
rnals/
ojsa.p
hp
132 Inverse Problems
the two particles is local and represented by a local potential V (x), we have
to deal with the time-dependent Schrodinger equation
(2.1) i
t(t) = H(t),
where the Hamiltonian is given by the sum of kinetic energy and potential
energy:
(2.2) H = + V,
being the Laplacian in three dimensions. According to the rst principle of
quantum mechanics, the state of a system at each instant of time is represented
by a vector (the wave function) (t) in the Hilbert space of states. In the
case we deal with here, the Hilbert space of states is just L2(IR3). Since the
Hamiltonian is an unbounded operator in this space, care must be taken about
the domain of H. This domain will depend, of course, on V and is usually
extremely complicated, except when V L2, in which case D(H) = H2, H2being the Sobolev space. A way to avoid the diculties which arise is to
use quadratic forms, which are perfectly well suited for dening self-adjoint
operators which play a crucial role in quantum mechanics. Indeed, another
basic principle of quantum mechanics states that to each observable of the
system (positions of the particles, their momenta, energy) there corresponds
a self-adjoint operator in the Hilbert space of states. Observables must have
only real eigenvalues (or generalized eigenvalues). For details, see [5], [6].
For physical reasons, the potential V must be a real function. The rst
point is to look at the self-adjointness of the Hamiltonian. Indeed, the Hamil-
tonian operator corresponds to the total energy of the system, and since the
total energy is an observable, it must correspond to a self-adjoint operator,
which is one of the basic principles of quantum mechanics.
There are a variety of conditions under which H is self-adjoint. An illus-
trating example in one dimension is given in Chapter 1. Since we are interested
in scattering theory, i.e., particles move freely when they are far apart from
each other, we must assume that V 0 as |x| . One such condition isV L2(IR3). A better one, more suitable for scattering theory, is V L1L2.It can been shown that this implies the condition
(2.3)
|V (x)| |V (y)||x y|2 d
3xd3y
Inverse Problems in Potential Scattering 133
Once the self-adjointness of the Hamiltonian is established, we must look at
its spectrum. Under the above conditions, for instance, the Rollnik condition
(2.3), one can show that the spectrum is made of two parts: a continuous part
(absolutely continuous) extending from 0 to +, and a discrete part, contain-ing a nite number of eigenvalues, each with nite multiplicity, all negative.
For the proof of this fact in the case where V is a characteristic function of an
interval see 1.8 in Chapter 1. The negative eigenvalues correspond to boundsystems, when the two particles stay together for all times. For this reason,
these states are called bound-states.
What can be said then on the (absolutely) continuous part of the spectrum?
A careful study shows that it corresponds to scattering states, i.e., to (x, t)
such that for any bounded region of space ,
(2.4) limt
|(x, t)|2 d3x = 0,
where
(2.5) (x, t) =
eiEt (k, x) f(k) d3k,
E = k2, f(k) L2(IR3), and (k, x) is the solution of
(2.6) H(k, x) = E(k, x).
What one would like to have is that (2.4) holds for all f(k) L2. This wouldmean that no matter how the wave-packet (x, t) is made, particles go far
apart from each other when time becomes large, so that the probability of
nding them together in an arbitrary nite region of space goes to zero. If this
is the case, then all of the continuous spectrum is made of scattering states,
and this is called asymptotic completeness.
Asymptotic completeness can be shown to hold also under various condi-
tions on the potential. For instance, the condition V L1 L2 is sucientto guarantee asymptotic completeness. Another condition is again the Rollnik
condition (2.3).
Once the asymptotic completeness is established, one can go from the time-
dependent description (2.1) to the time-independent description. Combining
the Schrodinger equation (2.6) with the asymptotic condition that, at t = ,particles move toward each other with relative momentum k (more precisely,
that we have only an incoming plane-wave without any spherical incoming
wave), one is lead to the celebrated LippmannSchwinger equation (k = |k|):
(2.7a) (k, x) = eik.x 14
eik|xy|
|x y| V (y) (k, y) d3y.
Dow
nloa
ded
11/2
2/13
to 1
90.1
44.1
71.7
0. R
edist
ribut
ion
subje
ct to
SIAM
licen
se or
copy
right;
see h
ttp://w
ww.si
am.or
g/jou
rnals/
ojsa.p
hp
134 Inverse Problems
If the solution of this integral equation, assumed to be unique, is then used in
(2.5), one can show that, when t +, the wave-packet is made of the sameincoming wave-packet, attenuated, plus an outgoing spherical wave, which is
the scattered wave produced by the interaction at nite times. Equivalently,
one can show that the solution of (2.7a) satises the asymptotic condition (the
Sommerfeld radiation condition)
(2.7b) limr r
(sr
iks)
= 0 , r = |x|,
where s = (k, x) eikx.This integral equation has many defects. The rst one is that its kernel
is singular; it becomes innite at x = y. However, this singularity is mild
and there are well-known mathematical tools to deal with it. It is easily seen
that it is sucient to iterate the integral equation (2.7) once in order to get a
nite kernel. The second defect is that the inhomogeneous term is not square-
integrable. This also can be easily taken care of using the BirmanSchwinger
trick, which is to multiply both sides of (2.7) by |V |1/2. Writing sgn(V ) =V/|V |, which is a bounded function having two values 1, we then get
(2.8) (k, x) = 0(k, x) 14
G(k, x, y) sgn(V (y)) (k, y) d3y,
where
(2.9) 0(k, x) = |V (x)|1/2eik.x
and
(2.10) G(k, x, y) = |V (x)|1/2 eik|xy|
|x y| |V (y)|1/2.
When V L1, 0 becomes L2, whatever k (real) is. If the Rollnik condition(2.3) holds also, the operator with kernel G can easily be seen to be a Hilbert
Schmidt operator, and especially a compact operator, for all real k 0, andthis simplies life greatly because then we can use all the classical theorems
about Fredholm integral equations and show the existence and uniqueness of
the solution. This is why the Rollnik condition plays an important role in
scattering theory. For a detailed study of all these problems, we refer the
reader to Simon [5].
After having shown the existence and the uniqueness of the solution of
(2.7) or (2.8) for all real k, one can then show that the scattering amplitude
A(k, k), dened by the asymptotic form of (k, x) when x:
Dow
nloa
ded
11/2
2/13
to 1
90.1
44.1
71.7
0. R
edist
ribut
ion
subje
ct to
SIAM
licen
se or
copy
right;
see h
ttp://w
ww.si
am.or
g/jou
rnals/
ojsa.p
hp
Inverse Problems in Potential Scattering 135
(2.11) (k, x) = eik.x +A(k, k)eik|x|
|x| + ,
where k = k(x/|x|), is given by
(2.12) A(k, k) = 14
eik.x V (x) (k, x)d3x.
In short, one can take the limit x in (2.7a) under the integral sign. Notethat k2 = k
2= E.
After scattering amplitude is found, one shows that the cross-section d,
i.e., the probability of nding the particles moving outward far from the scat-
tering center, with momentum k, in the solid angle d around the directionk per unit of time and per unit ux of the incoming particles with momentumk, is given by
(2.13) d =A(k, k)2 d.
The quantity d is what is experimentally measured by putting counters
far from the scattering center to measure the ux of the outgoing particles.
An important problem particle physicists are confronted with is to now nd A
from d, i.e., to nd A from its absolute value. This important problem, which
is not yet solved in its full generality, lies outside the scope of this chapter, and
we refer the reader to [7, Chap. 10] for a detailed study of it and references.
So far, we have been dealing with the (unique) solution of (2.7) for real k,
k2 = E belonging to the continuous part of the spectrum of the kernel (the
continuous part of the spectrum of H). As for those in the discrete spectrum
(isolated eigenvalues of H) corresponding to the solutions of the homogeneous
equation, i.e., (2.7) without the rst term in the righthand side, they are all
situated on E 0, are nite in number, and each have nite multiplicity. Letus call them E1 < E2 < E3 < < En 0.
The general inverse problem, which is nding the potential from the scat-
tering data {A(k, k)|k, k IR3,k2 = k2
} {Ej |j = 1, 2, . . . , n} ,
is a very complicated problem and has not yet been completely solved, although
great progress has been made in recent years toward its complete solution. For
the state of the art on this general problem, we refer the reader to [9].
The simplest inverse problems, which have been given complete satisfactory
solutions and are the subject of these lectures, are obtained when the potential
has spherical symmetry: V (x) = V (|x|). In this case, one can decompose thewave function, the solution of (2.7), in partial waves:
Dow
nloa
ded
11/2
2/13
to 1
90.1
44.1
71.7
0. R
edist
ribut
ion
subje
ct to
SIAM
licen
se or
copy
right;
see h
ttp://w
ww.si
am.or
g/jou
rnals/
ojsa.p
hp
136 Inverse Problems
(2.14) (k, x) =4
kr
=0
(2+ 1)iP(cos ) (k, r), |x| = r,
where is the angle between r and k : cos = (k.r)/kr, and the solution
of the radial Schrodinger equation
(2.15)
[ d
2
dr2+(+ 1)
r2+ V (r)
](k, r) = E (k, r),
to which one has to add the boundary condition
(2.16) (k, 0) = 0.
The asymptotic behavior of when r is then shown to be
(2.17) (k, r) = ei sin
(kr 1
2 +
)+ o(1).
The quantity , called the phase shift, is a real function of k for each . In
terms of , the scattering amplitude can be written
(2.18) A(k, k) =1
k
=0
(2+ 1) ei sin P(cos ),
where cos = (k k)/k2, since k2 = k2 = E, E being the energy of theincoming or outgoing particles. The partial-wave Hamiltonian for each ,
(2.19) H = d2
dr2+(+ 1)
r2+ V (r)
under suitable conditions on V , is a self-adjoint operator in L2(0,), and itsspectrum is made of a continuum extending from 0 to +, and a few negativeeigenvalues E1 < E2 < < En 0. One suitable condition on the potentialis the following:
(2.20)
0
r|V (r)|dr
Inverse Problems in Potential Scattering 137
Here, two natural inverse problems occur. The rst one is at xed en-
ergy. We perform scattering experiments at a given energy E and we get the
scattering amplitude (2.18), where k =E. Projecting A on the Legendre
polynomials, we obtain, in principle, all the phase shifts . One may then ask
the following question: knowing all these numbers , would it be possible to
obtain the potential? This problem is of some interest in nuclear physics and
has been completely solved. However, the solution is not in general unique and
there are many ambiguities.
The second inverse problem is at xed . We are dealing with (2.15) for
a given . Knowing V (r), we can solve it and get (k) for all values of k, as
well as the energies of the possible bound states (negative point spectrum of
H). Conversely, knowing (k) for all k [0,) and the negative spectrumE1 En for this same , we would like to nd the potential. This is the prob-lem that we are going to study in this chapter. We shall see how to compute
the potential from the scattering data {(k)|k [0,)} {E1, E2, En}.This inverse problem on the half line r [0,), when extended to the full linex (,), has been instrumental in the solution of many partial dierentialequations of mathematical physics.
4.3. Scattering Theory for Partial Waves
We have to study the radial Schrodinger equation for a given = 0, 1, 2, . . . :
(3.1)d2
dr2(k, r) + E =
[V (r) +
(+ 1)
r2
],
(3.2) r [0,), (k, 0) = 0, E = k2,
and nd the asymptotic properties of the solution for large r, (2.17), as well
as the properties of the phase shift (k) as function of the momentum k.
We shall begin rst with some general remarks on the dierential operator
(the Hamiltonian)
(3.3) H = d2
dr2+(+ 1)
r2+ V (r)
together with the Dirichlet boundary condition given in (3.2). It can be shown
that if the potential, assumed to be real, satises the condition
(3.4)
0
r |V (r)| dr
138 Inverse Problems
Then, depending on the sign of the potential V , it may or may not have a
point spectrum on the negative E-axis. Indeed, assume rst that the potential
is positive everywhere (repulsive potential in the terminology of physicists).
Now, we know that the operator d2/dr2 acting in L2(0,) with Dirichletboundary condition at r = 0 is a positive operator. Therefore, if the potential
is positive everywhere, the operator H given by (3.3) with Dirichlet boundary
condition at r = 0 is also positive for all 0. It follows that H cannot havenegative eigenvalues. In order to have negative eigenvalues, the total potential
(3.5) Vtot V (r) + (+ 1)r2
must be negative somewhere (attractive in the language of physicists). If this
negative part is strong enough, one may then have some negative eigenvalues.
In any case, if (3.4) is satised, one can show that the point spectrum is nite,
nondegenerate, and bounded from below. The number of these eigenvalues
is denoted by n and we may order them in increasing order: < En 0,and the half-circle of radius K in the upper k-plane, on which k = K exp(i),
0 , we get
(t, r) contour
(k, r) eikt dk =
KK
(k, r) eikt dk
+i
0
(K ei, r) eikt K ei d = 0.
Now, if t > r, the last integral goes to zero as K because of (3.24) andthe damping factor exp(Im k t) = exp(Kt sin ). Therefore,
(3.27) (t, r)
(k, r) eikt dk = 0, t > r
no matter how close t is to r. Moreover, since the integrand is L1(,),we know that (t, r) is a continuous function of t. Therefore, (r, r) = 0 by
continuity. The same conclusions can be reached for t < r if we close ourcontour of integration by a large half-circle in the lower half of the k-plane.
And again, (r, r) = 0.(t, r), for each real nite value of r, is a continuous function of t and
vanishes outside the interval (r, r). Inverting (3.27), which causes no conver-gence problem, and remembering that (k, r) is a real and even function of k
(real for real k), we get
(k, r) = (k, r) sin krk
=1
2
rr
(t, r) eikt dt
=1
2
rr
(t, r) eikt dt
Dow
nloa
ded
11/2
2/13
to 1
90.1
44.1
71.7
0. R
edist
ribut
ion
subje
ct to
SIAM
licen
se or
copy
right;
see h
ttp://w
ww.si
am.or
g/jou
rnals/
ojsa.p
hp
144 Inverse Problems
=1
rr
(t, r) cos kt dt =1
r0
[(t, r) + (t, r)] cos kt dt
=2
r0
[(t, r)] d
(sin kt
k
)= 2
r0
t[(t, r)]
sin kt
kdt.
Writing
(3.28) K(r, t) 2
t(t, r),
we have the following theorem.
Theorem 4.3.2. The regular solution has the integral representation
(3.29) (k, r) =sin kr
k+
r0
K(r, t)sin kt
kdt,
where K(r, t), given by (3.28), is independent of k and is an integrable function
of t.
We must show that (3.28) is meaningful, that is, we can dierentiate
with respect to t. This is indeed the case for r < t < r. The reason is thatwe have (3.24) and (3.21). Dierentiating (3.27) with respect to t under the
integral sign, we get an extra k-factor. So, for t < r, we still have an oscillating
integrand which goes to zero as k . By the Abel theorem, the integralis (simply) convergent. For t = r, the integral is usually innite, but what we
really need is the integrability of K(r, t) at t = r, that is, of (/t)(t, r) at
t = r, and this also can be shown with a more careful analysis. We refer the
reader to the literature for more details; see [7].
As is obvious from (3.29), we could have started from a Fourier sine trans-
form instead of (3.27) to dene K directly:
(3.28) K(r, t) = 0
[(k, r) sin kr
k
]sin kt
k
(2
k2)dk
=1
2i
[(k, r) sin kr
k
]eikt
k
(2
k2)dk.
The reason for introducing the factor (2/)k2 will become clear in 4.3.3. Wecan now close the contour again by a large half-circle in the upper k-plane,
and we nd that K(r, t) = 0 for all t > r. Inverting the Fourier sine transform
(3.28), we get, of course, (3.29). Both (3.28) and (3.28) are useful, as we shallsee later, for generalizing (3.29), and it is easily checked that K dened by
(3.28) is identical to (3.28).
Dow
nloa
ded
11/2
2/13
to 1
90.1
44.1
71.7
0. R
edist
ribut
ion
subje
ct to
SIAM
licen
se or
copy
right;
see h
ttp://w
ww.si
am.or
g/jou
rnals/
ojsa.p
hp
Inverse Problems in Potential Scattering 145
From (3.28) one can show that
(3.29) K(r, 0) = 0,[t
tK(r, t)
]t=0
= 0.
This will also be shown by a dierent method below, based on (3.34).
Theorem 2 is, in essence, the following celebrated PaleyWiener theorem
for entire functions of exponential type which are L2 on the real axis.
Theorem 4.3.3 (PaleyWiener). The entire function f(z) is of expo-
nential type and belongs to L2(,) on the real axis if and only if
f(z) =
eizt (t) dt,
where
(t) L2(, ).Moreover, if f(z) is given by the above representation and (t) does not vanish
almost everywhere in any neighborhood of (or ), then f(z) is of order 1and type (that is, not of exponential type less than ).
What we have shown for (k, r) is, in fact, nothing more than the rst
half of the above theorem. The second part of the theorem guarantees that
if the type of the function is , then the support of cannot be smaller than
(, ) at least at one end of the interval. This is indeed the case here for K att = r. We should note here that is analytic in k and goes to zero as k .Therefore, if it is L1 at , it is also L2 there.
The integral representation (3.29) is the rst of the three ingredients for
solving the inverse problem. The second is the relation between the kernel K
and the potential, and we shall establish it now.
So far, (3.29) is purely formal, in the sense that any exponential function of
k with appropriate properties can be written in that form, without any relation
to a second-order dierential equation. We must now take into account that
, given by the above representation, indeed satises the Schrodinger equation
(3.9). Dierentiating twice with respect to r under the integral sign and doing
two integrations by parts (with respect to t), we nd formally
(3.30)2K
r2
2K
t2= V (r) K(r, t), 0 t r
and
(3.31) V (r) = 2d
drK(r, r) = 2
[K
r+K
t
]t=r
Dow
nloa
ded
11/2
2/13
to 1
90.1
44.1
71.7
0. R
edist
ribut
ion
subje
ct to
SIAM
licen
se or
copy
right;
see h
ttp://w
ww.si
am.or
g/jou
rnals/
ojsa.p
hp
146 Inverse Problems
provided K(r, 0) = 0 and (tK|t)|t=0 = 0. The relation (3.31) is the secondingredient for the solution of the inverse problem. Equation (3.30) is a partial
dierential equation of hyperbolic type, and we consider it so far in 0 t r.However, if K(r, 0) = 0, we can extend the denition of K for r t 0 justby writing
(3.32) K(r,t) = K(r, t).
In this way, K becomes an odd function of t in r t r. Now, the twocharacteristic curves of our hyperbolic equation are just the lines t = r.Since K is odd in t, we now have K(r,r) = K(r, r), r 0. However, thislast quantity is, by (3.31), just the integral of V . If V is integrable at the
origin, which may not be the case in general if we assume only (3.4), then
(3.33) K(r, r) = K(r,r) = 12
r0
V (t) dt.
If we have only (3.4), we can keep the dierential version (3.31).
In any case, we now have a hyperbolic equation in the fundamental domain
r t r bounded by two characteristics t = r, and we are given the valueof the function on these two characteristics, or its total derivative with respect
to r. In both cases, it is known that the hyperbolic equation has a unique
solution under our conditions on these boundary values. Similar conclusions
were also reached in Chapter 3.
Remark. Formula (3.29) should not be confused with (3.12). The second
one is an (Fourier) integral representation and the rst is a Volterra integral
equation. Given the potential V , this integral equations provides us with the
solution of the dierential equation (3.9). Alternatively, given V , we can
solve the partial dierential equation (3.30), which is now independent of E
(or k), and from its solution K nd from (3.29).
An equivalent way of doing the same thing is to use the integral equation
(3.12) instead of the dierential equation (3.9) since the two are equivalent.
Replacing (3.29) into (3.12) and taking the Fourier sine transform, we get the
integral equation in t, for each xed r,
(3.34) K(r, t) =1
2
r+t2
rt2
V (s)ds+
r+t2
rt2
ds
rt2
0
V (s+u) K(s+u, su) du,
which can also be used to prove the existence and uniqueness of K. Setting
t = r, we get, as expected, (3.33) i.e., the integral version of (3.31). Setting
t = 0 in (3.34), we check that K(r, 0) = 0. We do the same for (tK/t)|t=0 =0. Dierentiating twice with respect to r and t, we nd, as expected, (3.30).
Dow
nloa
ded
11/2
2/13
to 1
90.1
44.1
71.7
0. R
edist
ribut
ion
subje
ct to
SIAM
licen
se or
copy
right;
see h
ttp://w
ww.si
am.or
g/jou
rnals/
ojsa.p
hp
Inverse Problems in Potential Scattering 147
To show the existence and uniqueness of the solution of (3.34), one can
again use the iteration method, and everything goes through without any prob-
lem. One can then check all the assumptions made on K and its derivatives
and get the bound
(3.35) |K(r, t)| 0, E > 0,
(3.48b)
0
(E, r) (
E, r)dr =
(E E)(d/dE)
.
The above relations can be interpreted as the generalized orthogonality
relation between the (generalized) eigenfunctions. For the true eigenfunctions
(point spectrum), we have, of course,
(3.48c)
0
j(r) (r)dr = C1j j,
and
(3.48d)
0
(E, r) j(r) dr = 0, E 0.
When we introduce the physical solution (3.74) later, we will see that (3.48b)
can be written similarly to (3.40).
4.3.4. The Jost Solution. The Jost solution f(k, r) of the Schrodinger
equation (3.9) is dened by the boundary conditions at innity:
Dow
nloa
ded
11/2
2/13
to 1
90.1
44.1
71.7
0. R
edist
ribut
ion
subje
ct to
SIAM
licen
se or
copy
right;
see h
ttp://w
ww.si
am.or
g/jou
rnals/
ojsa.p
hp
Inverse Problems in Potential Scattering 151
(3.49)
limr eikrf(k, r) = 1,
limr eikrf (k, r) = ik.
Again, we can try to combine the dierential equation with these boundary
conditions. The procedure is the same as for the regular solution . It is easily
veried, formally, that we get
(3.50) f(k, r) = eikr r
sin k(r t)k
V (t) f(k, t) dt
and
(3.51) f (k, r) = ikeikr r
cos k(r t) V (t) f(k, t) dt.
The boundary conditions are satised if the integrals are convergent.
To solve the Volterra integral equation (3.50), we again use the iteration
method, starting from
(3.52) f (0)(k, r) = eikr
and writing
(3.53) f (n)(k, r) = eikr r
sin k(r t)k
V (t) f (n1)(k, t) dt.
Using the bounds
(3.54)f (0)(k, r) = eIm k.r
and
(3.55)
sin k(r t)k < C (t r)1 + k(t r)e|Im k|(tr),
valid for t r, we nd, for Im k 0,
(3.56)f (n)(k, r) eIm k.r [1 + CJ(k, r) + C22!J2(k, r) + + C
n
n!Jn(k, r)
],
Dow
nloa
ded
11/2
2/13
to 1
90.1
44.1
71.7
0. R
edist
ribut
ion
subje
ct to
SIAM
licen
se or
copy
right;
see h
ttp://w
ww.si
am.or
g/jou
rnals/
ojsa.p
hp
152 Inverse Problems
where
(3.57a) J(k, r) =
r
t
1 + |k|t |V (t)| dt 0
t |V (t)| dt 0 for
each xed value of r, and the convergence is uniform, the Jost solution is
also holomorphic in Im k > 0, and continuous in Im k 0. Also, sincef (0)(k, r) = [f (0)(k, r)], the same is true for f (n), and therefore for f . Notethat if Im k 0, we have also Im(k) 0, and both k and k are inthe domain of denition of f . All these can be summarized in the following
theorem.
Theorem 4.3.4.
(a) Under the condition (3.4), the Jost solution dened by boundary con-
ditions (3.48) and (3.49) exists and is unique.
(b) It is analytic in Im k > 0 and continuous in Im k 0, and we havethe bound (3.58).
(c) In Im k 0, we have
(3.60) f (k, r) = [f(k, r)] .
(d) In the closed upper plane Im k 0, and for each xed nite r, we havethe asymptotic behaviors
Dow
nloa
ded
11/2
2/13
to 1
90.1
44.1
71.7
0. R
edist
ribut
ion
subje
ct to
SIAM
licen
se or
copy
right;
see h
ttp://w
ww.si
am.or
g/jou
rnals/
ojsa.p
hp
Inverse Problems in Potential Scattering 153
(3.61) f(k, r) = eikr [1 + o(1)] , |k| ,
(3.62) f (k, r) = ikeikr [1 + o(1)] , |k| .
Remark. The integral in (3.50) extends to innity. Therefore, contrary to
the case of , we must restrict k to Im k 0. Otherwise, in the sequence of suc-cessive iterations f (n), we would get integrals of the form
r|V (t)| exp[(Im k
+|Im k|)(t r)]dt, which are generally divergent in Im k < 0. In the upperk-plane, the exponential disappears, and we get (3.56) and (3.57).
Now that we have precise solutions and f of the dierential equation,
corresponding to boundary conditions at two dierent points, we can try to see
the relations which may exist between them. This leads us to the denition
of the Jost function, which plays a crucial role in both the direct and inverse
problems.
4.3.5. Jost Function. When k is real and = 0, the two solutions f+ =f(k, r) and f = f(k, r) are independent of each other. Indeed, if we calculatetheir Wronskian, which is independent of r and can therefore be calculated at
r =, and using (3.48) and (3.49), we nd
(3.63) W [f, f+] ff + f f+ = 2ik = 0.
Therefore, any other solution of the dierential equation can be written as a
linear combination, with coecient independent of r, of f+ and f, in the sameway as cosine and sine are written as linear combinations of eikr and eikr.
For instance, the regular solution can be written as
(k, r) =1
2ik[G(k)f+ + F (k)f] ,
where we have introduced 2ik for convenience. Taking into account that is
an even function of k, we nd G(k) = F (k). So, nally, for all real k,
(3.64) (k, r) =1
2ik[F (k)f(k, r) F (k)f(k, r)] .
The function F (k) is called the Jost function. It is given, by denition, by
the Wronskian
(3.65) F (k) = W [f(k, r) , (k, r)] .
Dow
nloa
ded
11/2
2/13
to 1
90.1
44.1
71.7
0. R
edist
ribut
ion
subje
ct to
SIAM
licen
se or
copy
right;
see h
ttp://w
ww.si
am.or
g/jou
rnals/
ojsa.p
hp
154 Inverse Problems
This denition of F (k) makes sense for all k in Im k 0. From the symmetryproperties of and f , (3.22), and (3.60), it follows that
(3.66) F (k) = [F (k)] , Im k 0.
Since is holomorphic for all nite k, and f is holomorphic in Im k > 0 and
continuous in Im k 0, it follows that the Jost function is holomorphic inIm k > 0, and continuous in Im k 0. When k is real positive, we can write
(3.67) F (k) = |F (k)| ei(k), k 0,
where (k) is the phase of F , dened modulo 2. We will soon see how todene it in a precise way.
From the bounds for f and f , it is easily veried that rf (k, r) 0 asr 0. This, when used in (3.65), gives
(3.68) F (k) = f(k, 0).
Another denition of the Jost function is obtained by using (3.12) in (3.65)
and letting r . From the boundary conditions for f , (3.48), and (3.49), itfollows that
(3.69) F (k) = 1 +
0
eikr V (r) (k, r) dr.
This integral representation is quite useful. If we use the bound (3.23) for
in (3.69), we immediately nd the asymptotic behavior of F (k):
(3.70) F (k) = 1 + o(1), |k| , Im k 0.
We can therefore choose, accordingly, the determination
(3.71) () = 0
and then nd, by continuity, (k) for nite values of k 0. Because of (3.66),we also have
(3.72) (k) = (k), k 0.
Another property of F(k) is that it never vanishes on the real axis. Indeed,
if F (k0) = 0, k0 > 0, then, by (3.66), F (k0) = 0. Then it follows from
Dow
nloa
ded
11/2
2/13
to 1
90.1
44.1
71.7
0. R
edist
ribut
ion
subje
ct to
SIAM
licen
se or
copy
right;
see h
ttp://w
ww.si
am.or
g/jou
rnals/
ojsa.p
hp
Inverse Problems in Potential Scattering 155
(3.64) that (k0, r) = 0 for all r, and this contradicts the boundary conditions
(k0, 0) = 1 because and are both continuous functions of r.
4.3.6. Zeros of F (k) in Im k > 0. Suppose that F (k0) = 0, Im k0 > 0.
This means, according to (3.65), that (k0, r) and f(k0, r) are not independent
solutions. Therefore, (k0, r) = A f(k0, r), where A is a constant = 0. Since,for r , the righthand side behaves as eIm k0.r, the same is true for .It follows that (k0, r) L2(0,). Since satises the dierential equationand the boundary conditions, it follows that E0 is an eigenvalue. However,
we saw that the dierential operator together with the boundary condition is
self-adjoint. This means that E0 = k20 is real and negative and, therefore, that
Re k0 = 0, Im k0 > 0. E0 is one of the point spectrum of the Hamiltonian,
{Ej , j = 1, n}, and (k0, r) is its eigenfunction.Conversely, suppose that Ej = 2j is one of the eigenvalues. Then one
must have F (ij) = 0. Indeed, if this were not the case, it would mean that
j = (ij , r) and fj = f(ij , r) are independent of each other. By denition,
j is the eigenfunction corresponding to the eigenvalue Ej because we saw
that, for all nite values of E, the solution is unique. Therefore, j is square
integrable, which means that j() = 0. From the dierential equation,it follows that j () = 0. Now since both j and j are continuous andj() = j () = 0, it follows that j() = 0. If we now use these resultsin (3.65) for r , we nd that F (ij) = 0. There is therefore a one-to-onecorrespondence between the eigenvalues Ej = 2j , and the zeros of the Jostfunction on the positive imaginary axis. It can also be shown that all these
zeros are simple, and that the spectrum is nondegenerate. We therefore have
the following theorem.
Theorem 4.3.5.
(a) The Jost function F (k), dened by (3.65), or (3.68) or (3.69), is holo-
morphic in Im k > 0 and continuous in Im k 0.(b) We have the symmetry property (3.66) and the asymptotic property
(3.70).
(c) The phase of the Jost function on the real axis can be dened by conti-
nuity for all real k from (3.71).
(d) There is a one-to-one correspondence between the eigenvalues Ej =
(ij)2 and the zeros kj of the Jost function on the positive imaginary axis:
kj = ij, j > 0, j = 1, 2, . . . , n.
(e) There are no other zeros of F in Im k 0. In particular, F cannotvanish for real values of k.
Remark. It may happen that one of the zeros of the Jost function is at
k = 0. This does not correspond to a true bound state with a square integrable
eigenfunction, but to a resonance at zero energy.
4.3.7. The Physical Solution and the Phase Shift. We now consider
Dow
nloa
ded
11/2
2/13
to 1
90.1
44.1
71.7
0. R
edist
ribut
ion
subje
ct to
SIAM
licen
se or
copy
right;
see h
ttp://w
ww.si
am.or
g/jou
rnals/
ojsa.p
hp
156 Inverse Problems
the asymptotic behavior of the regular solution φ(k, r) for k real positive and
r → ∞. Using the formula (3.64) together with (3.48) and (3.67), we find

(3.73) φ(k, r) = (|F(k)|/k) sin(kr + δ(k)) + ⋯ , r → ∞.
Now both the physical solution ψ given by (2.17) and φ vanish at r = 0.
They are therefore proportional to each other and we have, for Im k ≥ 0,

(3.74) ψ(k, r) = (k/F(k)) φ(k, r).

This means that the phase shift, defined for k real as the phase of ψ,
is identical to δ(k), which is minus the phase of the Jost function. In (3.74),
although φ is holomorphic in all of the finite k-plane, F(k) is analytic only in
Im k > 0. Therefore, in general, the relation makes sense only in Im k ≥ 0. In this half-plane, the poles of ψ correspond to the zeros of F(k), i.e., to the bound
states, and vice versa.
Remark. All these properties show the importance of the Jost function,
F (k). From this single function, we can get the phase shift, given by minus the
phase of F for k real positive (positive energies), and the eigenvalues (negative-
energy bound states) given by its zeros on the positive imaginary axis.
It is easily shown that the physical solution satisfies the integral equation

(3.75) ψ(k, r) = sin kr − ∫₀^∞ (sin kr_< e^{ikr_>}/k) V(r′) ψ(k, r′) dr′,

where r_< = min(r, r′), and r_> = max(r, r′). Contrary to (3.12), this is now a Fredholm integral equation, and therefore we have the Fredholm alternative.
The eigenfunctions ψⱼ(r) are solutions of the homogeneous equation. It can be
shown that the Jost function is the Fredholm determinant of the above integral
equation. To be more precise, if we introduce a parameter λ in front of the
integral operator in (3.75), and write it symbolically as ψ = ψ₀ + λKψ, we
know that if the kernel K is L², the solution of (3.75) is given in general by
ψ = ψ₀ + N(λ)ψ₀/D(λ), where N is an integral operator acting on ψ₀ and
D is a number. N and D are called Fredholm determinants (numerator and
denominator, respectively) and are both entire functions of λ. If λ₀ is a zero
of D, D(λ₀) = 0, then the homogeneous equation has solutions and 1/λ₀ is
an eigenvalue of the kernel K. For (3.75) with λ in front of the integral, one
can show that F(λ; k) ≡ D(λ; k). It is therefore quite natural that the zeros of the Jost function should give the eigenvalues. We conclude this section by
rewriting the completeness relation (3.44) in terms of the physical solution ψ.
From (3.75) it can be shown that if kⱼ = iγⱼ corresponds to the eigenvalue
Eⱼ = −γⱼ², then ψⱼ(r) is normalized at infinity by
lim_{r→∞} e^{γⱼr} ψⱼ(r) = 1.

Defining Cⱼ by

Cⱼ ∫₀^∞ ψⱼ²(r) dr = 1,
we get

(2/π) ∫₀^∞ ψ(k, r) ψ(k, t) dk + Σⱼ Cⱼ ψⱼ(r) ψⱼ(t) = δ(r − t),

which looks more like the familiar completeness relation (3.40) for Fourier
transforms. Similarly, the generalized orthogonality (3.48b) now becomes like
the relation for the sine transform:

(2/π) ∫₀^∞ ψ(k, r) ψ(k′, r) dr = δ(k − k′).
4.3.8. The Levinson Theorem. We know that the Jost function F(k) is
holomorphic in the upper half-plane Im k > 0 and continuous in Im k ≥ 0. Suppose we have n eigenvalues (bound states) corresponding to the zeros kⱼ =
iγⱼ, 0 < γ₁ < γ₂ < ⋯ < γₙ, and let us make a vertical cut joining kₙ = iγₙ to
the origin. In this half-plane with the cut, Log F(k) is also holomorphic, and
Log F(∞) = 0. Because the phase shift is minus Im Log F, we get, by following the closed
contour shown in Figure 1, the variation shown. Therefore, remembering that
the zeros are all simple, we get

δ(+∞) − δ(+0) + 2nπ + [δ(−0) − δ(−∞)] = 0.
Figure 1. The variation of Log F along the closed contour: Δ Log F = 0.
Because the phase shift is an odd function of k, we finally get

(3.76) δ(+0) − δ(+∞) = nπ.

This is the Levinson theorem, which relates the number of bound states to
δ(+0) − δ(+∞). Choosing the determination (3.71), we have

(3.77) δ(0) = nπ.

Remark. It may happen that F(0) = 0, i.e., one of the zeros of F is just
at the origin. It can be shown that it is also simple. In this case, we get

(3.78) δ(0) = (n + ½)π,

n being the number of true bound states with negative energies and k = 0
corresponding to a resonance at zero energy.
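The Levinson theorem is easy to verify numerically on an exactly solvable model. The sketch below uses an attractive square well, V(r) = −V₀ for r < a (an illustrative model chosen here, not an example from the text), whose Jost function is known in closed form; the phase δ(k) = −arg F(k), defined by continuity with δ(∞) = 0, is compared with nπ:

```python
import numpy as np

# Attractive square well: V(r) = -V0 for r < a, 0 otherwise (illustrative model).
V0, a = 30.0, 1.0

# Jost function of the square well: F(k) = e^{ika}[cos(Ka) - i(k/K) sin(Ka)],
# with K = sqrt(k^2 + V0); the phase shift is delta(k) = -arg F(k).
k = np.linspace(1e-3, 200.0, 400_000)
K = np.sqrt(k**2 + V0)
F = np.exp(1j * k * a) * (np.cos(K * a) - 1j * (k / K) * np.sin(K * a))

# Define the phase by continuity, as in (3.71), and pin delta(infinity) = 0.
delta = -np.unwrap(np.angle(F))
delta -= delta[-1]

# Bound-state count: one state per odd multiple of pi/2 below sqrt(V0)*a.
n_bound = int(np.sqrt(V0) * a / np.pi + 0.5)

print(delta[0] / np.pi, n_bound)   # delta(+0)/pi matches the number of bound states
```

For these parameters F(0) = cos(√V₀ a) ≠ 0, so there is no zero-energy resonance and (3.77) applies: δ(+0)/π ≈ 2 = n.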
4.3.9. Some Integral Representations. We have already seen the integral
representation (3.29) for the regular solution φ. It is a consequence of the
Paley–Wiener theorem and its variants for entire functions. For other functions
of interest (the Jost solution, the Jost function, the phase shift, the S-matrix, etc.),
some of them analytic in the upper half-plane and some defined in general only for
real values of k, it is possible to obtain similar integral representations.
The first integral representation is for the Jost solution, and we will see
that from it we can deduce all the other integral representations. From the
analyticity of f(k, r) in the upper k-plane, and its asymptotic behavior (3.59),
it is clear that

2π A(r, t) = ∮ [f(k, r) − e^{ikr}] e^{−ikt} dk = 0 , t < r,

where the contour is made of the real axis and a large semicircle in the upper
half-plane. Because the integral over this large semicircle is itself zero for t < r, it
follows that

(3.79) 2π A(r, t) = ∫_{−∞}^{∞} [f(k, r) − e^{ikr}] e^{−ikt} dk = 0 , t < r.
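This vanishing for t < r is exactly what analyticity in the upper half-plane buys, and it can be seen numerically on a simple stand-in function: G(k) = 1/(1 − ik)², analytic in Im k > 0, is the Fourier transform of g(t) = t e^{−t} supported on t > 0 (an illustrative choice, not a quantity from the text):

```python
import numpy as np

# G(k) = 1/(1 - ik)^2 is analytic in Im k > 0; it is the Fourier transform of
# g(t) = t e^{-t}, supported on t > 0 (illustrative stand-in function).
dk = 0.01
k = np.arange(-2000.0, 2000.0, dk)
G = 1.0 / (1.0 - 1j * k) ** 2

def g_reconstructed(t):
    # Inverse transform (1/2pi) \int G(k) e^{-ikt} dk by direct quadrature.
    return ((G * np.exp(-1j * k * t)).sum() * dk).real / (2.0 * np.pi)

print(g_reconstructed(-1.0))   # ~ 0: the transform vanishes for all t < 0
print(g_reconstructed(+1.0))   # ~ 1 * e^{-1} = 0.3679
```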
Now, from (3.50), (3.55), and (3.57b), it is easily seen that, for k real, we
have, for large values of k,

|f(k, r) − e^{ikr}| < (C/|k|) ∫_r^∞ t|V(t)| dt.
Therefore, the integrand in (3.79) is L²(−∞, ∞). It follows that we can invert (3.79):

(3.80) f(k, r) = e^{ikr} + ∫_r^∞ A(r, t) e^{ikt} dt,

where A(r, t) ∈ L²(r, ∞) in the variable t. This is, in essence, part of the following general theorem.
Theorem 4.3.6 (Titchmarsh). A necessary and sufficient condition for
F(x) ∈ L²(−∞, ∞) to be the limit as y → 0 of a function F(z = x + iy) analytic in y > 0 such that

∫ |F(x + iy)|² dx = O(e^{2τy})

is that

∫ F(x) e^{−itx} dx = 0

for all values of t < −τ.
The above integral representation (3.80) is the basis of the Marchenko
method for solving the inverse problem, as we shall see later. For the time
being, let us set r = 0 in (3.80). We then get, for the Jost function, the
integral representation

(3.81) F(k) = f(k, 0) = 1 + ∫₀^∞ A(t) e^{ikt} dt,
where A(t) = A(0, t) ∈ L²(0, ∞). However, we can do better concerning the
behavior of A(t) as t → ∞ and replace L² by L¹. For this purpose, we use
the representation (3.69) for the Jost function, together with (3.29). We then
obtain

F(k) = 1 + ∫₀^∞ e^{ikr} V(r) (sin kr/k) dr + ∫₀^∞ e^{ikr} V(r) [∫₀^r K(r, t) (sin kt/k) dt] dr.
Let us look now at the first integral of this formula. It can be written as

(1/2) ∫₀^∞ V(r) dr ∫₀^{2r} e^{ikt} dt = (1/2) ∫₀^∞ e^{ikt} [∫_{t/2}^∞ V(r) dr] dt.
Now the integrand satisfies

∫₀^∞ dt |∫_{t/2}^∞ V(r) dr| ≤ ∫₀^∞ dt [∫_{t/2}^∞ |V(r)| dr] = ∫₀^∞ |V(r)| dr ∫₀^{2r} dt
= 2 ∫₀^∞ r|V(r)| dr < ∞.
Now, G(k) is analytic (holomorphic) in the upper half-plane, continuous on
the real k-axis (Im k = 0), and does not vanish in the closed upper half-plane
Im k ≥ 0. Moreover,

(3.84) lim_{|k|→∞} G(k) = 1, Im k ≥ 0.

We can therefore apply Theorem 4.3.7 to Log G(k), with Log G(∞) = 0. The net result is

Log G(k) = ∫₀^∞ g(t) e^{ikt} dt,

where g(t) ∈ L¹(0, ∞). The fact that g(t) vanishes for t < 0 is again the
consequence of the analyticity of Log G(k) in the upper k-plane. Going back
now to F(k), we find, for k in the upper plane Im k ≥ 0,

(3.85) F(k) = ∏_{j=1}^{n} ((k − iγⱼ)/(k + iγⱼ)) exp[∫₀^∞ g(t) e^{ikt} dt], g(t) ∈ L¹(0, ∞).
As we already mentioned, the completeness relation (3.44) is one of the
essential tools for solving the inverse problem. It contains the modulus of
the Jost function for real values of k, and we have to show that this can be
calculated from the phase shift. There are two methods for doing this. The
first is to remember (3.67). It follows that, for k real,

(3.86) δ(k) = −Im Log F(k) = i Σ_{j=1}^{n} Log((k − iγⱼ)/(k + iγⱼ)) − ∫₀^∞ g(t) sin kt dt,

g(t) ∈ L¹(0, ∞). From the knowledge of δ(k) for all k ≥ 0 and the bound-state energies, we can now calculate g(t) by inverting the above Fourier sine transform. Once g(t) is
known, we then have F(k) from (3.85).
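A small numerical sketch of this first method, with no bound states and the purely illustrative (hypothetical) choice g(t) = e^{−t}, for which (3.86) gives δ(k) = −k/(1 + k²); the inverse sine transform recovers g:

```python
import numpy as np

# No bound states; assume (illustratively) g(t) = e^{-t}, so that (3.86) gives
# delta(k) = -\int_0^inf e^{-t} sin(kt) dt = -k/(1 + k^2).
dk = 0.005
k = np.arange(dk / 2, 5000.0, dk)
delta = -k / (1.0 + k**2)

def g_from_delta(t):
    # Inverse Fourier sine transform: g(t) = -(2/pi) \int_0^inf delta(k) sin(kt) dk.
    return -(2.0 / np.pi) * np.sum(delta * np.sin(k * t)) * dk

for t in (0.5, 1.0, 2.0):
    print(t, g_from_delta(t), np.exp(-t))   # reconstruction vs. exact e^{-t}
```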
The second method is to use the Hilbert transforms (called dispersion re-
lations by physicists). The main theorem to be used now is the following.
Theorem 4.3.8. Suppose f(x) ∈ L¹(0, ∞), and f̂(k) is its Fourier transform

f̂(k) = ∫₀^∞ f(x) e^{ikx} dx.

We know that f̂ is a continuous function of the real variable k. Let us denote
by X(k) and Y(k) the real and the imaginary part of f̂, both continuous. Then,
for all k, we have
X(k) = lim_{a→∞} (1/π) P ∫_{−a}^{a} (Y(k′)/(k′ − k)) dk′,

Y(k) = −lim_{a→∞} (1/π) P ∫_{−a}^{a} (X(k′)/(k′ − k)) dk′,

where the symbol P in front of the integrals means the principal value. These
two relations are called Kramers–Kronig relations.
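A quick numerical check of the theorem, using the illustrative pair f(x) = e^{−x} on (0, ∞), for which f̂(k) = 1/(1 − ik), i.e., X(k) = 1/(1 + k²) and Y(k) = k/(1 + k²):

```python
import numpy as np

# Illustrative pair: f(x) = e^{-x} on (0, inf) gives hat f(k) = 1/(1 - ik),
# i.e. X(k) = 1/(1 + k^2) and Y(k) = k/(1 + k^2).
def pv_hilbert(func, k, h=0.01, K=2000.0):
    # (1/pi) P \int func(k')/(k' - k) dk', evaluated on a grid symmetric about
    # k (midpoint offsets +-(n - 1/2)h) so the singularity cancels in pairs.
    n = np.arange(1, int(K / h))
    kp = np.concatenate([k - (n[::-1] - 0.5) * h, k + (n - 0.5) * h])
    return np.sum(func(kp) / (kp - k)) * h / np.pi

X = lambda q: 1.0 / (1.0 + q**2)
Y = lambda q: q / (1.0 + q**2)

k0 = 0.7
print(pv_hilbert(Y, k0), X(k0))     # X(k) =  (1/pi) P int Y/(k'-k) dk'
print(-pv_hilbert(X, k0), Y(k0))    # Y(k) = -(1/pi) P int X/(k'-k) dk'
```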
We can apply this theorem directly to Log G(k), given by (3.84). The
result is

Re Log G(k) = Log |F(k)| = −(1/π) P ∫_{−∞}^{∞} (η(k′)/(k′ − k)) dk′,

where, according to (3.83) and (3.67), η(k) is given by

η(k) = −2 Σⱼ Arctg(γⱼ/k) + δ(k).
In the above integral, we need the values of δ(k) for negative k, and these are
provided by (3.72). Putting all the pieces together, we find

(3.87a) |F(k)| = ∏ⱼ (1 + γⱼ²/k²) exp[−(1/π) P ∫_{−∞}^{∞} (δ(k′)/(k′ − k)) dk′]

and

(3.87b) F(k) = ∏ⱼ (1 + γⱼ²/k²) exp[−(1/π) ∫_{−∞}^{∞} (δ(k′)/(k′ − k)) dk′].
In the first formula, k is real, whereas in the last formula, Im k > 0. Putting
k = Re k + iε, and letting ε → 0, we get (3.87a) and (3.67) in the limit, by
using

lim_{ε→0} 1/(k′ − k − iε) = P(1/(k′ − k)) + iπ δ(k′ − k).
Another integral representation which will be useful later is the one for the
S-matrix, defined for real values of k only:

(3.88) S(k) = F(−k)/F(k) = e^{2iδ(k)}, k real.
We already have (3.81) and (3.82) for F (k). Consider now the function 1/F (k).
In the Wiener–Levy theorem, we can choose G(z) = 1/z, and this function is
analytic everywhere except at z = 0. Since F(k) does not vanish when k varies
from −∞ to +∞ on the real axis, and F(±∞) = 1, we can apply the theorem and we get

(3.89) 1/F(k) = 1 + ∫_{−∞}^{∞} B(t) e^{ikt} dt, B(t) ∈ L¹.
Therefore,

F(−k)/F(k) = [1 + ∫₀^∞ A(t) e^{−ikt} dt][1 + ∫_{−∞}^{∞} B(u) e^{iku} du]

= 1 + ∫₀^∞ A(t) e^{−ikt} dt + ∫_{−∞}^{∞} B(t) e^{ikt} dt + ∫₀^∞ A(t) e^{−ikt} dt ∫_{−∞}^{∞} B(u) e^{iku} du.

The last term is the product of the Fourier transforms of two L¹ functions,
and it can be written as the Fourier transform of the convolution of A and B:

∫_{−∞}^{∞} e^{ikx} [∫₀^∞ A(t) B(x + t) dt] dx.
However, the integral in the bracket is the convolution of two L1 functions,
and we have the following theorem.
Theorem 4.3.9. If f(x) and g(x) belong to L¹(−∞, ∞), then so does their convolution

h(x) = ∫_{−∞}^{∞} g(u) f(x − u) du,

and its Fourier transform H(k) is the product of the Fourier transforms of f
and g: H(k) = F(k) G(k).
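The discrete counterpart of this convolution theorem is the standard FFT identity, shown here as a generic sanity check (not specific to the scattering problem):

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.standard_normal(64)
g = rng.standard_normal(50)

# Linear convolution h = f * g ...
h = np.convolve(f, g)

# ... equals the inverse DFT of the product of the zero-padded DFTs.
n = len(f) + len(g) - 1
h_fft = np.fft.ifft(np.fft.fft(f, n) * np.fft.fft(g, n)).real

print(np.max(np.abs(h - h_fft)))   # agreement to machine precision
```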
Using this theorem and putting all the pieces together, we nd
(3.90) S(k) = e^{2iδ(k)} = 1 + ∫_{−∞}^{∞} s(t) e^{ikt} dt, s(t) ∈ L¹(−∞, ∞).

The same method also gives us, again for k real,

(3.91) 1/(F(k) F(−k)) = 1/|F(k)|² = 1 + ∫₀^∞ C(t) cos kt dt, C ∈ L¹(0, ∞),
where we have used the fact that the function is real and even.
Remark. If there are no bound states, then 1/F (k) is also analytic in
Im k > 0. It follows that in (3.89), B(t) = 0 for t < 0.
4.4. Gelfand–Levitan Integral Equation
As was previously mentioned, the three ingredients for deriving the Gelfand–Levitan equation are the representation (3.29), the relation (3.31) between the
kernel K and the potential, and the completeness relation (3.44) with (3.43b).
In this last relation, the modulus of the Jost function, |F(√E)|, E ≥ 0, is
given in terms of the phase shift by (3.87). The purpose is to determine the
potential from what is commonly called the scattering data:

(4.1) {δ(k), k ≥ 0} ∪ {γⱼ > 0, Cⱼ > 0, j = 1, 2, …, n},

where δ(k) must satisfy, of course, the Levinson theorem (3.76).
In order to establish the Gelfand–Levitan integral equation, we start from
the integral transform defined by the complete set of (generalized) eigenfunctions φ, (3.46), and (3.47). Taking the integral transform of the difference
φ(k, t) − (sin kt/k), which vanishes at t = 0 like φ, we define, for each fixed finite value of t,

(4.2) Φ(r, t) = ∫ [φ(√E, t) − (sin √E t/√E)] φ(√E, r) dρ(E),

and we are going to show that, as for A defined by (3.27), we have Φ(r, t) = 0
for all r > t. To show this, we use (3.64) and (3.43b) in the above integral.
Taking into account that φ is an even function of k, we find

Φ(r, t) = (1/iπ) ∫_{−∞}^{∞} [φ(k, t) − (sin kt/k)] (k f(k, r)/F(k)) dk + Σⱼ Cⱼ [φ(iγⱼ, t) − sinh(γⱼt)/γⱼ] φ(iγⱼ, r).
Now, if r > t, we can close the contour in the above integral by a large
semicircle in the upper k-plane, and we know that, because of (3.59), the
contribution of this large semicircle goes to zero as its radius goes to infinity.
Therefore, the above integral on the real k-axis is equal to the sum of the
residues of the (simple) poles due to the (simple) zeros of F(k), kⱼ = iγⱼ.
On the other hand, we know that, at kⱼ = iγⱼ, φ(iγⱼ, r) and f(iγⱼ, r) are
proportional to each other. Since φ is normalized by φ′(k, 0) = 1 for all k, we have

(4.3) φ(iγⱼ, r) = f(iγⱼ, r)/f′(iγⱼ, 0).

Putting all these pieces together, we find, for r > t,

Φ(r, t) = Σ_{j=1}^{n} [2iγⱼ/F′(iγⱼ) + Cⱼ/f′(iγⱼ, 0)] [φ(iγⱼ, t) − sinh(γⱼt)/γⱼ] f(iγⱼ, r),
where F′ = dF/dk.

It can be shown (see the appendix) that the sum inside the bracket vanishes
for each j. Therefore,

(4.4) Φ(r, t) = ∫ [φ(√E, t) − (sin √E t/√E)] φ(√E, r) dρ(E) = 0, r > t.

In this formula, the integral is conditionally convergent at E = ∞ because, when E = k² → ∞, the integrand is oscillating and goes to zero, due to

(4.5) φ(k, r) − (sin kr/k) = (1/k) o(1).
We replace φ in (4.4) by its integral representation (3.29) and introduce

(4.6) dσ(E) = dρ(E) − dρ₀(E).

Relation (4.4) then leads to

δ(r − t) − ∫ (sin √E t/√E)[(sin √E r/√E) + ∫₀^r K(r, s)(sin √E s/√E) ds] dρ(E)

= −∫ (sin √E r/√E)(sin √E t/√E) dσ(E) − ∫₀^r K(r, s)[δ(t − s) + ∫ (sin √E t/√E)(sin √E s/√E) dσ(E)] ds = 0.
Introducing the symmetric kernel

(4.7) G(r, t) = ∫ (sin √E t/√E)(sin √E r/√E) dσ(E),

we find, for t < r,

(4.8) K(r, t) + G(r, t) + ∫₀^r K(r, s) G(s, t) ds = 0.
This is the Gelfand–Levitan integral equation, which gives the full solution
of the inverse problem: nding the potential from the scattering data (4.1).
Given these data, we can calculate the Jost function by (3.87a). We then have
d(E) by (3.43b). Using (3.41a), we obtain
(4.9) dσ(E) = (1/π)[1/|F(√E)|² − 1] √E dE, E = k² ≥ 0,

dσ(E) = Σ_{j=1}^{n} Cⱼ δ(E − Eⱼ) dE, E < 0,

from which we can calculate the kernel G by (4.7). Having G, we must solve
the integral equation (4.8) for K, and then we get, finally,

(4.10) V(r) = 2 (d/dr) K(r, r).
Remark. The Gelfand–Levitan integral equation is, for each fixed value of
r, a Fredholm integral equation for K in the second variable t, which has the
structure

K(t) + G(t) + ∫₀^r G(t, s) K(s) ds = 0.
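For a degenerate kernel this structure can be exercised directly. Taking the rank-one model G(s, t) = u(s)u(t) with an illustrative u (this particular choice of u is not from the text), substitution shows the exact solution is K(r, t) = −u(r)u(t)/(1 + ∫₀^r u²(s) ds), and the discretized Fredholm system reproduces it:

```python
import numpy as np

# Rank-one kernel G(s, t) = u(s) u(t) with the illustrative choice u(s) = e^{-s}.
u = lambda s: np.exp(-s)

r = 1.5
m = 2000
s = np.linspace(0.0, r, m)
w = np.full(m, s[1] - s[0])
w[0] *= 0.5
w[-1] *= 0.5                      # trapezoidal quadrature weights

# Discretize K(r,t) + G(r,t) + \int_0^r K(r,s') G(s',t) ds' = 0 at fixed r:
# (I + G W) K = -G(r, .), with K the vector of values K(r, t_i).
A = np.eye(m) + np.outer(u(s), u(s) * w)
K = np.linalg.solve(A, -u(r) * u(s))

# Closed-form solution for a rank-one kernel.
K_exact = -u(r) * u(s) / (1.0 + np.sum(w * u(s) ** 2))

print(np.max(np.abs(K - K_exact)))   # agreement to machine precision
```

For a rank-one kernel the discrete system is itself rank-one (Sherman–Morrison), which is why the agreement is exact rather than merely O(h²).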
In order to show that the scattering data determine the potential in a
unique way, we have to show that the integral equation has a unique solution.
This can be shown to be the case in full generality whatever the scattering
data (4.1) may be, provided that
(i) the phase shift δ(k) is a continuous function of k and satisfies the conditions (3.71) and (3.79).
(ii) when bound states are present, the phase shift must satisfy the Levinson
theorem (3.77). Of course, it is implicitly assumed that the number of bound
states is finite.
There are no other constraints on the scattering data. In particular, the
positive quantities γⱼ and Cⱼ can be chosen arbitrarily. This means that, given
the phase shift and the binding energies, we obtain a full class of so-called
phase-equivalent potentials V(r; Cⱼ), depending on n arbitrary positive constants Cⱼ and all having the same phase shift and bound state energies. And
except for the Levinson theorem, there is also no relation between the phase
shift and the binding energies.
Remark. In deriving the integral representation (3.29) and the Gelfand–Levitan integral equation (4.8), we have mainly used the theory of analytic
functions and contour integration in the complex k-plane. An alternative way
of deriving (4.8) is to use the resolvent of the equation (3.29), considered as an
integral equation for (sin kr/k). We refer the reader to the references quoted
at the end for this alternative derivation.
4.4.1. More General Equations. So far, the starting point in our method
has been the free case where the potential V = 0. The radial Schrödinger
equation was solved in terms of the free solutions (sine and exponential), and we
obtained the most important ingredient for the solution of the inverse problem,
namely the representation (3.29) in terms of the free solution φ⁽⁰⁾ = (sin kr)/k.
Suppose now we start from a given potential V₁ satisfying (3.4), for which
we assume that we can solve the Schrödinger equation completely, and for
which, therefore, we know everything explicitly: the phase shift for all positive
energies, the energies of all the bound states and the corresponding normalization constants, the regular solution φ₁, the kernel K₁ of (3.29), etc. We may
then ask whether it is possible to start from V₁ instead
of starting from the free case V₁ = 0, and determine the potential V from the
spectral data (4.1). The advantage of starting from V1 is that this potential
may sometimes be chosen close to V , or reproduce some characteristics of V .
The first step is to establish an integral representation similar to (3.29) for
φ in terms of φ₁ (instead of sin kr/k):

(4.11) φ(√E, r) = φ₁(√E, r) + ∫₀^r K(r, t) φ₁(√E, t) dt.
For this purpose, we proceed along similar lines, and consider the integral

(4.12) Ψ(t, r) = ∫ [φ(√E, r) − φ₁(√E, r)] φ₁(√E, t) dρ₁(E)

= (1/iπ) ∫₀^∞ [φ(k, r) − φ₁(k, r)] [F₁(−k) f₁(k, t) − F₁(k) f₁(−k, t)] (k/(F₁(k) F₁(−k))) dk + ∫_{−∞}^{0} [φ − φ₁] φ₁ dρ₁(E)

= (1/iπ) ∫_{−∞}^{∞} [φ(k, r) − φ₁(k, r)] (f₁(k, t)/F₁(k)) k dk + Σ_{j=1}^{n₁} Cⱼ⁽¹⁾ [φ(iγⱼ⁽¹⁾, r) − φ₁(iγⱼ⁽¹⁾, r)] φ₁(iγⱼ⁽¹⁾, t).
Now, in the last integral, if t > r, we can close the contour by a large semicircle
in the upper half-plane, and we know that its contribution is zero. Therefore,
the integral is given by the sum of the residues of the (simple) poles of the
integrand due to the (simple) zeros of F₁(k). Therefore, using (4.3) for φ₁,

(4.13) Ψ(t, r) = Σⱼ [φ(iγⱼ⁽¹⁾, r) − φ₁(iγⱼ⁽¹⁾, r)] [2iγⱼ⁽¹⁾/F₁′(iγⱼ⁽¹⁾) + Cⱼ⁽¹⁾/f₁′(iγⱼ⁽¹⁾, 0)] f₁(iγⱼ⁽¹⁾, t) = 0, t > r.
If we now invert the integral transform (4.12) defining Ψ(t, r), we indeed get
(4.11), where K(r, t) ∝ Ψ(t, r). After establishing (4.11), it is easy to establish the generalized Gelfand–Levitan integral equation by proceeding exactly in
the same way we established (4.8). This time, we consider
(4.14) Φ(r, t) = ∫ [φ(√E, t) − φ₁(√E, t)] φ(√E, r) dρ(E), r > t,

and we again find that Φ(r, t) = 0. Again, as in (4.4), the integral is conditionally convergent at E = ∞ since, as E → ∞, the integrand is oscillating
and goes to zero because of

(4.15) φ(k, t) − φ₁(k, t) = (1/k) o(1).
We now replace φ by its integral representation (4.11) and define

(4.16) dσ(E) = dρ(E) − dρ₁(E)

and

(4.17) G(r, t) = ∫ φ₁(√E, r) φ₁(√E, t) dσ(E).

We then get, as for (4.8), the desired integral equation

(4.18) K(r, t) + G(r, t) + ∫₀^r K(r, s) G(s, t) ds = 0.
The same analysis which led to (3.31) for the potential leads here to

(4.19) ∂²K/∂r² − ∂²K/∂t² = [V(r) − V₁(t)] K(r, t)

and

(4.20) V(r) − V₁(r) = 2 (d/dr) K(r, r).

Setting V₁ = 0, we recover our previous results. Note that in the present case, V
and V₁ may have a different number of bound states. We shall now indicate
some applications.
4.4.2. Identical Continua. Here we have dρ(E) ≡ dρ₁(E) for E ≥ 0. In (4.16), we are then left with a finite sum of 2n δ-functions. The kernel G is
then a degenerate kernel of finite rank, and we can then solve the Gelfand–Levitan integral equation algebraically. An interesting instance is when V₁ has
no bound states, whereas V has n. The simplest example is when n = 1: V
has a bound state with energy −γ², and the normalization constant is C. If we define

(4.21) R(r) = 1 + C ∫₀^r φ₁²(iγ, s) ds,

then

(4.22) V − V₁ = −2 (d²/dr²) Log R(r)

and

(4.23) φ(k, r) = φ₁(k, r) − C φ₁(iγ, r) [∫₀^r φ₁(iγ, t) φ₁(k, t) dt] (1/R(r)).

Making k = iγ in this relation, we find that the wave-function of the bound
state is

(4.24) φ(iγ, r) = φ₁(iγ, r)/R(r),

where R is given by (4.21). It is easily checked here that C is indeed the
normalization of the bound state wave-function. We should notice here that
φ₁²(iγ, r) ∼ e^{2γr} as r → ∞. This, when used in (4.23) and (4.21), shows that φ(iγ, r) ∼ e^{−γr}, as expected. The next simple case is described in §4.4.3.
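That C is the normalization constant follows from R′ = C φ₁²(iγ, r): by (4.24), ∫₀^∞ φ²(iγ, r) dr = ∫₀^∞ R′/(C R²) dr = (1/C)(1 − 1/R(∞)) = 1/C whenever R(∞) = ∞. A numerical check for V₁ = 0, where φ₁(iγ, r) = sinh(γr)/γ (the values of γ and C below are arbitrary illustrations):

```python
import numpy as np

gamma_, C = 1.0, 3.0          # illustrative bound-state parameters
r = np.linspace(0.0, 30.0, 300_001)
dr = r[1] - r[0]

phi1 = np.sinh(gamma_ * r) / gamma_     # free regular solution at k = i*gamma
# R(r) = 1 + C \int_0^r phi1^2 ds  (cumulative trapezoidal integral), eq. (4.21)
I = np.concatenate([[0.0], np.cumsum(0.5 * (phi1[:-1] ** 2 + phi1[1:] ** 2) * dr)])
R = 1.0 + C * I

psi = phi1 / R                          # bound-state wave function, eq. (4.24)
norm = np.sum(0.5 * (psi[:-1] ** 2 + psi[1:] ** 2) * dr)
print(C * norm)   # ~ 1: C is indeed the normalization constant
```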
4.4.3. Phase-Equivalent Potentials. This is a particular case in which we
also have the same number of bound states in V and V₁, with identical binding
energies. The two potentials differ only by the normalizing constants Cⱼ, but
have the same Jost function.
A more interesting and more general case, and still soluble explicitly in
terms of φ₁ and F₁, is the following.
4.4.4. Bargmann Potentials. Bargmann potentials attached to a given
potential V1 are the potentials for which the Jost function is of the form
F (k) = (rational function of k) F1(k),
where F₁ is the Jost function of V₁. Since F(∞) = F₁(∞) = 1, it is obvious that the rational function has the same number of zeros and poles, so that

(4.25) F(k) = ∏_{j=1}^{N} ((k + i aⱼ)/(k + i bⱼ)) F₁(k).

Since both F₁ and F are holomorphic in Im k > 0 and continuous on Im k = 0,
we must assume Re bⱼ > 0. Also, either aⱼ < 0, which means that V has a
bound state with energy −aⱼ² which V₁ does not have (if V₁ also had this bound state, then F would have a double zero at k = −iaⱼ, and this is forbidden, as
we know), or we must have Re aⱼ > 0.
The problem here is, knowing everything about V₁ (scattering data, the
wave-function φ₁(√E, r), etc.) and the rational function on the right-hand side of (4.25),
how does one calculate V. Because of the symmetry property (3.66) of the
Jost function, complex a's and b's always appear in pairs (a, ā) and (b, b̄). To show the essence of the method, we shall treat here the simple case where
we have one a and one b, both real and positive: no new bound state, but one
more zero and one more pole, both in Im k < 0. We also assume that V₁ has
no bound states itself, for this would not alter the result. Therefore

(4.26) F(k) = ((k + ia)/(k + ib)) F₁(k), a > 0 , b > 0.
According to (4.17), we have

(4.27) G(r, t) = (2/π) ∫₀^∞ φ₁(k, r) ((b² − a²)/(k² + a²)) φ₁(k, t) (k²/|F₁(k)|²) dk.

It is now easy to recognize that the above expression is the resolvent kernel of
the Schrödinger equation (3.9) with V₁ and energy −a². Indeed, if we apply the differential operator D₁ = −(d²/dr²) + V₁ + a² to the integral and use the completeness relation for φ₁(√E, r), we obtain

(4.28) D₁ G(r, t) = (b² − a²) δ(r − t).

Moreover, G is symmetric in its arguments, and G(0, t) = 0, so it satisfies the Dirichlet boundary condition. It is easily checked that one has

(4.29) G(r, s) = (b² − a²) φ₁(ia, r_<) f₁(ia, r_>)/F₁(ia),

where f₁ is the Jost solution. The kernel G is now a separable kernel of rank
one, and the Gelfand–Levitan equation is trivially solved, as was done above,
and we can compute K(r, t) and the potential V explicitly in terms of φ₁ and
f₁.
It can be shown that in the general case, where we have N poles and N
zeros, the kernel G reduces to a sum of N terms analogous to (4.29), and again
we can solve the integral equation algebraically and get V explicitly.
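For the special case V₁ = 0, the construction can be pushed to a closed form. A single Darboux transformation of the free equation (a standard route, sketched here under the assumption V₁ = 0) yields the Bargmann potential V(r) = −8b²β e^{−2br}/(1 + β e^{−2br})² with β = (b − a)/(a + b), whose Jost function is exactly (k + ia)/(k + ib). This can be confirmed by integrating the Jost solution inward from large r:

```python
import numpy as np

a, b = 1.0, 2.0
beta = (b - a) / (a + b)

def V(r):
    # Bargmann potential whose Jost function should be (k + ia)/(k + ib).
    e = np.exp(-2.0 * b * r)
    return -8.0 * b**2 * beta * e / (1.0 + beta * e) ** 2

def jost_function(k, R=15.0, h=2e-3):
    # Integrate f'' = (V - k^2) f inward from r = R, where f ~ e^{ikr}, by RK4.
    def deriv(r, y):
        return np.array([y[1], (V(r) - k**2) * y[0]])
    y = np.array([np.exp(1j * k * R), 1j * k * np.exp(1j * k * R)])
    r = R
    for _ in range(int(round(R / h))):
        k1 = deriv(r, y)
        k2 = deriv(r - h / 2, y - h / 2 * k1)
        k3 = deriv(r - h / 2, y - h / 2 * k2)
        k4 = deriv(r - h, y - h * k3)
        y = y - h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        r -= h
    return y[0]   # F(k) = f(k, 0)

for k in (0.5, 1.0, 2.5):
    print(k, jost_function(k), (k + 1j * a) / (k + 1j * b))
```

With a, b > 0 we have |β| < 1, so 1 + β e^{−2br} never vanishes and V is regular, consistent with the no-new-bound-state case discussed above.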
4.4.5. The Discrete Case. So far, we have assumed that the potential vanishes at infinity. This leads to the existence of the continuum in the energy
spectrum, and therefore to scattering. Suppose now that the potential is infinite at infinity, V(∞) = +∞, for instance V(r) ≈ λr^α, λ > 0, α > 0, as
r → ∞. We have now what is called a confining potential, and whatever the
boundary condition at r = 0 (Dirichlet, Neumann, or mixed), the spectrum is
discrete and goes to +∞: E₁ < E₂ < ⋯ < Eₙ < ⋯. This case has
attracted much interest because of its applications in particle physics (quark
models, where the interaction between quarks is known to be confining, and
where, at least for heavy quarks, the nonrelativistic Schrödinger equation is
a good approximation), and other domains such as acoustics, etc. A similar,
equivalent case is when we are dealing with the Schrödinger equation in a box,
i.e., in a finite interval. Here also, if the potential has no strong singularities
in the interval, it is well known that the spectrum is discrete whatever the
boundary conditions at the ends of the interval are. And again the eigenvalues are nondegenerate and accumulate at infinity: Eₙ → ∞ as n → ∞. The
inverse problem is now to find the potential from the energy spectrum and
the appropriate normalization constants of the eigenfunctions. This problem
is treated in Chapter 3.
4.4.6. Properties of the Potential. It is clear from the techniques used
to establish the Gelfand–Levitan integral equation that Fourier analysis plays
an essential role. Moreover, when the potential satisfies condition (2.20), most
of the quantities of interest we have to use in order to get the potential from
the scattering data are given by various Fourier integrals of L¹ functions,
for instance, (3.81), (3.82), (3.85), (3.86), etc. Here we are going to show that
it is possible to obtain precise properties of the potential from the properties
of these Fourier transforms.
As we saw before, from the integral representations (3.81) and (3.82) for
the Jost function it is possible to obtain all the other integral representations
(3.85), (3.86), etc. Let us therefore consider the Jost function defined by (3.69)
and (3.81):

(4.30) F(k) = 1 + ∫₀^∞ e^{ikr} V(r) φ(k, r) dr,

(4.31) F(k) = 1 + ∫₀^∞ e^{ikt} A(t) dt, A(t) ∈ L¹(0, ∞).
In (4.30), we can replace φ by its series expansion studied in §3.1, more precisely, by the sequence φ⁽ⁿ⁾ given by (3.15b). In this way we get a sequence
of F⁽ⁿ⁾ which is absolutely and uniformly convergent in Im k ≥ 0, the domain where F(k) is defined and continuous. We know from (3.70) that the integral
in (4.30) goes to zero as k → ∞, and the same is of course true for the integral in (4.31). More precisely, using (3.23) and (3.24) in (4.30), we immediately see
that, for k → ∞, the main terms of the expansion of F are the first two terms:

(4.32) F(k) = (1 + ∫₀^∞ e^{ikr} V(r) (sin kr/k) dr)(1 + o(1)),

o(1) meaning here a constant times I(k, ∞), and

(4.33) I(k, ∞) = ∫₀^∞ (r|V(r)|/(1 + |k|r)) dr → 0, k → ∞,
as we saw with (3.21).
Let us introduce now the integral of V:

(4.34) W(r) = ∫_r^∞ V(t) dt.

By definition, since V is locally L¹, except perhaps at the origin, and because
of our assumption on the potential, (2.20), W is an absolutely continuous
function of r for r > 0, and also W ∈ L¹(0, ∞) since

(4.35) ∫₀^∞ |W| dr ≤ ∫₀^∞ dr ∫_r^∞ |V(t)| dt = ∫₀^∞ |V(t)| dt ∫₀^t dr = ∫₀^∞ t|V(t)| dt < ∞.
(4.38) A(r) ∈ L¹(0, ∞) ⟺ W(r) ∈ L¹(0, ∞).

Other relations of this kind can be obtained along the same lines. For
instance, it is clear from (4.37) and (3.86) that one also has

(4.39) g(r) ∈ L¹(0, ∞) ⟺ W(r) ∈ L¹(0, ∞).

Let us also note that we get from (4.37) and (3.86)

(4.40) δ(k) = −∫₀^∞ W(r) sin 2kr dr + ⋯ , k → ∞.

In order to go further and establish (3.4), one has to differentiate the
above quantities A(r), g(r), etc., since, by definition, V(r) = −W′(r). It is then
possible to show rigorously that

(4.41) r A(r) ∈ L¹(0, ∞) ⟺ r V(r) ∈ L¹(0, ∞)

and

(4.42) r g(r) ∈ L¹(0, ∞) ⟺ r V(r) ∈ L¹(0, ∞).

These equivalence relations provide necessary and sufficient conditions to
be imposed on the Fourier transforms of the phase shift or the Jost function
in order to obtain a potential which satisfies (3.4).
We conclude this section by showing how Tauberian theorems can lead
from the properties of the potential near the origin to the properties of the
Jost function and the phase shift near k = ∞. We need the following two theorems.

Theorem 4.4.1. Let f(x) = x^{−α} φ(x), where 0 < α < 1 and φ is of bounded variation in (0, ∞). Then

∫₀^∞ f(x) cos kx dx ≈ φ(+0) Γ(1 − α) sin(απ/2) k^{α−1}, (k → ∞),

∫₀^∞ f(x) cos kx dx ≈ φ(∞) Γ(1 − α) sin(απ/2) k^{α−1}, (k → 0).

The Fourier sine transform satisfies similar relations with sin(απ/2) replaced by cos(απ/2).
Theorem 4.4.2. Let f(x) and f′(x) be integrable over any finite interval not ending at x = 0; let x^{α+1} f′(x) be bounded for all x, and let f(x) ≈ x^{−α} as x → ∞ (respectively, x → 0). Then
∫₀^∞ f(x) cos kx dx ≈ Γ(1 − α) sin(απ/2) k^{α−1}

as k → 0 (respectively, k → ∞). The Fourier sine transform satisfies a similar relation with sin(απ/2) replaced by cos(απ/2).
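As a numerical illustration of Theorem 4.4.2, take f(x) = x^{−1/2} e^{−x} (α = ½, an arbitrary test function): the substitution x = u² tames the singularity at the origin, and the computed cosine transform at k = 100 already agrees with Γ(½) sin(π/4) k^{−1/2} to better than one percent:

```python
import numpy as np
from math import gamma, pi, sin

# f(x) = x^{-1/2} e^{-x}: alpha = 1/2. As k -> inf, Theorem 4.4.2 predicts
#   \int_0^inf f(x) cos(kx) dx ~ Gamma(1 - alpha) sin(alpha*pi/2) k^{alpha - 1}.
alpha, k = 0.5, 100.0

# Substitution x = u^2 removes the integrable singularity at x = 0:
# integral = 2 \int_0^inf e^{-u^2} cos(k u^2) du.
du = 1e-4
u = np.arange(du / 2, 6.0, du)
integral = 2.0 * np.sum(np.exp(-(u**2)) * np.cos(k * u**2)) * du

asymptotic = gamma(1.0 - alpha) * sin(pi * alpha / 2.0) * k ** (alpha - 1.0)
print(integral, asymptotic)   # already within about 1% at k = 100
```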
As an application, let us assume that

(4.43) lim_{r→0} r^{1+α} V(r) = V₀, 0 < α < 1.

We then find from (4.40) that

(4.44) δ(k) ≈ −(V₀/α) Γ(1 − α) cos(απ/2) (2k)^{α−1}, k → ∞.

These asymptotic properties can also be formulated in terms of g(r) and A(r), as well as in terms of W(r). One should note that the above
two theorems are two-sided, so that (4.44) implies (4.43).
Another result we can get from the above analysis concerns potentials
of finite range: V(r) = 0 for r > R. It can be seen very easily on the basis of
(4.30) and (3.24) or (3.29) that the Jost function then becomes an entire function of exponential type 2R (order 1), so that, by the Paley–Wiener theorem, we have
in (4.31)

A(t) = 0, t > 2R.

Conversely, it can also be shown easily that if the scattering data are such that
the Jost function is of exponential type 2R, then V(r) = 0 for r > R.
4.5. Marchenko Equation
An equivalent method for solving the inverse problem is to use the Jost solution
f(k, r) instead of the regular solution φ. Now, the Jost solution, for each fixed
value of r, is an analytic function of k in Im k > 0 and is continuous and
bounded on the real k-axis. Moreover, it has the asymptotic behavior shown
in (3.61), and we have seen that we have the representation (3.80),

(5.1) f(k, r) = e^{ikr} + ∫_r^∞ A(r, t) e^{ikt} dt,

where A(r, t) is, for each fixed r, an L² function of t.
Like the representation (3.29) for φ, which was the starting point for the
Gelfand–Levitan method, the above representation is the essential ingredient
of the Marchenko method. To go further, we must check, as for (3.29), that
we indeed have a solution of the Schrödinger equation. Proceeding as in §3, we end up with
(5.2) ∂²A/∂r² − ∂²A/∂t² = V(r) A(r, t)

and

(5.3) V(r) = −2 (d/dr) A(r, r).
The discussion about the existence of the solution of these equations is
quite similar to that for (3.30), and we reach similar conclusions. Also, using
(5.1) in (3.50) and taking the Fourier transform of both sides, we get the
integral equation

(5.4) A(r, t) = (1/2) ∫_{(r+t)/2}^∞ V(s) ds + ∫_{(r+t)/2}^∞ ds ∫₀^{(t−r)/2} V(s − u) A(s − u, s + u) du, t ≥ r.
One can directly check that this integral equation leads to (5.2) and (5.3).
Iterating (5.4), we get
(5.5) \quad |A(r, t)| \le \frac{1}{2} \int_{(r+t)/2}^{\infty} |V(s)|\, ds \, \exp\!\left[ \int_r^{\infty} u\, |V(u)|\, du \right],
which is analogous to (3.35). We find again that if rV ∈ L^1(0, ∞), we are
dealing with meaningful equations and can replace solving the Schrödinger
equation by solving (5.2) and (5.3), or (5.4), and then using (5.1) to get f(k, r).
Also, setting r = 0 in (5.5), we see that A(t) = A(0, t) is in L^1(0, ∞), as was
shown by a different method in Chapter 3. Thus A(t) ∈ L^1 ∩ L^2.
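As a concrete illustration of (5.1)–(5.3), one can take V(r) = −2 sech² r, for which the kernel has the simple closed form A(r, t) = −sech(r) e^{−t}. (This closed form is not quoted from the text; it is an easily checked ansatz introduced here for illustration.) A minimal numerical sketch verifying that it satisfies the wave equation (5.2) and the boundary relation (5.3):

```python
import math

def V(r):
    # illustrative potential V(r) = -2 sech^2(r)
    return -2.0/math.cosh(r)**2

def A(r, t):
    # candidate kernel of (5.1) for this potential (closed-form ansatz)
    return -math.exp(-t)/math.cosh(r)

def pde_residual(r, t, h=1e-3):
    # residual of (5.2): A_rr - A_tt - V(r) A(r,t), by central differences
    A_rr = (A(r + h, t) - 2*A(r, t) + A(r - h, t))/h**2
    A_tt = (A(r, t + h) - 2*A(r, t) + A(r, t - h))/h**2
    return A_rr - A_tt - V(r)*A(r, t)

def V_recovered(r, h=1e-3):
    # right-hand side of (5.3): -2 d/dr A(r, r)
    return -2.0*(A(r + h, r + h) - A(r - h, r - h))/(2*h)
```

Both residuals vanish to the accuracy of the finite differences, which confirms that this A(r, t) is the kernel (5.1) generates for this particular potential.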
We now have to formulate the completeness relation (3.44) in terms of the
Jost solution f. This is most easily done by replacing, in its continuum part
\int_0^\infty d\rho(E), the solution φ(E, r) by (3.64). We then get two terms. Changing k into
−k in the second integral, and remembering that |F(k)|^2 = F(k) F(−k), we get

\frac{1}{i\pi} \int_{-\infty}^{\infty} \frac{k\, F(-k)\, f(k, r)}{F(k)\, F(-k)}\, \varphi(k, t)\, dk.

We now replace φ(k, t) again by (3.64) and use the definition of the S-matrix
(3.88), S(k) = F(−k)/F(k) for k real. The final result is that (3.44) can be
written as
(5.6) \quad \frac{1}{2\pi} \int_{-\infty}^{\infty} f(k, r)\, \bigl[f(-k, t) - S(k)\, f(k, t)\bigr]\, dk + \sum_{j=1}^{n} C_j\, \varphi_j(r)\, \varphi_j(t) = \delta(r - t),
where φ_j ≡ φ(iγ_j, r) are the bound-state wave functions. Expressing this sum
in terms of f_j(r) ≡ f(iγ_j, r) is easy. Since φ′(k, 0) = 1 for all values
of k (remember (3.11)), we have

(5.7) \quad \varphi_j(r) = \frac{f(i\gamma_j, r)}{f'(i\gamma_j, 0)}.
Using now (see the Appendix)

(5.8) \quad C_j^{-1} = \int_0^{\infty} \varphi_j^2(r)\, dr = \frac{1}{[f'(i\gamma_j, 0)]^2} \int_0^{\infty} f_j^2(r)\, dr = \frac{i\, \dot F(i\gamma_j)}{2\gamma_j\, f'(i\gamma_j, 0)},

where \dot F = dF/dk,
the sum in (5.6) can be written

(5.9) \quad \sum_{j=1}^{n} \left( \frac{-2i\gamma_j}{\dot F(i\gamma_j)\, f'(i\gamma_j, 0)} \right) f_j(r)\, f_j(t) \equiv \sum_j s_j\, f_j(r)\, f_j(t).
We must notice here that f_j(r), which has the precise asymptotic behavior

(5.10) \quad f_j(r) = e^{-\gamma_j r} + \cdots, \qquad r \to \infty,

is real because of (3.60), and therefore s_j > 0. The right-hand side of (5.8)
gives us

(5.11) \quad s_j \int_0^{\infty} f_j^2\, dr = 1,

which is the exact analogue of (3.42).
Consider now (5.6) for r < t, and assume first, for simplicity, that there are
no bound states. We then get

\int_{-\infty}^{\infty} f(k, r)\, \bigl[f(-k, t) - S(k)\, f(k, t)\bigr]\, dk = 0.
If we now replace the Jost solutions f(−k, t) and f(k, t) inside the bracket by
their integral representation (5.1), we get

(5.12) \quad \int_{-\infty}^{\infty} f(k, r)\, \bigl[e^{-ikt} - S(k)\, e^{ikt}\bigr]\, dk + \int_t^{\infty} A(t, u) \left\{ \int_{-\infty}^{\infty} f(k, r)\, \bigl[e^{-iku} - S(k)\, e^{iku}\bigr]\, dk \right\} du = 0.
With obvious notation, this is, for each fixed r (< t), a homogeneous integral
equation of the form

(5.13) \quad G_r(t) + \int_t^{\infty} A(t, u)\, G_r(u)\, du = 0,

where A(t, u) belongs to L^2(t, ∞) in u. In other words, for each fixed r, G_r(t),
for t > r, satisfies a homogeneous Volterra integral equation with an L^2 kernel.
Therefore, from the theory of Volterra integral equations, we infer that
G_r(t) = 0; that is, given r, we have

(5.14) \quad \frac{1}{2\pi} \int_{-\infty}^{\infty} f(k, r)\, \bigl[e^{-ikt} - S(k)\, e^{ikt}\bigr]\, dk = 0 \quad \text{for all } t > r.
We have been reasoning as though G_r(t) in (5.13) were an ordinary function,
which is not the case. We are dealing here with a distribution, but with
a little care the conclusion can be shown to hold. Since the starting point was
the completeness relation (3.44), or (5.6), which has to be understood in the
framework of the theory of distributions, (5.14) likewise has to be understood
in the same sense: given an appropriate test function F(t) which is C^∞ and
whose support lies outside [0, r], we have

\int_0^{\infty} F(t)\, G_r(t)\, dt = 0.
When bound states are present, we must also use

(5.15) \quad f_j(t) = f(i\gamma_j, t) = e^{-\gamma_j t} + \int_t^{\infty} A(t, u)\, e^{-\gamma_j u}\, du;

the sum over the bound states is then included in the definition of G_r(t), and
we again get (5.13). The full result is then
(5.16) \quad \frac{1}{2\pi} \int_{-\infty}^{\infty} f(k, r)\, \bigl[e^{-ikt} - S(k)\, e^{ikt}\bigr]\, dk + \sum_{j=1}^{n} s_j\, f_j(r)\, e^{-\gamma_j t} = 0, \qquad r < t.
If we replace the Jost solution by its representation (5.1), we obtain the
Marchenko integral equation

(5.17) \quad A(r, t) + A_0(r + t) + \int_r^{\infty} A(r, u)\, A_0(u + t)\, du = 0, \qquad t > r,
where
(5.18) \quad A_0(t) = -\frac{1}{2\pi} \int_{-\infty}^{\infty} \bigl[S(k) - 1\bigr]\, e^{ikt}\, dk + \sum_{j=1}^{n} s_j\, e^{-\gamma_j t} \equiv -s(t) + \sum_{j=1}^{n} s_j\, e^{-\gamma_j t},

with s(t) the Fourier transform of S − 1, (3.90). Since S → 1 as k → ±∞,
s(t) is a well-defined bounded function and belongs to L^1(−∞, ∞), as we saw before.
The above equations are analogous to the Gelfand–Levitan equation. Each
one has its own advantages. It turns out that the Marchenko equation is more
useful in some cases, for instance, when dealing with the one-dimensional problem
on the full line (−∞, ∞). Here there is no origin, and we cannot use boundary
conditions at some finite point. We shall see more details in the next section.
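To see the mechanics of (5.17)–(5.18) at work, one can take input data with no continuum part, s(t) ≡ 0, and a single exponential term, with illustrative values γ = 1, s₁ = 2 chosen purely as an example (for genuine half-line scattering data, the consistency conditions of the Remark that follows must also hold). Since A₀(u + t) then factorizes, the ansatz A(r, t) = c(r) e^{−γt} reduces (5.17) to an elementary linear equation for c(r), and the potential follows from (5.3):

```python
import math

gamma, s1 = 1.0, 2.0  # single bound-state-type term in (5.18); illustrative values

def A0(t):
    # kernel (5.18) with s(t) = 0
    return s1*math.exp(-gamma*t)

def A(r, t):
    # separable solution of (5.17): A(r,t) = c(r) e^{-gamma t}
    w = 1.0 + s1/(2.0*gamma)*math.exp(-2.0*gamma*r)
    return -s1*math.exp(-gamma*(r + t))/w

def marchenko_residual(r, t, upper=40.0, n=4000):
    # A(r,t) + A0(r+t) + int_r^infinity A(r,u) A0(u+t) du (trapezoid, truncated)
    h = (upper - r)/n
    integral = sum((0.5 if i in (0, n) else 1.0)*A(r, r + i*h)*A0(r + i*h + t)
                   for i in range(n + 1))*h
    return A(r, t) + A0(r + t) + integral

def V(r, h=1e-4):
    # potential recovered through (5.3): V(r) = -2 d/dr A(r,r)
    return -2.0*(A(r + h, r + h) - A(r - h, r - h))/(2*h)
```

For these parameter values the recovered potential is V(r) = −2 sech² r, and the residual of (5.17) vanishes to quadrature accuracy.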
Remark. It can be shown that the Marchenko equation has a unique solution,
whatever the function s(t) ∈ L^1(−∞, ∞) and the positive numbers s_j and γ_j
may be, provided that the Levinson theorem is satisfied. Moreover, as we saw
before, according to (4.41), \int_0^{\infty} r|V|\, dr < \infty.
(6.2) \quad \psi_-(k, x) = \begin{cases} t_R(k)\, e^{ikx}, & x \to +\infty, \\ e^{ikx} + r_L(k)\, e^{-ikx}, & x \to -\infty. \end{cases}

Here t_R(k) is the transmission coefficient to the right, and r_L(k) is the reflection
coefficient to the left. In the second formula, e^{ikx} represents the continuous beam
of incoming particles coming from the left and propagating to the right.
Likewise, we have the solution ψ_+, which corresponds at t = −∞ to incoming
particles from the right, propagating to the left: e^{−ikx}. Part of it is reflected
back to the right at t = +∞, and another part goes through to the left and is
found at t = +∞ at x = −∞. The asymptotic behavior of ψ_+ is

(6.3) \quad \psi_+(k, x) = \begin{cases} t_L(k)\, e^{-ikx}, & x \to -\infty, \\ e^{-ikx} + r_R(k)\, e^{ikx}, & x \to +\infty. \end{cases}
Since we have two independent asymptotic regions x = ±∞, we are in fact dealing
with a kind of two-channel problem. The S-matrix is

(6.4) \quad S(k) = \begin{pmatrix} s_{11}(k) & s_{12}(k) \\ s_{21}(k) & s_{22}(k) \end{pmatrix} = \begin{pmatrix} t_R(k) & r_L(k) \\ r_R(k) & t_L(k) \end{pmatrix}.
When the potential is real (elastic scattering, i.e., conservation of the number
of particles, or of probability), the S-matrix is unitary. Also, because of
time-reversal invariance, one can show that

(6.5) \quad s_{11}(k) = t_R(k) = s_{22}(k) = t_L(k).

The transmission coefficients to the right and to the left are equal.
All of this will be shown below.
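The unitarity of S(k) and the equality t_R = t_L can be checked numerically for any real short-range potential: integrate −ψ″ + V(x)ψ = k²ψ across the support of V and match plane waves on both sides. In the sketch below, the smooth two-bump potential and the value k = 1.3 are illustrative choices, not taken from the text:

```python
import math, cmath

def V(x):
    # illustrative asymmetric potential, numerically of finite range inside [-1, 2]
    return 2.0*math.exp(-20*(x - 0.3)**2) - 1.0*math.exp(-20*(x - 0.7)**2)

def integrate(x0, x1, psi, dpsi, k, n=3000):
    # RK4 for psi'' = (V(x) - k^2) psi, from x0 to x1 (x1 < x0 integrates backwards)
    h = (x1 - x0)/n
    def rhs(x, p, dp):
        return dp, (V(x) - k*k)*p
    x = x0
    for _ in range(n):
        k1p, k1d = rhs(x, psi, dpsi)
        k2p, k2d = rhs(x + h/2, psi + h/2*k1p, dpsi + h/2*k1d)
        k3p, k3d = rhs(x + h/2, psi + h/2*k2p, dpsi + h/2*k2d)
        k4p, k4d = rhs(x + h, psi + h*k3p, dpsi + h*k3d)
        psi  += h/6*(k1p + 2*k2p + 2*k3p + k4p)
        dpsi += h/6*(k1d + 2*k2d + 2*k3d + k4d)
        x += h
    return psi, dpsi

def coefficients(k, a=-1.0, b=2.0):
    # return (t_R, r_L, t_L, r_R) by plane-wave matching outside [a, b]
    # psi_-: pure transmitted wave e^{ikx} for x >= b; integrate back to a,
    # where psi = alpha e^{ikx} + beta e^{-ikx}; t_R = 1/alpha, r_L = beta/alpha
    psi, dpsi = integrate(b, a, cmath.exp(1j*k*b), 1j*k*cmath.exp(1j*k*b), k)
    alpha = 0.5*(psi + dpsi/(1j*k))*cmath.exp(-1j*k*a)
    beta  = 0.5*(psi - dpsi/(1j*k))*cmath.exp(1j*k*a)
    tR, rL = 1.0/alpha, beta/alpha
    # psi_+: pure transmitted wave e^{-ikx} for x <= a; integrate forward to b
    psi, dpsi = integrate(a, b, cmath.exp(-1j*k*a), -1j*k*cmath.exp(-1j*k*a), k)
    alpha = 0.5*(psi - dpsi/(1j*k))*cmath.exp(1j*k*b)
    beta  = 0.5*(psi + dpsi/(1j*k))*cmath.exp(-1j*k*b)
    tL, rR = 1.0/alpha, beta/alpha
    return tR, rL, tL, rR
```

Because the potential is asymmetric, r_L and r_R differ in phase, yet |r_L| = |r_R| and t_L = t_R, exactly as (6.5) and the unitarity of (6.4) require.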
The solutions ψ_+ and ψ_− are the physical solutions and are similar to the
physical solution of the radial case, (3.1) and (3.8). We can now define the
Jost solutions f_±(k, x) of (6.1) by precise asymptotic conditions at x = ±∞:

(6.6a) \quad \lim_{x \to +\infty} e^{-ikx}\, f_+(k, x) = 1,

(6.6b) \quad \lim_{x \to -\infty} e^{ikx}\, f_-(k, x) = 1,

and similar conditions for the derivatives, as in (3.49).
In the radial case, we assumed that the potential satisfies (3.4). In the
present case, since the origin x = 0 does not play any important role, it
turns out that the condition which guarantees the existence of all these
solutions, and therefore a good scattering theory, is

(6.7) \quad \int_{-\infty}^{\infty} (1 + |x|)\, |V(x)|\, dx < \infty.

The Jost solution f_+(k, x) is then, for each fixed x, analytic in Im k > 0.
There it satisfies the bound

(6.9) \quad |f_+(k, x) - e^{ikx}| \le \frac{C\, e^{-(\operatorname{Im} k)\, x}}{1 + |k|} \int_x^{\infty} (1 + |u|)\, |V(u)|\, du.

The same is true for f_−, again in Im k ≥ 0, where we have the bound

(6.10) \quad |f_-(k, x) - e^{-ikx}| \le \frac{C\, e^{(\operatorname{Im} k)\, x}}{1 + |k|} \int_{-\infty}^{x} (1 + |u|)\, |V(u)|\, du.
From the above bounds, again using the Titchmarsh theorem (Theorem
4.3.6), we obtain, in complete agreement with the radial case, the integral
representations

(6.11a) \quad f_+(k, x) = e^{ikx} + \int_x^{\infty} A_+(x, y)\, e^{iky}\, dy,
(6.11b) \quad f_-(k, x) = e^{-ikx} + \int_{-\infty}^{x} A_-(x, y)\, e^{-iky}\, dy,

valid in Im k ≥ 0, where, for each fixed value of x, A_+(x, ·) is square integrable
in y on (x, ∞) and A_−(x, ·) on (−∞, x). The first representation is more useful
for x > 0 and the second for x < 0.
If we now replace (6.11a) and (6.11b) in (6.8a) and (6.8b) and take the
inverse Fourier transform, we obtain integral equations for A_+ and A_− quite
similar to (5.4), and we can again deduce bounds similar to (5.5). Writing that
f_+ and f_− are solutions of the Schrödinger equation now leads to
(6.12) \quad \left( \frac{\partial^2}{\partial x^2} - \frac{\partial^2}{\partial y^2} \right) A_\pm(x, y) = V(x)\, A_\pm(x, y),

together with the boundary conditions

(6.13) \quad A_+(x, x) = \frac{1}{2} \int_x^{\infty} V(t)\, dt,

(6.14) \quad A_-(x, x) = \frac{1}{2} \int_{-\infty}^{x} V(t)\, dt,

and

(6.15) \quad A_\pm(x, \pm\infty) = 0.
The above partial differential equation can be analyzed as in Chapter 3.
Together with the conditions (6.13)–(6.15), we can show again that there exists
a unique solution A_±(x, y) which, when used in (6.11a) or (6.11b), gives us
the appropriate solution of the Schrödinger equation.
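For the exactly solvable potential V(x) = −2γ² sech²(γx), the Jost solution is known in closed form, f₊(k, x) = e^{ikx}(k + iγ tanh γx)/(k + iγ), and a short calculation (done here for illustration; the kernel formula is not quoted from the text) gives A₊(x, y) = γ(tanh γx − 1) e^{γ(x−y)}. A numerical check of the boundary condition (6.13) and of the representation (6.11a):

```python
import math, cmath

g = 1.0  # gamma in V(x) = -2 g^2 sech^2(g x)

def A_plus(x, y):
    # candidate kernel of (6.11a) for this potential (illustrative closed form)
    return g*(math.tanh(g*x) - 1.0)*math.exp(g*(x - y))

def f_plus(k, x):
    # closed-form Jost solution for the sech^2 potential
    return cmath.exp(1j*k*x)*(k + 1j*g*math.tanh(g*x))/(k + 1j*g)

def check_6_13(x):
    # (6.13): A_+(x,x) = (1/2) int_x^infinity V(t) dt = -g (1 - tanh g x)
    return A_plus(x, x) - (-g*(1.0 - math.tanh(g*x)))

def check_6_11a(k, x, L=40.0, n=40000):
    # residual of (6.11a): e^{ikx} + int_x^infinity A_+(x,y) e^{iky} dy - f_+(k,x)
    h = L/n
    integral = sum((0.5 if i in (0, n) else 1.0)
                   *A_plus(x, x + i*h)*cmath.exp(1j*k*(x + i*h))
                   for i in range(n + 1))*h
    return cmath.exp(1j*k*x) + integral - f_plus(k, x)
```

The exponential decay of A₊(x, y) in y − x, visible in the closed form, is exactly what makes the truncated quadrature in (6.11a) converge so quickly.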
We must now find the relations between the solutions f_± and the physical
solutions ψ_±. First of all, we must notice that from f_+ alone, as was done for
the Jost solutions in the radial case, we can obtain a complete set of solutions
of (6.1) by taking f_+(±k, x). Indeed, from the asymptotic condition (6.6a), we
get the Wronskian

(6.16) \quad W[f_+(k, x), f_+(-k, x)] = -2ik,

which is different from zero for k ≠ 0. Likewise, for f_− we get
(6.17) \quad W[f_-(k, x), f_-(-k, x)] = 2ik.

To these relations, we must also add

(6.18) \quad f_+(-k, x) = \bigl(f_+(k, x)\bigr)^*, \qquad f_-(-k, x) = \bigl(f_-(k, x)\bigr)^*, \qquad k \text{ real},

which are obvious from (6.8a) and (6.8b), since we assume the potential V to be
real.
From the definition of the physical solutions ψ_± and their asymptotic
behaviors at x = ±∞, we have

(6.19) \quad \psi_-(k, x) = s_{11}(k)\, f_+(k, x) = f_-(-k, x) + s_{12}(k)\, f_-(k, x)

and

(6.20) \quad \psi_+(k, x) = s_{22}(k)\, f_-(k, x) = f_+(-k, x) + s_{21}(k)\, f_+(k, x).
Using the Wronskians (6.16) and (6.17), we obtain

(6.21) \quad \frac{2ik}{s_{11}(k)} = W[f_-(k, x), f_+(k, x)],
(6.22) 2iks12