
Copyright 2000 Arne Bergström, B&E Scientific Ltd, and Claudia Eberlein, University of Sussex.

Worksheet 5 - Solution of the Schrödinger equation for a general potential

Instructions for use of this worksheet

This Maple worksheet has been written for Maple 6.01 and has been tested both under Windows 98 and under Linux. To run the worksheet you just need to place the cursor anywhere in the first command group below (printed in red and starting with restart) and press <Enter> to progress from one command group to the next. If there are animations, you will be given instructions on how to play them in the text directly above them.

If you are not familiar with Maple then read the text (printed in black, like this line), ignore the Maple commands (printed in red), and just look at the output of Maple's calculations (displayed plots or formulae printed in blue). If you know Maple then you are encouraged to read also the Maple commands and try out what happens when you change them. However, you are still advised first to go through the whole worksheet without changing anything, so that you understand the aim of the calculation.

Start by putting the cursor anywhere in the command group listed below, and then press <Enter>.

> restart: with(plots):

interface(showassumed=0,imaginaryunit=i):

assume(x,real): assume(t,real):

assume(m>0): assume(h_>0): assume(E<0):

A. General properties of wave functions

For a state with a definite energy E, the time dependence of the wave function separates off as a pure phase factor:

> Eq(A1):=psi(x,t)=exp(-i*t*E/h_)*psi(x);

The time-independent part ψ(x) is called the stationary wave function or energy eigenfunction, and it satisfies the stationary Schrödinger equation.

> Eq(A2):=E*psi(x)=subs(psi(x,t)=psi(x),rhs(Eq(A1)));

In section B we shall discuss why ψ is called an "eigenfunction" and what is meant by this. Now we want to ascertain a few general properties of ψ. We already know that ψ has to be normalisable, which means that the integral over its modulus squared must be finite.

> int(abs(psi(x))^2,x=-infinity..infinity);

(i) ψ has to be continuous for all x.

If ψ had a jump somewhere, its first derivative would have an infinitely high and infinitely sharp spike at that point. The plot below illustrates this with a smoothed step (an arctan) and its derivative: the steeper the step is made, the higher and narrower the peak of the derivative becomes.

> subs( steepness=10 ,arctan(steepness*(x-1))/Pi+0.5): plot({%,diff(%,x)},x=0..2);

Much the same applies to the first derivative of ψ.

(ii) The first derivative of ψ has to be continuous for all x at which the potential V(x) is finite.

If the first derivative of ψ were discontinuous, then the second derivative would contain such an infinitely high and infinitely sharp spike as we have just discussed. Such a spike appearing on the right-hand side of Eq(A2) could not be balanced by anything on the left-hand side, and thus the Schrödinger equation could not be satisfied.
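The spike behaviour described here can also be seen numerically. The following sketch (Python, an illustrative aside that is not part of the original Maple worksheet) differentiates the smoothed step arctan(s·(x-1))/π + 0.5 for increasing steepness s; the peak of the derivative sits at the step and grows like s/π, so in the limit of a true discontinuity the derivative becomes an infinitely high, infinitely sharp spike.

```python
import math

def step(x, s):
    # smoothed step, rising from 0 to 1 around x = 1; steepness parameter s
    return math.atan(s * (x - 1)) / math.pi + 0.5

def deriv(x, s, h=1e-6):
    # first derivative by central finite differences
    return (step(x + h, s) - step(x - h, s)) / (2 * h)

# the analytic peak of the derivative is s/pi, attained at x = 1
for s in (10, 100, 1000):
    peak = max(deriv(1 + 0.001 * k, s) for k in range(-2000, 2001))
    print(s, peak, s / math.pi)   # peak tracks s/pi and grows with s
```

The same picture the Maple plot shows: sharpening the step by a factor of ten makes the derivative's spike ten times taller and correspondingly narrower.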

However, in a few very exotic cases the spike from the second derivative of ψ can be cancelled by a likewise infinite spike in the potential V(x), and the Schrödinger equation can nevertheless be satisfied. We have in fact already dealt with an example of this, namely the "box" with infinitely high walls which we studied in Worksheet 3. The wave function of the ground state in the box (blue line in the graph below) is one lobe of a sine function between the walls, and zero outside the box. This means that the first derivative of the wave function (green line) is a cosine inside and zero outside the box, and thus has a discontinuity at the walls. However, since the walls are infinitely high, V(x) is infinite at the walls and can be made to cancel the divergence brought in by the second derivative of ψ on the right-hand side of the Schrödinger equation, though in practice one often doesn't consider the problem at all, but just restricts oneself to the inside of the box.

> pa1:=plot(6-6*Heaviside(4-x)*Heaviside(x),x=-0.5..4.5,V=-2.5..5,tickmarks=[[4=`L`],[0=`0`]],labels=[`x `,V],colour=red,thickness=3):

pa2:=plot(piecewise(x>0 and x<4,3*sin(Pi*x/4),0.05),x=-0.5..4.5,colour=blue,thickness=3): pa3:=plot(piecewise(x>0 and x<4,(3*Pi/4)*cos(Pi*x/4),-0.05),x=-0.5..4.5,colour=green,thickness=3): display(pa1,pa2,pa3);

One of the very rare examples where one does have to consider the discontinuity in the first derivative of ψ, because it has physical significance, is a potential V(x) with an infinite spike (red line in the plot below). Such a potential has exactly one bound state (that is, a state that is localised - see section B), and the wave function (blue line) has a discontinuity in its first derivative at the position of the spike. For anyone who wonders why one would want to use such a weird kind of potential: the thing is that, once one has got the hang of dealing with such spikes and their infinities, a spike potential is an easy and very useful model for technically much more complicated potentials.

> pa4:=plot(-0.001/(x^2+0.03^2),x=-5..5,thickness=2): pa5:=plot(piecewise(x<0,exp(x),exp(-x)),x=-5..5,colour=blue): display(pa4,pa5);

In summary, for finite potentials V(x) both the wave function ψ and its first derivative have to be continuous for all x. Note that this is true whenever V(x) is finite; it does not depend on whether V(x) is continuous or not. All that happens wherever the potential is discontinuous is that the second derivative of ψ is discontinuous, but this does not affect the continuity of ψ and its first derivative.

In fact, discontinuous potentials are the most common examples of potentials in quantum mechanics, because, as we shall show in section C, for them one can solve the Schrödinger equation incredibly easily by solving it in separate regions and then just piecing the solutions together at the boundaries between neighbouring regions.

B. The meaning of eigenvalues

If we want to know what a quantum particle does in a particular potential V(x), we have to solve the Schrödinger equation Eq(A2).

> Eq(A2);

This is a second-order ordinary differential equation. Depending on the potential it can be easy or difficult to solve. Besides being a differential equation, the stationary Schrödinger equation is also an eigenvalue problem. Let's go into this a little. As we explained at the end of Worksheet 1, the right-hand side of the Schrödinger equation can also be understood as an operator H applied to the wave function. H is called the Hamilton operator.

> Eq(B1):=H=-h_^2/2/m*Diff(` `,x$2)+V(x);

In this notation the stationary Schrdinger equation is simply

> Eq(B2):=E*psi(x)=H*psi(x);

H on the right-hand side is called an operator because it operates on the wave function ψ, that is to say, it does something to ψ: in this case it differentiates it twice in one term and multiplies it by V(x) in the other term. However, the left-hand side of the Schrödinger equation Eq(B2) is just ψ times a number. So how can applying H to ψ on the right-hand side be the same as a number E times ψ on the left-hand side? Well, for most functions the left- and right-hand sides of Eq(B2) won't be the same; only very special functions will satisfy the equation.

The German mathematicians who first considered these types of problems in detail chose to call such functions the "proper functions" of the operator equation, and the constant of proportionality E on the left-hand side they called the "proper value". The German words they used were "Eigenfunktion" and "Eigenwert". In English only the second half of each of those words got translated, and hence we now call a function that satisfies Eq(B2) an eigenfunction of the operator H, and the constant E we call an eigenvalue of H. These eigenvalues are the allowed energy levels of the system, e.g. the energy levels in an atom. Each eigenfunction yields a specific eigenvalue on the right-hand side of Eq(B2), though it can happen that two (or more) eigenfunctions have the same eigenvalue. So, a system has a set of energy eigenvalues E, which are its allowed energy levels, and each eigenvalue belongs to one or more eigenfunctions ψ_E.

> Eq(B3):=E*psi[E](x)=H*psi[E](x);

Let's look at an example. Suppose we have a Lorentzian peak as wave function. It satisfies all the requirements of a good wave function: it falls off towards infinity, it is normalised, and both it and its first derivative are continuous for all x.

> Eq(B4):=psi(x)=sqrt(2/Pi)/(1+x^2); plot(sqrt(2/Pi)/(1+`x `^2),`x `=-5..5);
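The normalisation claim is easy to cross-check: the integral of |ψ|² for this Lorentzian is exactly 1. A quick numerical confirmation in Python (an illustrative aside, not part of the original worksheet):

```python
import math

def psi_sq(x):
    # |psi(x)|^2 for psi(x) = sqrt(2/Pi)/(1+x^2)
    return (2 / math.pi) / (1 + x**2) ** 2

# plain trapezoidal rule on [-100, 100]; the tail beyond falls off like x^-4
# and contributes a negligible amount
a, b, n = -100.0, 100.0, 400000
h = (b - a) / n
total = 0.5 * (psi_sq(a) + psi_sq(b)) + sum(psi_sq(a + k * h) for k in range(1, n))
total *= h
print(total)   # close to 1
```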

When we apply the first part of the Hamilton operator H, Eq(B1), to this wave function, we see that what comes out on the right-hand side looks quite different from the original Lorentzian peak.

> Eq(B5):=-h_^2/2/m*diff(psi(x),x$2)=-h_^2/2/m*sqrt(2/Pi)*simplify(diff(1/(1+x^2),x$2));

plot(subs(m=1,h_=1,rhs(%)),x=-5..5);

However, to be able to test whether the ψ in Eq(B4) is an eigenfunction, we have to know the whole Hamiltonian H. Let's consider a quadratic potential.

> V(x)=`x `^2/2; plot(rhs(%),`x `=-5..5);

Now we apply the complete Hamiltonian as written on the rhs of Eq(B1) to our choice of wave function and plot the result (red line) jointly with the wave function (green line).

> rhs(Eq(B1))*psi(x)=rhs(Eq(B5))+1/2*x^2*rhs(Eq(B4)); plot({subs(m=1,h_=1,rhs(%)),rhs(Eq(B4))},x=-5..5,colour=[red,green]);

Hψ is clearly not proportional to ψ, and hence the ψ of Eq(B4) is not an eigenfunction of the Hamiltonian with a quadratic potential. Well, it wasn't difficult to find a function that is not an eigenfunction - because most functions are not.

If we had chosen a Gaussian wave function instead of the Lorentzian, we would have found that Hψ is indeed proportional to ψ - but only if we had picked the right coefficient in the exponent.

> psi(x)=N*exp(-sqrt(m)*x^2/2/h_); subs(V(x)=x^2/2,rhs(Eq(B1)))*psi(x)=simplify(-h_^2/2/m*diff(rhs(%),x$2)+x^2/2*rhs(%));

So, we have seen that only in very few cases does the application of the Hamilton operator H to the wave function ψ give something that is proportional to ψ, and hence only a few special choices of functions turn out to be eigenfunctions of H. So it is no good trying to guess an eigenfunction; one finds eigenfunctions by solving the differential equation Eq(A2), the stationary Schrödinger equation. Section C demonstrates how this is done for the example of a square-well potential.
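The eigenfunction test above can also be done purely numerically: apply H (with m = h_ = 1 and V = x²/2) to a trial function by finite differences and look at the ratio Hψ/ψ. For the Gaussian e^(-x²/2) the ratio is the constant 1/2 (the ground-state energy of this oscillator), while for the Lorentzian it varies with x. A Python sketch (illustrative, not part of the original worksheet):

```python
import math

def H_over_psi(psi, x, h=1e-4):
    # (H psi)(x) / psi(x) with H = -1/2 d^2/dx^2 + x^2/2  (units m = h_ = 1)
    d2 = (psi(x + h) - 2 * psi(x) + psi(x - h)) / h**2
    return (-0.5 * d2 + 0.5 * x**2 * psi(x)) / psi(x)

gauss = lambda x: math.exp(-x**2 / 2)                     # eigenfunction
lorentz = lambda x: math.sqrt(2 / math.pi) / (1 + x**2)   # not an eigenfunction

print([H_over_psi(gauss, x) for x in (0.0, 0.7, 1.5)])    # all close to 0.5
print([H_over_psi(lorentz, x) for x in (0.0, 0.7, 1.5)])  # clearly not constant
```

A constant ratio means Hψ = Eψ with that constant as the eigenvalue; a varying ratio means the trial function fails the eigenvalue equation.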

Finally, let's briefly discuss what kind of energy eigenstates there are. Consider, for example, the following potential (red line) and three particles with different energies (green, brown, and blue lines), which are representative of the three different types of energy eigenstates one can get.

> d1:=plot((4*x^2-2)*exp(-x^2),x=-4..4): d2a:=plot(-1.0,x=-4..4,colour=brown): d2b:=plot(1.0,x=-4..4,colour=blue): d2c:=plot(0.5,x=-4..4,colour=green): display(d1,d2a,d2b,d2c);

The three types are:

(i) Bound state (brown line)

(Energy E lies between -2, the bottom of the potential, and 0.)

The kinetic energy E-V(x) is positive only in a finite region (here, for E=-1.0, between about x=-0.5 and x=0.5). A classical particle could not move at all outside this region. A quantum particle has a certain probability to be found outside the classical region, but this probability diminishes exponentially away from the classically allowed region. Hence the particle cannot escape the vicinity of the classically allowed region; it is bound. Bound states have discrete energy values. That is to say, only certain energies occur, and all others do not. (The example in Section C will deal with bound states and show why the energy can take only certain allowed values.)

(ii) Travelling state (blue line)

(Energy E lies anywhere above 0.9, the top of the potential hump.)

The kinetic energy E-V(x) is positive for all x, and hence the particle can move everywhere. The energy E is a continuous variable, just like for a classical particle. Examples of travelling states were the plane-wave states of a free particle that we looked at in Worksheet 4.

(iii) Tunnelling states (green line)

(Energy E lies between 0 and 0.9, the top of the potential hump.)

The kinetic energy E-V(x) is positive in some regions and negative in others, but at least one of the classically allowed regions stretches all the way to +∞ or -∞, and one or more of the classically allowed regions are enclosed by classically forbidden regions, so that superficially it looks as if the particle could be held in a bound state by the potential. In the example plotted above, a classical particle with an energy E between 0 and 0.9 could not cross any of the barriers. It would stay either to the left of x=-1.2, or between x=-1.2 and x=1.2, or to the right of x=1.2. However, quantum particles behave differently: they can tunnel through the barriers, though the tunnelling is slower the higher and thicker the barrier is. If a particle is placed between the two barriers in the above example, it will stay there in a quasi-bound state whose life-time depends on the barriers, but it is certain to tunnel out eventually. The energy E for tunnelling states is continuous, just as for travelling states, but quasi-bound states live longer at some energies than at others.

C. Example: Bound states in a square well

As an example we want to consider a square-well potential. This may not seem a particularly realistic choice, but it keeps the mathematics relatively simple. (Bear in mind that we have to solve the Schrödinger equation Eq(A2), and that that is a second-order differential equation. If the potential V(x) is anything but a very simple function, then one cannot solve that equation analytically at all, but only numerically.) We also want to restrict ourselves to bound states. Travelling states we will deal with in Worksheet 6.

So, we choose a potential which equals a constant -V in a region of length L, between x=-L/2 and x=L/2, say, and which is zero outside this region. As we discussed in Section A, the potential can have discontinuities, i.e. jump suddenly from one value to another, without any dire consequences for the wave function or for the quantum mechanics of the particle.

> Eq(C1):=V(x)=piecewise(x<-L/2,0,x<L/2,-V,0);

In regions I (x<-L/2) and III (x>L/2), outside the well, the potential vanishes, and the stationary Schrödinger equation Eq(A2) reads:

> Eq(C2):=subs(psi(x)=psi[`I,III`](x),V(x)=0,Eq(A2));

Since the energy E of a bound state is negative, the solutions of this differential equation are real exponentials. (Verify for yourself that exponentials solve it.)

> dsolve(Eq(C2),psi[`I,III`](x));

But we know that wave functions have to fall off towards infinity because otherwise they cannot be normalised. Hence in region I we have only exponentials with positive arguments, and in region III only exponentials with negative arguments.

> Eq(C3):=psi[I](x)=C*exp(sqrt(-2*m*E)*x/h_); Eq(C4):=psi[III](x)=D*exp(-sqrt(-2*m*E)*x/h_);
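As a side check (not part of the original worksheet), one can verify numerically that such a tail really solves the Schrödinger equation in the outer regions: with ψ(x) = e^(κx) and κ = sqrt(-2mE)/h_, one has -h_²/(2m)·ψ'' = E·ψ. The sample values m = 1, h_ = 1, E = -0.5 below are arbitrary choices for illustration:

```python
import math

m, hbar, E = 1.0, 1.0, -0.5               # sample values, chosen for illustration
kappa = math.sqrt(-2 * m * E) / hbar

psi = lambda x: math.exp(kappa * x)       # region-I tail, decaying towards x -> -infinity

def lhs(x, h=1e-5):
    # -hbar^2/(2m) * psi''(x), second derivative by central finite differences
    d2 = (psi(x + h) - 2 * psi(x) + psi(x - h)) / h**2
    return -hbar**2 / (2 * m) * d2

for x in (-3.0, -2.0, -1.0):
    print(lhs(x), E * psi(x))             # the two columns agree
```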

So, the wave functions outside the well look something like this:

> d3:=plot(exp(x),x=-6..-3,0..0.08,tickmarks=[[3=`L/2 `,-3=`-L/2`], [0=`0`]]): d4:=plot(exp(-x),x=3..6): display(d3,d4);

Inside the well, the potential V(x) equals the negative constant -V. Hence the Schrdinger equation turns into:

> Eq(C5):=subs(psi(x)=psi[`II`](x),V(x)=-V,Eq(A2));

Since E+V is positive, the solutions of this differential equation are oscillatory. (Again, verify for yourself that sine and cosine are solutions.)

> dsolve(Eq(C5),psi[`II`](x));

This time we cannot rule out any of the solutions on physical grounds, but have to consider both. We'll first look at the cosine solution, and then at the sine. However, physicists tend to have a fancier method of labelling them than just calling them "sine and cosine solutions"; they label them by parity. A solution of even parity is one for which the wave function at positive x (red line) is the mirror-image of the wave function at negative x (green line), i.e. for which ψ(-x)=ψ(x). The cosine is an even-parity solution.

> Eq(C6):=psi[`II,even`](x)=A*cos(sqrt(2*(E+V)*m)*x/h_); d5:=plot(cos(x),x=0..4,tickmarks=[[],[]]): d6:=plot(cos(x),x=-4..0,tickmarks=[[],[]],colour=green): display(d5,d6);

Now we have to sew together the solutions in the three regions: Eq(C3) in region I, Eq(C6) in region II, and Eq(C4) in region III. First, we have to demand that the wave function is continuous at the joining points. The continuity of ψ at x=-L/2 gives Eq(C7a), and at x=L/2 it gives Eq(C7b).

> rhs(Eq(C3))=rhs(Eq(C6)): Eq(C7a):=subs(x=-L/2,%); rhs(Eq(C4))=rhs(Eq(C6)): Eq(C7b):=subs(x=L/2,%);

Second, we have to require the first derivative of ψ to be continuous, which gives Eq(C7c) at x=-L/2 and Eq(C7d) at x=L/2.

> diff(rhs(Eq(C3)),x)=diff(rhs(Eq(C6)),x): Eq(C7c):=subs(x=-L/2,%); diff(rhs(Eq(C4)),x)=diff(rhs(Eq(C6)),x): Eq(C7d):=subs(x=L/2,%);

The two equations in each pair are identical, except for the coefficients C and D. Hence both pairs dictate that we must have C=D.

In order to find an equation for the allowed energies, we divide Eq(C7c) by Eq(C7a), or equivalently Eq(C7d) by Eq(C7b), which gives the same result.

> Eq(C8):=lhs(Eq(C7c))/lhs(Eq(C7a))=rhs(Eq(C7c))/rhs(Eq(C7a));

Eq(C8) is a nonlinear equation for the energy E, and in principle we could just take it as it is and solve it numerically to find all possible E between -V and 0. However, we can figure out analytically how many solutions there are and roughly where they are. To this end we make the equation look less daunting by introducing two abbreviations:

> Eq(C9):=k=sqrt(2*(E+V)*m)/h_; Eq(C10):=kappa=sqrt(2*(-E)*m)/h_;

In terms of these abbreviations Eq(C8) reads simply:

> Eq(C11):=kappa=tan(k*L/2)*k;

However, Eq(C8) had only one unknown, namely E, but because we have introduced two abbreviations that both contain E, Eq(C11) now contains two unknowns, k and κ. This means that Eq(C11) alone is not enough; we also have to take into account the interdependence between k and κ. If we take the squares of Eq(C9) and of Eq(C10) and add them, we find a convenient relation that connects k and κ.

> Eq(C12):=kappa^2+k^2=simplify(rhs(Eq(C9))^2+rhs(Eq(C10))^2);

So, we have to solve Eq(C11) and Eq(C12) together. We can do that graphically. Let's just plot κ (this is the Greek letter "kappa") as a function of k for both Eq(C11) (red lines) and Eq(C12) (blue lines). Any intersections are simultaneous solutions of both equations. The function on the right-hand side of Eq(C11) is very similar to a tan, and Eq(C12) is just the equation for a circle (κ²+k²=radius², remember?). The radius of the circle depends on how deep the potential well is: large V gives a large circle, small V gives a small circle. Below are plots of Eq(C12) for three different radii, i.e. for three different values of the constant V. For the smallest V, we get only one intersection with a red curve, i.e. only one possible energy eigenvalue; for larger V, we get two or three or more. Hence shallow wells may have only one bound state of even parity; if they are a little deeper, they have two or three or more bound states depending on how deep they are (and hence how large the circle gets in the plot below).

> t1:=plot(k*tan(k),k=0..Pi/2,kappa=0..8,scaling=constrained,labels=[`k `,`kappa`]):

t2:=plot(k*tan(k),k=Pi/2..3*Pi/2,kappa=0..8,scaling=constrained):

t3:=plot(k*tan(k),k=3*Pi/2..5*Pi/2,kappa=0..8,scaling=constrained):

t4:=plot(sqrt(4-k^2),k=0..2,color=blue):

t5:=plot(sqrt(16-k^2),k=0..4,color=blue):

t6:=plot(sqrt(49-k^2),k=0..7,color=blue):

display(t1,t2,t3,t4,t5,t6);
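The graphical count can be cross-checked numerically. The sketch below (Python, an illustrative aside, not part of the original worksheet) takes L/2 = 1, as the plotted curves k·tan(k) implicitly do, and counts the sign changes of k·tan(k) − sqrt(r² − k²) on each rising tan branch; the radii 2, 4 and 7 from the plot give 1, 2 and 3 even-parity intersections respectively.

```python
import math

def count_even(r):
    """Count intersections of kappa = k*tan(k) with the circle kappa = sqrt(r^2 - k^2)."""
    g = lambda k: k * math.tan(k) - math.sqrt(max(r * r - k * k, 0.0))
    count, n = 0, 0
    while n * math.pi < r:
        lo = n * math.pi + 1e-9                       # branch where k*tan(k) rises from 0
        hi = min(n * math.pi + math.pi / 2 - 2e-9, r - 1e-9)
        if lo < hi and g(lo) < 0 < g(hi):             # sign change -> one intersection
            count += 1
        n += 1
    return count

print([count_even(r) for r in (2, 4, 7)])   # -> [1, 2, 3]
```

Each intersection can then be pinned down by bisection on the same branch if the actual eigenvalues are wanted.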

Once one has determined the possible energy eigenvalues, one can use any of the Eq(C7a-d) to calculate the ratios C/A and D/A of the normalisation factors. Having expressed C and D in terms of A one would tune A so as to make the overall wave function normalised to 1. Plotting the wave functions of the lowest two bound states (in a well that is deep enough to accommodate at least two bound states), we get something like this:

> s1:=plot(cos(1.2524*x),x=-1..1,tickmarks=[[1=`L/2 `,-1=`-L/2`], [0=`0`]]): s4:=plot(cos(3.5953*x),x=-1..1,colour=green): s2:=plot(0.3131*exp(3.7989*(x+1)),x=-2..-1): s5:=plot(-0.8988*exp(1.7532*(x+1)),x=-2..-1,colour=green): s3:=plot(0.3131*exp(-3.7989*(x-1)),x=1..2): s6:=plot(-0.8988*exp(-1.7532*(x-1)),x=1..2,colour=green): display(s1,s2,s3,s4,s5,s6);
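As an aside (not part of the original worksheet), the numerical values used in the plot commands above can be checked in Python: with L/2 = 1, each k should satisfy Eq(C11), κ = k·tan(k); the pair (k, κ) should lie on a circle Eq(C12), here of radius 4; and the amplitude of the exponential tail should equal |cos(k)|, the value of the cosine at the well edge.

```python
import math

# (k, kappa) pairs quoted in the plot commands above (L/2 = 1)
states = [(1.2524, 3.7989), (3.5953, 1.7532)]

for k, kappa in states:
    print(round(k * math.tan(k), 3),             # should reproduce kappa   (Eq C11)
          round(math.hypot(k, kappa), 3),        # should be the radius, 4  (Eq C12)
          round(abs(math.cos(k)), 4))            # tail amplitude in the plot
```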

Now we just have to play the same game for the sine solution in region II. Sine solutions are called odd-parity solutions, because for them the wave function at positive x (red line) and at negative x (green line) are upside-down images of each other, i.e. ψ(-x)=-ψ(x). [So, mirror-image means even parity, and upside-down image means odd parity.]

> Eq(C13):=psi[`II,odd`] (x)=B*sin(sqrt(2*(E+V)*m)*x/h_);d7:=plot(sin(x),x=0..4,tickmarks=[[], []]): d8:=plot(sin(x),x=-4..0,tickmarks=[[],[]],colour=green): display(d7,d8);

We require ψ to be continuous at -L/2 and L/2, which gives Eq(C14a) and Eq(C14b).

> rhs(Eq(C3))=rhs(Eq(C13)): Eq(C14a):=subs(x=-L/2,%); rhs(Eq(C4))=rhs(Eq(C13)): Eq(C14b):=subs(x=L/2,%);

The condition that the first derivative of ψ must be continuous at -L/2 and L/2 gives Eq(C14c) and Eq(C14d).

> diff(rhs(Eq(C3)),x)=diff(rhs(Eq(C13)),x): Eq(C14c):=subs(x=-L/2,%); diff(rhs(Eq(C4)),x)=diff(rhs(Eq(C13)),x): Eq(C14d):=subs(x=L/2,%);

Again, we see that in each case the two equations are essentially the same except for the coefficients C and D, and that we must have C=-D.

> Eq(C15):=lhs(Eq(C14c))/lhs(Eq(C14a))=rhs(Eq(C14c))/rhs(Eq(C14a));

As before, if we use our abbreviations k and κ from Eq(C9) and Eq(C10), we can make the above equation look much slimmer, but this time we get a cot instead of a tan.

> Eq(C16):=kappa=-k*cot(k*L/2);

We have to look for common solutions of Eq(C16) and Eq(C12), and we do this graphically again, by plotting κ as a function of k. Eq(C12) gives a circle whose radius depends on V (blue lines), as before. Eq(C16) gives something very similar to a cot (red lines). The number of intersections, and hence the number of odd-parity bound states, again depends on the radius of the circle, which according to Eq(C12) is proportional to √V. Interestingly, it can now happen that there are no intersections at all for very small radii, which is to say, for very shallow potential wells.

> t7:=plot(-x*cot(x),x=0..Pi,kappa=0..8,scaling=constrained,labels=[`x `,`kappa`]):

t8:=plot(-x*cot(x),x=Pi..2*Pi,kappa=0..8,scaling=constrained):

t9:=plot(-x*cot(x),x=2*Pi..3*Pi,kappa=0..8,scaling=constrained):

t10:=plot(sqrt(1-x^2),x=0..2,color=blue):

t11:=plot(sqrt(16-x^2),x=0..4,color=blue):

t12:=plot(sqrt(49-x^2),x=0..7,color=blue):

display(t7,t8,t9,t10,t11,t12);
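The odd-parity count can be cross-checked the same way as the even-parity one (Python, an illustrative aside, not part of the original worksheet; L/2 = 1 as in the plots). The radii 1, 4 and 7 from the plot give 0, 1 and 2 odd-parity intersections; the smallest circle misses the first branch of -k·cot(k) entirely, which is the "no bound state of odd parity in a very shallow well" case.

```python
import math

def count_odd(r):
    """Count intersections of kappa = -k*cot(k) with the circle kappa = sqrt(r^2 - k^2)."""
    g = lambda k: -k / math.tan(k) - math.sqrt(max(r * r - k * k, 0.0))
    count, n = 0, 0
    while n * math.pi + math.pi / 2 < r:
        lo = n * math.pi + math.pi / 2 + 1e-9         # -k*cot(k) rises from 0 here
        hi = min((n + 1) * math.pi - 2e-9, r - 1e-9)
        if lo < hi and g(lo) < 0 < g(hi):             # sign change -> one intersection
            count += 1
        n += 1
    return count

print([count_odd(r) for r in (1, 4, 7)])   # -> [0, 1, 2]
```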

Below is a plot of what the wave functions of the lowest two odd-parity bound states would look like, provided, of course, that the potential well is deep enough to accommodate at least two.

> t1:=plot(sin(2.6788*x),x=-1..1,tickmarks=[[1=`L/2 `,-1=`-L/2`], [0=`0`]]): t4:=plot(sin(5.226*x),x=-1..1,colour=green): t2:=plot(-0.4465*exp(5.3688*(x+1)),x=-2..-1): t5:=plot(0.871*exp(2.9478*(x+1)),x=-2..-1,colour=green): t3:=plot(0.4465*exp(-5.3688*(x-1)),x=1..2): t6:=plot(-0.871*exp(-2.9478*(x-1)),x=1..2,colour=green): display(t1,t2,t3,t4,t5,t6);

In summary, the number of bound states in a square well depends on the depth of the well. Very shallow wells may have only one bound state of even parity and none of odd parity.


