
    Notes for a Course in

    Statistical Thermodynamics

    Third Edition

    Paul J. Gans

    Department of Chemistry

    New York University

    Version 6.12

    October 27, 2008


    Credits and Notices

    This material was created by Paul J. Gans as both text and notes for the lectures in his course G25.2600, Statistical Thermodynamics, given at New York University starting in 2003.

    The first version of the text was written in the summer of 2003 and first used in class in the Fall of

    2003. Since then several versions have been produced. The first versions were titled Computational Statistical Thermodynamics. Since the fourth version these notes are simply titled Notes for a Course in Statistical Thermodynamics, as the detailed computational aspects have been dropped as impractical in a one-semester course. The new title is felt to more accurately reflect the subject matter and the order of presentation.

    This version was typeset using the LaTeX typesetting system on a SuSE Linux-based computer using pdfTeX Version 3.141592-1.40.3 (Web2C 7.5.6) and LaTeX2e. This program produces a PDF file directly from the LaTeX source.

    Contents copyright © 2003-2008 by Paul J. Gans. All rights reserved including but not limited to mechanical, electronic, and photographic reproduction and distribution.


    Preface

    This is a work in progress and probably always will be. What you see now is the sixth version of these notes. But it is only the third edition, as I have adopted the convention that an edition changes only on a major change in the subject matter.

    Each previous version was used in class, and students' reactions as well as teachability were noted. This is one reason for the constant change. The other is my enduring, and surely endless, search for the perfect arrangement of material and the perfect explanation of ideas.

    The target audience is first-year graduate students in chemistry. In the actual event many are. Others were students in our computational biology program. Still others were in biochemistry. Some few came from medical schools, both ours and other major schools in the New York area. And, to their credit, each year some of our senior undergraduate chemistry majors have taken the course and have always done very well indeed.

    Of course, as a work in introductory statistical thermodynamics, a great deal of the content is predictable. The main idea is that statistical thermodynamics is no longer only a paper and pencil subject. Today most of the work in the field is done using computers and the theorist does calculations unimagined just 20 (if not 10) years ago. To that end it might have been best to introduce classical statistical mechanics first since almost all computations are based on classical models, even if parts of calculations such as potential energy surfaces often come from quantum calculations. However, the target audience finds classical mechanics even more foreign than statistical mechanics, so that was not done.

    Experience has shown that the mathematical background of many students is not up to the requirements of the typical course in statistical thermodynamics. This is true even including students whose background is in chemistry or physics. As a result many of those students find the material daunting. They are learning the needed mathematics at the same time they are learning the concepts of statistical thermodynamics, and the two together, especially at the start of the course, are a bit overwhelming.

    The first version of the course was clearly experimental. It was difficult and did not suit the needs of most of the students. These notes were at that time titled Computational Statistical Thermodynamics, which did little to reassure the audience.

    To make matters worse, the next version attempted to put more emphasis on models and computations. It included simple computer programs written to show how results could be obtained from those models.

    Of course the students had no common computer language. So a simplified dialect of C was invented for the description of the algorithms. Indeed at one point the author was even tempted to resurrect Algol, but luckily resisted the urge. The idea was that each student could translate material presented into a computer language he or she did know.

    Needless to say, in practice the idea failed miserably, having only enthusiasm and little practicality going for it.

    The third version tried teaching classical mechanics first. This led students, even those in chemistry and physics, into unfamiliar byways.

    The fourth version, retitled Statistical Thermodynamics in the grand tradition of such works, returned to the time-honored standard order of topics, with semi-quantum statistical thermodynamics discussed first, some chapters on classical statistical thermodynamics inserted after, and a last section on systems with interactions finishing off the course. Further, it was no longer a text, but in fact a reprise of the actual material taught in class.

    The fifth version had a changed title to more accurately reflect the contents and the order of presentation.1

    For this, the sixth version, once again the content has been extensively revised. To be sure, many remains of earlier versions can be found, but even those have been rewritten. Material has been somewhat reordered, presentation has been, one hopes, improved, many diagrams sadly lacking in earlier versions have been added, and some new material introduced.

    The aim at this point is to take students as quickly as possible to usable material. Earlier versions suffered from too much front-loaded theory. The theory is still there, but the initial chapters are now somewhat ad hoc, with justification and proofs coming later. One hopes that this will work.

    The author is aware that there is somewhat more material here than can be taught in one semester. While early chapters rely on previous chapters for basic ideas, later chapters are more or less independent. And in any case sections can be omitted in many places. But most importantly, many of the less important points in each chapter can be left for the student to read and assimilate on his or her own.

    Some of the material, particularly that presented in the Appendices, is designed for independent study. It is primarily mathematical and is present to aid students unfamiliar with such material. This is regarded as a feature of these notes.

    That said, the author is heavily indebted to those who have written before. A list of works consulted appears at the end of the book. Most of these are older works since it was felt that in many cases the older treatments of the more classical material were not only better, but contained fewer misconceptions and strange ideas than many current works.

    In addition the help of friends and family must also be acknowledged. Professor Mark Tuckerman was very helpful, and I also thank Mr. J. Benjamin Abrams, who suffered through the first version of this work, for a number of useful conversations.

    My wife Gail put up with this; one son-in-law, Prof. John Crocker, thought me passing strange; the other, Victor Mather, kindly acted as though I was perfectly normal; and my children, Susan and Abbey, more used to my hermit-like ways, forbore criticism. And my grandchildren, Josephine, Eva, and Hannah, to whom I dedicate this work, were mercifully unaware of grandpa's multiyear obsession.

    To all of these my thanks and apologies, but it isn't over yet: versions seven and eight may be needed to clean up all sorts of awkwardnesses, reduce the number of errors, and generally tidy things up.

    1The author's preference in a text would be to group the introductory theoretical material together at the start, thus allowing the interconnections among ensembles and their relationships to various dimensionless thermodynamic potentials to be stressed. But that is hard food for the new student.


    Contents

    Preface iii

    A Note on Notation xiii

    1. The Nature of Statistical Thermodynamics 1

    1.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1

    1.2 Thermodynamics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1

    1.3 The Classical Description of a System . . . . . . . . . . . . . . . . . . . . . . . . . . . 2

    1.4 The Quantum Mechanical Description of a System . . . . . . . . . . . . . . . . . . . . 4

    1.5 Boltzmann's Trajectory Method . . . . . . . . . . . . . . . . . . . . . . . . . . 5

    1.6 The Gibbs Ensemble Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6

    1.7 The Equivalence of Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7

    1.8 The Basic Program of Statistical Thermodynamics . . . . . . . . . . . . . . . . . . . 8

    2. The Microcanonical Ensemble 10

    2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10

    2.2 Occupation Numbers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10

    2.3 The Principle of Democratic Ignorance . . . . . . . . . . . . . . . . . . . . . . . . . . 11

    2.4 A Subproblem: The Multinomial Coefficient . . . . . . . . . . . . . . . . . . . . . . . 12

    2.5 The Maximization of W . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13

    2.5.1 Finding the Maximum . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13

    2.5.2 Lagrange's Method of Undetermined Multipliers . . . . . . . . . . . . . . . . . 14

    2.5.3 The Final Details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16

    2.6 Thermodynamics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18

    2.7 The Most Simple Spin System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21



    2.8 Appendix: The Gamma Function and Stirling's Approximation . . . . . . . . . . 23

    3. The Canonical Ensemble 26

    3.1 The Ensemble . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26

    3.2 The Most Probable Occupation Numbers . . . . . . . . . . . . . . . . . . . . . . . . . 28

    3.3 The Thermodynamics of the Canonical Ensemble . . . . . . . . . . . . . . . . . . . . 29

    3.3.1 Energy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30

    3.3.2 Pressure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30

    3.3.3 Chemical Potential . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31

    3.3.4 The Canonical Partition Function and the Identification of Beta . . . . . . . . 31

    3.4 The Entropy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33

    3.5 Degeneracy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34

    3.6 A Simple Spin System in an External Field . . . . . . . . . . . . . . . . . . . . . . . . 34

    4. Independent Subsystems in the Canonical Ensemble 38

    4.1 Fundamentals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38

    4.2 Independent Energies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38

    4.3 Single Particle Canonical Partition Functions . . . . . . . . . . . . . . . . . . . . . . . 39

    4.4 Thermodynamics of Canonical Systems of Independent Subsystems . . . . . . . . . . 41

    5. The Ideal Monatomic Gas 43

    5.1 The Partition Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43

    5.2 The Evaluation of the Partition Function . . . . . . . . . . . . . . . . . . . . . . . . . 44

    5.3 The Degeneracy Calculation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45

    5.4 The Thermodynamics of a Monatomic Ideal Gas . . . . . . . . . . . . . . . . . . . . . 47

    5.5 The Electronic Partition Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50

    5.6 Appendix: The Euler-Maclaurin Summation Formula . . . . . . . . . . . . . . . . . . 54

    6. Ideal Polyatomic Gases 57

    6.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57

    6.2 Vibration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60

    6.2.1 Vibration in Diatomic Molecules . . . . . . . . . . . . . . . . . . . . . . . . . . 60

    6.2.2 Vibration in Polyatomic Molecules . . . . . . . . . . . . . . . . . . . . . . . . . 63

    6.3 Rotation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64


    6.3.1 Rotation in Diatomic Molecules . . . . . . . . . . . . . . . . . . . . . . . . . 64

    6.3.2 Evaluation of the Rotational Partition Function . . . . . . . . . . . . . . . . . 65

    6.3.3 Rotation in Polyatomic Molecules . . . . . . . . . . . . . . . . . . . . . . . . . 69

    6.4 The Electronic Partition Function in Polyatomic Molecules . . . . . . . . . . . . . . . 71

    6.5 The Thermodynamics of Polyatomic Molecules . . . . . . . . . . . . . . . . . . . . . . 72

    6.5.1 Translation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72

    6.5.2 Vibration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73

    6.5.3 Rotation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73

    6.6 Appendix: Homonuclear Rotation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75

    6.6.1 Singlets and Triplets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75

    6.6.2 Rotational Symmetry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75

    7. The Grand Canonical Ensemble 78

    7.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78

    7.2 The Equations for the Ensemble . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78

    7.3 Solution of the Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79

    7.4 Thermodynamics of the Grand Canonical Ensemble . . . . . . . . . . . . . . . . . . . 81

    7.5 Entropy and Probability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82

    7.6 The Relationship to the Canonical Ensemble . . . . . . . . . . . . . . . . . . . . . . . 83

    7.7 A Direct Consequence of Having Independent Subsystems . . . . . . . . . . . . . . . 83

    7.8 A General Result for Independent Sites . . . . . . . . . . . . . . . . . . . . . . . . . . 85

    8. The Equivalence of Ensembles 89

    8.1 Expansion of the Grand Canonical Partition Function . . . . . . . . . . . . . . . . . . 89

    8.2 Generalized Laplace Transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89

    8.3 Transformations Among Ensembles . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91

    8.3.1 The Maximum Term Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91

    8.3.2 A Cautionary Example: The Isobaric-Isothermal Ensemble . . . . . . . . . . . 93

    8.4 Summary: The Relationship Among Ensembles . . . . . . . . . . . . . . . . . . . . . 95

    8.5 Appendix: Legendre and Massieu Transforms . . . . . . . . . . . . . . . . . . . . . . 97

    8.5.1 Euler's Theorem on Homogeneous Functions . . . . . . . . . . . . . . . . . . . 97

    8.5.2 The Legendre Transformations . . . . . . . . . . . . . . . . . . . . . . . . . . . 98

    8.5.3 The Massieu Transformations and Dimensionless Equations . . . . . . . . . . . 99


    9. Simple Quantum Statistics 103

    9.1 Quantum Statistics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103

    9.2 Simple Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105

    9.3 The Ideal Gas Limit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105

    10. The Ideal Crystal 108

    10.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108

    10.2 The Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109

    10.2.1 Common Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109

    10.2.2 The Einstein Theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110

    10.2.3 The Debye Theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112

    10.3 The One-Dimensional Crystal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114

    10.4 Appendix: Behavior of the Debye Function . . . . . . . . . . . . . . . . . . . . . . . 119

    10.5 Appendix: Differentiating Functions Defined by Integrals . . . . . . . . . . . . . . . 121

    11. Simple Lattice Statistics 123

    11.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123

    11.2 Langmuir Adsorption . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123

    11.3 The Grand Canonical Site Partition Function . . . . . . . . . . . . . . . . . . . . . . 125

    11.4 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126

    11.4.1 The Langmuir Adsorption Isotherm Again . . . . . . . . . . . . . . . . . . . . 126

    11.4.2 Independent Pairs of Sites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127

    11.4.3 Brunauer-Emmett-Teller Adsorption . . . . . . . . . . . . . . . . . . . . . . . 127

    11.5 Lattice Gas in One Dimension . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128

    12. Ideal Quantum Gases 134

    12.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134

    12.2 Weakly Degenerate Ideal Fermi-Dirac Gas . . . . . . . . . . . . . . . . . . . . . . . . 134

    12.3 Strongly Degenerate Ideal Fermi-Dirac Gas . . . . . . . . . . . . . . . . . . . . . . . 140

    12.3.1 Absolute Zero . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140

    12.3.2 The Merely Cold Ideal Fermi-Dirac Gas . . . . . . . . . . . . . . . . . . . . . 141

    12.4 The Weakly Degenerate Bose-Einstein Gas . . . . . . . . . . . . . . . . . . . . . . . 145

    12.5 The Strongly Degenerate Bose-Einstein Gas . . . . . . . . . . . . . . . . . . . . . . . 146


    12.6 The Photon Gas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149

    12.7 Appendix: Operations with Series . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152

    12.7.1 Powers of Series . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152

    12.7.2 Reversion of Series . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153

    12.8 Appendix: The Zeta Function and Generalizations . . . . . . . . . . . . . . . . . . . 155

    12.8.1 The Zeta Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155

    12.8.2 The Dirichlet Eta Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157

    12.8.3 The Polylogarithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158

    13. Classical Mechanics 160

    13.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160

    13.2 Definitions and Notations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160

    13.3 Energy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162

    13.4 Newton's Laws of Motion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163

    13.5 Making Mechanics More Simple . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166

    13.5.1 Coordinate Transformations . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167

    13.5.2 The Lagrangian . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170

    13.6 The Center of Mass . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173

    13.7 The Hamiltonian . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174

    13.7.1 Hamilton's Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175

    13.7.2 More on Legendre Transformations . . . . . . . . . . . . . . . . . . . . . . . . 176

    13.7.3 Phase Space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177

    13.7.4 Properties of the Hamiltonian . . . . . . . . . . . . . . . . . . . . . . . . . . . 178

    13.7.5 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179

    13.8 Appendix: Standard Coordinate Systems . . . . . . . . . . . . . . . . . . . . . . . . 182

    13.8.1 Polar Coordinates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 182

    13.8.2 Cylindrical Coordinates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 182

    13.8.3 Spherical Coordinates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183

    14. Classical Statistical Mechanics 185

    14.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185

    14.2 Liouville's Equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 186

    14.2.1 Incompressible flow in Phase Space . . . . . . . . . . . . . . . . . . . . . . . . 190


    14.2.2 Conservation of Extension in Phase . . . . . . . . . . . . . . . . . . . . . . . 190

    14.3 The Virial Theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191

    15. The Classical Microcanonical Ensemble 193

    15.1 The Specification of a System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 193

    15.2 Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 194

    15.3 The Microcanonical Ensemble . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195

    15.4 The Thermodynamics of the Microcanonical Ensemble . . . . . . . . . . . . . . . . . 200

    15.5 The Number of Microstates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 201

    15.6 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 202

    15.7 Appendix: Volume of an n-Dimensional Hypersphere . . . . . . . . . . . . . . 203

    16. The van der Waals Gas 205

    16.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 205

    16.2 The Approximate Derivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 205

    17. Real Gases 210

    17.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 210

    17.2 Virial Coefficients and Configuration Integrals . . . . . . . . . . . . . . . . . . . . . 211

    17.3 Evaluating the Integrals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 215

    17.4 The Third Virial Coefficient . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 218

    17.5 The Lennard-Jones Potential . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 219

    18. Wide versus Deep 224

    18.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 224

    18.2 The Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 224

    18.3 But What Does It All Mean? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 225

    History 227

    Fundamental and Derived Physical Constants 229

    List of Works Consulted 230


    List of Tables

    1.1 The Common Potentials . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2

    1.2 Total derivatives for some of the common thermodynamic potentials . . . . . . . 3

    5.1 Derivatives for the Evaluation of the Translational Partition Function . . . . . . . . 44

    5.2 for Various Monatomic Gases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48

    5.3 t for Various Monatomic Gases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49

    5.4 Monatomic Gas Electronic States . . . . . . . . . . . . . . . . . . . . . . . . . . 52

    5.5 The First Few Bernoulli Numbers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54

    6.1 Θv for Selected Diatomic Molecules . . . . . . . . . . . . . . . . . . . . . . . . 62

    6.2 Θr for Selected Diatomic Molecules . . . . . . . . . . . . . . . . . . . . . . . . 66

    6.3 Derivatives for the Evaluation of the Rotational Partition Function . . . . . . . . . . 66

    6.4 Names for Rigid Rotators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71

    1 Fundamental and Derived Physical Constants . . . . . . . . . . . . . . . . . . 229



    List of Figures

    2.1 Spin entropy per spin as a function of fraction of up spins . . . . . . . . . . . . 22

    2.2 The Gamma Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23

    3.1 Schematic Canonical Ensemble . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26

    3.2 Energy of a Noninteracting Spin System. . . . . . . . . . . . . . . . . . . . . . . . . . 36

    3.3 Heat Capacity of a Spin System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36

    6.1 Typical Potential Energy for a Diatomic Molecule . . . . . . . . . . . . . . . . . . . 59

    6.2 Typical Potential Energy for a Diatomic Molecule showing the difference between D0 and De . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61

    6.3 Graph of CV /k vs T /Θv . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63

    10.1 Graph of CV vs T /ΘE for an Einstein Crystal . . . . . . . . . . . . . . . . . . 111

    10.2 Graph of CV vs T /ΘD for a Debye Crystal . . . . . . . . . . . . . . . . . . . . 114

    11.1 Graph of θ vs. p for Langmuir Adsorption . . . . . . . . . . . . . . . . . . . . 125

    11.2 Isotherm for Pairs of Sites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127

    11.3 Plot of the BET Isotherm for r = 200 . . . . . . . . . . . . . . . . . . . . . . . 128

    11.4 Pressure-Area Isotherm for the One-Dimensional Lattice Gas . . . . . . . . . . . . . 133

    18.1 Model Potential Energy Surface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 224



    A Note on Notation

    Statistical thermodynamics is a subject that uses a large number of symbols. It essentially exhausts the Latin alphabet, even with symbols being denoted by capital or boldface letters. The Greek alphabet, capitals as well as lower case, is also pressed into service, but it is still not enough.

    The reason for this rich set of symbols is that statistical thermodynamics uses the notations of quantum mechanics, classical mechanics, and classical thermodynamics in addition to its own symbols.

    Further, what can be printed in a book such as this one (using bold face for instance) does not often translate to symbols that can be easily written on a blackboard or on a written page.

    Since these are notes for a course to be taught in an actual classroom, some of the choices made for symbols were made to enable decent presentation on a blackboard.2

    Certain horrible choices had to be made, even though standard symbols were always preferred if possible.

    Symbols for both probability and for pressure are needed. Both are usually denoted by the letter p. Since these quantities sometimes appear in the same equations, an arbitrary differentiation has had to be made. Pressure is thus denoted by p (lower case) and probabilities by P (upper case). On some occasions there is also need for a symbol for momentum. Luckily context usually provides enough hints to allow disambiguation, so p (lower case) is also used for momentum.

    In physics the potential energy is usually denoted by V or sometimes by U. In chemistry the volume of a system is invariably denoted by V while U is used for the (thermodynamic) internal energy of a system. Here I have chosen to denote the volume by V, the thermodynamic internal energy by U, and the potential energy by V. The quantum mechanical potential energy operator is denoted by U. On the other hand, to add confusion, the physical energy of a system in general will be denoted by E.

    In addition a symbol is needed for the mechanical kinetic energy of a system, often denoted by T, which is customarily used for the temperature of a system. Here T will be used for the temperature and 𝕋 for the kinetic energy.3

    Several problems arise in dealing with classical mechanics as well. In particular the momentum is almost universally denoted by p. I have kept that notation and trust that context will allow that p to be distinguished from the pressure p. Further, the classical Hamiltonian is invariably H, but that is the thermodynamic enthalpy. Here I've used H for both the Hamiltonian and the total energy of a system. Since only conservative systems are handled here, this is technically permissible, but a stylistic horror.

    Another problem is how to represent quantum mechanical operators. The choice that I have made was to indicate the Hamiltonian operator as well by H. The potential energy operator then was rendered as U, which matches its use as the mechanical potential energy. Context should make clear the intended meaning.

    Yet other instances of possible confusion come with the Helmholtz free energy, denoted by A in the chemical literature. This is not to be confused with the number of systems in an ensemble, 𝒜. Note that this latter 𝒜 is in a calligraphic face. The calligraphic face was chosen because it has been used for this purpose in other texts.

    2Though those time-sanctified teaching devices are now being replaced by white boards, which the author regards as educational abominations equivalent to writing on walls with paintbrushes.

    3Curiously, this font is known as Blackboard and, while fine in a book, is not so easy to draw on a blackboard.



    And a is used in two senses. In the first it is the number of members of an ensemble that are in the jth state. In the second it is the activity as used in chemistry. But here there should be no confusion possible since the difference in usage is quite easily understood from context.


    1. The Nature of Statistical Thermodynamics

    1.1 Introduction

    This course deals with statistical thermodynamics, which is a major branch of a more general field called statistical mechanics. Both of these deal with the calculation of the macroscopic properties of a system from the properties of the microscopic constituents of the system.

    Of the two, statistical mechanics is more general. It deals with both time-dependent properties and equilibrium properties. Statistical thermodynamics is more specialized. It deals with the computation of equilibrium thermodynamic properties.

    These notes deal only with the latter.

    In discussing statistical thermodynamics we will need material not only from equilibrium thermodynamics, but also from both classical and quantum mechanics. These are briefly reviewed below from the standpoint of what we will need here in this text.

    1.2 Thermodynamics

    Thermodynamics deals with macroscopic systems at equilibrium. It knows nothing about atoms or molecules. The thermodynamic state of a system is defined by specifying a relatively few variables.1

    Typical variables of thermodynamics are internal energy,2 temperature, pressure, and the like. The exact number of such variables needed to define the state of a system is given by the Gibbs Phase Rule and depends on the number of independent components of the system and the number of phases present in the system.

    Thermodynamics is a simple, self-contained system that depends on several axioms (called laws in thermodynamics) and a certain amount of experimental information, such as heat capacity data, that cannot itself be calculated from thermodynamics.

    There are five major common thermodynamic potentials. These are the entropy S, the internal energy U, the enthalpy H, the Gibbs free energy G, and the Helmholtz free energy A.3 Each of these has a set of natural variables such that when a thermodynamic potential is expressed in terms of its natural variables, all other thermodynamic properties can be derived from the potential. In addition, a thermodynamic potential points the way to equilibrium since its value is an extremum4 when the system is allowed to come to equilibrium with its natural variables held constant.

    As an example, if one knew the internal energy in terms of the entropy, volume, and number of particles present, then the temperature T is given by (∂U/∂S)_{V,N}, the pressure p by −(∂U/∂V)_{S,N}, and the chemical potential μ by (∂U/∂N)_{S,V}. Further, the energy reaches a minimum at constant S, V, and N.

    Another example: The natural variables for the entropy are U, V, and N. In an isolated system these are held constant5 and the entropy increases for any spontaneous change in such a system.

    1By defining the state of a system we mean that specifying these few variables is sufficient to fix the values of all other variables of the system.

    2The term internal energy means the energy contained inside the system, not including any energies derived from the position or motion of the system as a whole.

    3These are not the only possible thermodynamic potentials, but they are the ones that occur most often in actual use.

    4An extremum is a maximum or a minimum.

    Here is a listing of the five common potentials, their natural variables, and the direction in which

    they move for a spontaneous change: Each of these potentials possesses a total derivative. For

    S(U , V, N) 0G(T, p, N) 0A(T , V , N ) 0U(S, V, N) 0H(S,p,N) 0

    Table 1.1: The Common Potentials

    example the total derivative of the entropy in terms of its natural variables6 is:

    dS=

    S

    U

    V,N

    dU+

    S

    V

    U,N

    dV +

    S

    N

    U,V

    dN . (1.2.1)

    or

    dS= 1

    TdU+

    p

    TdV

    TdN (1.2.2)

    In statistical thermodynamics it is often very useful to put the total derivative in dimensionless terms.7 For the entropy this is:

        \frac{dS}{k} = \frac{1}{kT}\,dU + \frac{p}{kT}\,dV - \frac{\mu}{kT}\,dN ,    (1.2.3)

    where k is Boltzmann's constant and not R, the gas constant, since N is the number of particles and not the number of moles. It is also clear that in some sense the proper thermodynamic temperature variable is not T, but kT. This will often be seen in the following chapters.
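    The step from Equation (1.2.1) to Equation (1.2.2) uses the standard thermodynamic identifications of the partial derivatives of the entropy, obtained by rearranging dU = T dS − p dV + μ dN; they are collected here for reference:

        \left(\frac{\partial S}{\partial U}\right)_{V,N} = \frac{1}{T} , \qquad
        \left(\frac{\partial S}{\partial V}\right)_{U,N} = \frac{p}{T} , \qquad
        \left(\frac{\partial S}{\partial N}\right)_{U,V} = -\frac{\mu}{T} .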

    While there is a total derivative for any thermodynamic function for any set of independent variables, it is a simple fact that there can be only one total derivative of a given function for a given set of independent variables.8

    Table 1.2 below gives a few total derivatives of the common potentials in dimensionless form.

    1.3 The Classical Description of a System

    We will need to discuss some topics in classical mechanics later in this text. For now let us simply review how classical mechanics deals with the motion of a system of N particles.

    5Constraints are supplied by the walls of the system. Here they are adiabatic so that no heat can enter, rigid so that the volume cannot change, and impenetrable so that the number of particles cannot change.

    6All thermodynamic functions possess total derivatives in terms of whatever variables are chosen as independent. The natural variables cause the corresponding potential to be special.

    7How to do this will be extensively discussed in Section 8.5.

    8This, as it should, discounts changes of scale. One can always change units by multiplying through by a constant factor. But this does not result in a truly different equation.


        d\left(\frac{S}{k}\right) = \frac{1}{kT}\,dU + \frac{p}{kT}\,dV - \frac{\mu}{kT}\,dN

        d\left(\frac{G}{kT}\right) = U\,d\!\left(\frac{1}{kT}\right) + V\,d\!\left(\frac{p}{kT}\right) + \frac{\mu}{kT}\,dN

        d\left(\frac{A}{kT}\right) = U\,d\!\left(\frac{1}{kT}\right) - \frac{p}{kT}\,dV + \frac{\mu}{kT}\,dN

        \frac{1}{kT}\,dU = \frac{dS}{k} - \frac{p}{kT}\,dV + \frac{\mu}{kT}\,dN

    Table 1.2: Total derivatives for some of the common thermodynamic potentials in terms of their natural variables
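    Since minus signs are easy to lose in tables like this, it is worth checking one row; the following short derivation (added here as a check, using only Equation (1.2.3) and the definition of G) confirms the Gibbs row. Write G = U − TS + pV and expand:

        d\left(\frac{G}{kT}\right) = d\left(\frac{U}{kT}\right) - d\left(\frac{S}{k}\right) + d\left(\frac{pV}{kT}\right)

        = U\,d\!\left(\frac{1}{kT}\right) + \frac{dU}{kT}
          - \left[\frac{dU}{kT} + \frac{p}{kT}\,dV - \frac{\mu}{kT}\,dN\right]
          + V\,d\!\left(\frac{p}{kT}\right) + \frac{p}{kT}\,dV

        = U\,d\!\left(\frac{1}{kT}\right) + V\,d\!\left(\frac{p}{kT}\right) + \frac{\mu}{kT}\,dN ,

    in agreement with the table: the dU and dV terms cancel, leaving only the natural variables 1/kT, p/kT, and N.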

    In principle this is quite simple. Newton pointed out that F = ma, where F is the (vector) force on particles of mass m and a is the (vector) acceleration felt by those particles. Both F and a are vectors having 3N components in three-dimensional space, three for each particle.

    Now recognizing that a = d²x/dt², where x is the coordinate vector, we then have the second order differential equation:

        m\,\frac{d^2\mathbf{x}}{dt^2} = \mathbf{F} ,    (1.3.1)

    which is really a set of 3N differential equations, one for each component of each particle's position.

    If we assume that all the forces in our system are conservative9 then the force can be computed from the potential energy U of the system:10

        \mathbf{F} = -\frac{dU(\mathbf{x})}{d\mathbf{x}} ,    (1.3.2)

    where the notation is shorthand for 3N such derivatives, one for each spatial component of each particle.

    If the potential energy is then known, the force can be computed and Equation (1.3.1) can then be integrated twice to obtain the position of the particles as a function of time.11

    This integration will require two constants of integration per particle, normally taken as the initial velocities12 v and the initial positions x.

    We then require six initial conditions per particle, or 6N initial conditions overall. In principle this results in our knowing the positions and velocities of all the particles at any time t,13 as long as we know the potential energy function for the N particles.

    9Meaning not only that there is no friction or similar dissipative force, but more directly that energy is conserved in the system.

    10Consult H. Goldstein, Classical Mechanics, Addison-Wesley, 1953, or any equivalent book on classical mechanics for a proof of this.

    11Leaving aside the technical difficulties of integrating something of the order of 10²³ second order differential equations...

    12If the masses are constant, which they will be as long as we are dealing with atoms and molecules, then the initial momentum p = mv will supply the initial velocity.

    13I can't resist pointing out that the final integrated equations not only give the positions and velocities of all the particles for any time t after the initial time, they also allow the calculation of the positions and velocities of all the particles for any time t before the initial time. Thus the history of this system is known for all time.


    In principle, if we know all the positions and all the velocities of all the particles, we can compute the value of any mechanical variable at that time.
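    The notes contain no programs, but the procedure just described is easy to sketch numerically. The following fragment (an illustration added here, not part of the original text; the potential, masses, and step size are invented for the example) integrates Equation (1.3.1) with the standard velocity Verlet scheme and then evaluates a mechanical variable, the total energy, from the final positions and velocities:

        import numpy as np

        # Integrate m d2x/dt2 = F = -dU/dx for N independent particles in a
        # harmonic potential U = 0.5 * k * |x|^2 (a stand-in potential).
        N, m, k, dt = 10, 1.0, 1.0, 0.01

        rng = np.random.default_rng(0)
        x = rng.normal(size=(N, 3))    # initial positions: 3N numbers
        v = rng.normal(size=(N, 3))    # initial velocities: the other 3N

        def force(x):
            """F = -dU/dx for U = 0.5 * k * |x|^2."""
            return -k * x

        f = force(x)
        for _ in range(1000):
            # velocity Verlet: a standard second-order scheme for Newton's equations
            x = x + v * dt + 0.5 * (f / m) * dt**2
            f_new = force(x)
            v = v + 0.5 * ((f + f_new) / m) * dt
            f = f_new

        # Knowing all positions and velocities, any mechanical variable follows;
        # the total energy should be (very nearly) conserved along the trajectory.
        E = 0.5 * m * np.sum(v**2) + 0.5 * k * np.sum(x**2)
        print(f"total energy: {E:.6f}")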

    1.4 The Quantum Mechanical Description of a System

    To complement the discussion of classical mechanics, we give a quick examination of the equivalent quantum mechanical description of a system of N identical particles14 in a fixed volume at equilibrium.15 This description is given by the system's wave function Ψ. This state is usually a mixed state: a linear combination of all the possible pure quantum mechanical states which satisfy the external conditions imposed on the system. Each of these pure states has its own wave function:

        \psi_i(q_1, q_2, \ldots, q_N) .    (1.4.1)

    The wave function ψi is a function of the coordinates q of the N particles in the system and the specific quantum state i represented by that wave function.

    The overall system is then in the mixed state given by the linear combination of these pure states:

        \Psi = c_1 \psi_1 + c_2 \psi_2 + \cdots ,    (1.4.2)

    where the ci's are constants.

    In more common terminology, a mixed state is one that is degenerate. The pure states are non-degenerate.

    We expect the system to be in all these states simultaneously until we examine it.16 When we do examine the system we know that we will find it in state i with probability17

        P_i = \frac{c_i^2}{\sum_k c_k^2} .    (1.4.3)

    A typical large system has a very large number of possible pure quantum states ψi. Thus the mixed state corresponding to the collection of all the pure states with the same energy will clearly be degenerate.

    An example of this in even a very small system is the field-free hydrogen atom, where the degeneracy of the nth electronic level is n², where n is the radial quantum number. Only the ground state, n = 1, is non-degenerate.

    Indeed, the n = 2 level is quadruply degenerate, the possible pure states being (in traditional chemist notation) 2s, 2px, 2py, and 2pz. And with the c's in Equation (1.4.3) all equal, each of these states has probability 1/4.

    Associated with the system is a set of operators Aj such that for every state ψi of the system and every property of interest ⟨Aj⟩i there is an operator Aj satisfying

        A_j\, \psi_i(q_1, q_2, \ldots, q_N) = \langle A_j \rangle_i\, \psi_i(q_1, q_2, \ldots, q_N) ,    (1.4.4)

    where ⟨Aj⟩i is the expected value of the property corresponding to Aj.

    14This description can fairly easily be extended to systems containing groups of non-identical particles, but we won't introduce that additional complication here.

    15By equilibrium we mean that the wave function(s) of the system are not functions of the time.

    16There is no magic in this. Until we have an idea as to which state the system actually is in, we must treat all compatible states as possible.

    17We here run into one of a number of notational difficulties. We need symbols for the probability and for the pressure. Both are usually denoted by the letter p. I've made the arbitrary choice of denoting the pressure by p and probabilities by P. See the Note on Notation on page xiii.


    A typical example is the energy in state i, for which the operator is the Hamiltonian operator H:

        H = -\frac{\hbar^2}{2m} \sum_{k=1}^{N} \frac{\partial^2}{\partial q_k^2} + U(q_1, \ldots, q_N) ,    (1.4.5)

    where U(q1, ..., qN) is the potential energy operator of the system and the corresponding energies Ei satisfy

        H\, \psi_i(q_1, q_2, \ldots, q_N) = E_i\, \psi_i(q_1, q_2, \ldots, q_N) .    (1.4.6)

    Note that the term U(q1, ..., qN) contains the implicit constraints on the system, such as walls. For instance a free particle has U identically zero everywhere, while particles in a box of side L have the potential

        U = \begin{cases} 0 & 0 \le q_1, q_2, \ldots, q_N \le L \\ \infty & \text{otherwise} \end{cases}    (1.4.7)

    1.5 Boltzmann's Trajectory Method

    In the late 19th century Ludwig Boltzmann developed what can be called the trajectory method of determining macroscopic properties from microscopic properties. It is a classical method depending on Newton's Laws and using its language.

    Consider a system of N particles contained in a volume V. Each particle has 3 space coordinates, x, y, and z, and three momentum coordinates, px, py, and pz. At any moment in time those six variables have specific values. And those values can be represented as a point in a six-dimensional graph. Over time this point moves in the graph, tracing out a path called a trajectory. The trajectory is clearly continuous since at any instant the variables can only change infinitesimally from their values the instant before.

    Such a graph is called a μ-space graph.18 It is easy to imagine not one but all N particles of the system being plotted on the same graph. This produces a swarm of points moving in seemingly random motion.

    In fact the trajectories are bounded in the space coordinates since all must remain within the volume of the system. And they are bounded in the momentum coordinates since no particle can have a momentum larger in magnitude than \sqrt{2mE}, where E is the total initial energy of the particles.19

    The result is that we'd have N trajectories being traced out in a bounded region of phase space.

    A clearer picture can be gotten by switching to a slightly different graph. This graph has 6N coordinates,20 one for each component of position and momentum for each particle in the system. The entire system is now represented by a single point on this graph. This graph is called phase space or sometimes Γ-space.21

    The single point in phase space representing the entire system moves with time in phase space, tracing out a system trajectory. At each point in phase space we can compute the value of a physical property of the system from the known coordinates and momenta at that instant of time.

    To get an average value of any property, all we need do is add up the values of the property at each point along the trajectory and divide by the total number of points.

    18With μ standing, perhaps, for micro.

    19And indeed, a particle can only have this magnitude of momentum if all the other particles have zero momentum.

    20Making it very, very hard to even imagine.

    21Perhaps Γ stands for grand, as in large.


    Of course this can't be done because there are an uncountable infinity of points along the trajectory and the time spent at each point is zero. As a result for many years no progress was made in this computation.

    Ludwig Boltzmann got around this by a neat trick.

    Boltzmann conceptually divided phase space up into separate small but finite cells he called microstates.22 While finite in size, the microstates were to be of a size small enough so that the value of a property anywhere in one would be essentially constant.23 With the microstates finite in size, the system would spend a small but finite amount of time in each microstate.

    He could then compute averages using the following definition:

        \langle X \rangle = \lim_{t \to \infty} \frac{1}{t} \sum_i X_i\, t_i ,    (1.5.1)

    where Xi is the value of the property in cell i and ti is the amount of time that the system spent in cell i. The result is then ⟨X⟩, the average value of the property along the trajectory. Boltzmann then identified this average ⟨X⟩ with the thermodynamic property X. However, it turns out that it is very difficult to compute anything with Equation (1.5.1) using paper and pencil classical mechanical methods. The one major exception is that Boltzmann did, in fact, succeed in computing the properties of an ideal gas using what we'd today call the microcanonical ensemble.

    One can also use Equation (1.5.1) in a modified way to compute properties using a computer and numerical integration. Modifications are needed because in a system with any reasonable N, only a very small portion of a trajectory can be followed, and one ends up hoping that it is a representative portion.
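    To make Equation (1.5.1) concrete, here is a small numerical illustration (added here; the one-dimensional trajectory and the cell property are invented stand-ins). It coarse-grains a coordinate into finite cells, accumulates the time ti spent in each cell, and forms the time-weighted average:

        import numpy as np

        # Boltzmann's time average, Eq. (1.5.1): <X> = (1/t) sum_i X_i t_i.
        # Coarse-grain a coordinate into finite cells, tally the time spent
        # in each cell, and average a cell property weighted by those times.
        dt = 0.001
        t_grid = np.arange(0.0, 100.0, dt)
        q = np.cos(t_grid)                    # stand-in trajectory coordinate

        edges = np.linspace(-1.0, 1.0, 51)    # 50 coarse-graining cells
        cell = np.clip(np.digitize(q, edges) - 1, 0, 49)
        t_i = np.bincount(cell, minlength=50) * dt   # time spent in each cell

        centers = 0.5 * (edges[:-1] + edges[1:])
        X_i = centers**2                      # property value in each cell (q^2)

        X_avg = np.sum(X_i * t_i) / np.sum(t_i)      # Eq. (1.5.1), finite t
        print(f"time average of q^2: {X_avg:.4f} (long-time value: 0.5)")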

    Quantum mechanics does not change this much. As we've seen, the state of the system is a superposition of many degenerate wave functions.

    And quantum mechanics tells us that the system will be in state i with probability Pi given by

        P_i = \frac{c_i^2}{\sum_j c_j^2} ,    (1.5.2)

    where the ci are the coefficients in Equation (1.4.2) on page 4. Now we can let Pi play the role of ti/t in the Boltzmann formula, Equation (1.5.1), and calculate system properties from it.

    1.6 The Gibbs Ensemble Method

    Gibbs24 had an insight into this situation. Instead of following a single system along its trajectory in phase space, he proposed that one create (mentally, naturally) a collection of macroscopically identical systems all having the same fixed external values of N, V, and U, but which would of course be in different microscopic states.

    22There's that word again! Here it refers to a small volume in phase space and not to a quantum mechanical state. Boltzmann, of course, worked before quantum mechanics was invented.

    23This is called coarse graining. Coarse graining is a process in which we divide a continuous quantity into small regions or cells. It is quite interesting in that without it all sorts of mathematical difficulties would arise in Boltzmann's method.

    24Yes, that Gibbs.


    z-components of velocity exactly equal to zero, the molecules will then simply bounce back and forth between the walls perpendicular to the x-axis and never hit the other four sides of the box, even though there is no energetic reason why they should do this. It is all a matter of the initial conditions.

    On the other hand, a Gibbsian approach will have molecules in all possible initial states, most of which will not have zero y- and z-velocity components. The Boltzmann approach will not.

    Of course this example is a bit contrived, but the point is clear. The initial conditions on a system may very well restrict it to move in only a portion of the available phase space.

    All we need to do, of course, is to ensure that any system we want to study does not have such restrictions. But sadly, we don't know how to check for this. Indeed, mathematical theory says that we cannot in general tell.28

    In practice however, the two methods, Boltzmann and Gibbs, give identical results except in situations (such as the example given above) where the number of attainable sections of phase space are notably fewer than expected because of strange initial conditions. Such situations rarely (the author is tempted to say never) arise in practice.

    1.8 The Basic Program of Statistical Thermodynamics

    Since Dalton's time at the start of the 19th century we have understood that the basic building blocks of chemistry are atoms.29 For a century after Dalton it was believed that atoms exactly obeyed Newton's Laws in their behavior. And so it followed that thermodynamics ought, in some way, to be deducible from Newton's Laws.

    It was also clear that the 10²³ or so quantities needed to describe a typical macroscopic system using Newton's laws were somehow reduced to only three, four, or five properties needed to describe a system in equilibrium thermodynamics.

    The development of quantum mechanics did not change this view. Again, the multitude of individual pieces of information that would be obtained if we could solve the quantum mechanical equations of an equilibrium system of 10²³ particles must somehow boil down to the few properties of equilibrium thermodynamics.

    Similarly it was clear that no individual particle in a microscopic system of identical particles is special in the sense that it alone determines the macroscopic properties of that system. We know from experience that adding a particle or two to such a system will not sensibly change it. We then must conclude that all the particles present contribute in some manner to an averaging process that results in the thermodynamic properties of the system.

    The fundamental problem of statistical mechanics is thus to discover how to do the averaging over the manifold properties of the particles of a system to obtain the thermodynamic properties from them.

    The basic program in statistical thermodynamics can be summarized as follows. We attempt to compute some property X of a macroscopic system. We start with the independent variables of the system (perhaps N, V, and U as in the example above) and then determine what the microstates of the system actually are.

    28The famous ergodic hypothesis deals with this. It claims, in essence, that the two approaches are identical. Sadly it was only proven in a weakened form. And it is false in general. Indeed, our example has shown that.

    29In fact, mass was the first physical quantity ever quantized. We can thank Dalton for that.


    We then associate with each microstate i a value Xi of the property X that we wish to compute. Then we compute the average value of X by:

        \langle X \rangle = \sum_i X_i P_i ,    (1.8.1)

    where Pi, the probability of finding a particle in state i, is calculated either from classical mechanics or quantum mechanics, depending on how we have approached the problem.

    We now assert that the average ⟨X⟩ so computed is, in fact, the macroscopic value of X we'd observe in a system with the given values of N, V, and E.

    Of course, this is checked by experiment. And, it works!
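    As a toy version of this program (an added illustration, not the author's), one can enumerate the microstates of three independent two-level units, attach a property value Xi (here the total energy) to each microstate, weight each by a probability Pi (taken uniform, anticipating the equal-likelihood rule of Chapter 2), and apply Equation (1.8.1):

        from itertools import product
        import numpy as np

        # Basic program, Eq. (1.8.1): <X> = sum_i X_i P_i over microstates.
        eps = 1.0                                    # energy of the upper level
        microstates = list(product([0, 1], repeat=3))          # 8 microstates
        X = np.array([eps * sum(s) for s in microstates])      # X_i = total energy
        P = np.full(len(microstates), 1.0 / len(microstates))  # uniform P_i

        X_avg = float(np.sum(X * P))                 # Eq. (1.8.1)
        print(f"<X> = {X_avg:.3f}")                  # 1.5*eps for equal weights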


    2. The Microcanonical Ensemble

    2.1 Introduction

    As remarked in Chapter 1, the preferred method for doing theoretical calculations is the Gibbsian ensemble method. We illustrate this here with a simple but not fantastically useful example.1 We consider a single-phase, single-component system with a fixed number of particles N, a fixed volume V, and a fixed energy E. It goes without saying that these values are somewhat restricted. After all, neither N nor V can be negative, and their ratio, the density, cannot be so great as to leave the range of chemical processes and enter the realm of nuclear physics.

    The system in the volume V has energy E, which is highly degenerate. The system is in one of the degenerate energy levels of the system.

    We will apply the Gibbsian approach to calculate the probability that the system is in a particular degenerate state.

    To apply the Gibbsian approach we mentally construct a huge number, 𝒜, of macroscopic replicas of our system. Each will have the same identical values of N, V, and E, and each will be in one or another of the degenerate energy levels.

    We then mentally freeze the systems in whatever energy level they happen to be in at that moment. We now have a static collection of 𝒜 systems, each in a definite energy level. The usual terminology for a definite system energy level is microstate. It corresponds to the system being in a particular non-degenerate quantum state ψi.

    This collection of systems with the independent variables N, V, and E is called a microcanonical ensemble.

    2.2 Occupation Numbers

    We choose the number 𝒜 of ensemble members to satisfy the condition:

        \mathcal{A} \gg \Omega ,    (2.2.1)

    where Ω is the degeneracy of the quantum mechanical degenerate state corresponding to the imposed external conditions.

    This is done to ensure that there is a far larger number of systems than degenerate levels. This way we can expect that usually there will be at least several systems in any given microstate. Indeed, what we hope is that (1) all possible microstates of the system are sampled by this process and (2) that there will be many members of the ensemble in each of those possible microstates.

We let a_j be the number of systems in the ensemble that are in microstate j. These a's are called

1So why do we bother? Because we will introduce a mathematical technique here that will be very useful to us in later work. And it is easier to first see it operate in a situation that is physically simple.


the occupation numbers of the microstates of the ensemble. It is immediately clear that:

\sum_{j=1}^{\Omega} a_j = A .    (2.2.2)

If we knew the a_j's, we could determine the macroscopic properties X of the system because we assume, as stated before (see Section 1.6 on page 6), that we can associate a property X_j with each microstate j. And, most importantly, it is clear that the probability of finding any given ensemble member in state j is just a_j/A. So given the a_j's we can work out the thermodynamics.

However, we don't yet have enough information to find the a_j's. There are many different sets of a_j's that will satisfy Equation (2.2.2). For instance, if there were three systems in an ensemble that had two possible microstates, then A = 3 and Ω = 2. We can describe this toy ensemble by three digits, the first being the microstate of the first system, the second being the microstate of the second, etc.

    The possible arrangements of the systems among the microstates are then:

    111, 112, 121, 122, 211, 212, 221, 222

All of these have a_1 + a_2 = 3.

Note that there is only one way to have a_1 = 3 while there are three ways to have a_1 = 2. So even though we've placed the systems evenly among the possible microstates, the various sets of occupation numbers a_j are not equally likely.
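This tiny enumeration is easy to carry out by machine. The following Python sketch reproduces the eight arrangements and their tallies; the state labels 1 and 2 are just the labels of our toy example.

```python
from itertools import product
from collections import Counter

# Enumerate every arrangement of A = 3 systems among Omega = 2 microstates
# and tally the occupation-number set (a1, a2) each arrangement produces.
A, Omega = 3, 2
counts = Counter()
for arrangement in product(range(1, Omega + 1), repeat=A):
    a = tuple(arrangement.count(state) for state in range(1, Omega + 1))
    counts[a] += 1

print(counts)
# Counter({(2, 1): 3, (1, 2): 3, (3, 0): 1, (0, 3): 1})
# Eight equally likely arrangements, but the set {2,1} occurs three times
# while {3,0} occurs only once -- the sets are not equally likely.
```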

Of course, we've assumed that all states are equally likely to be occupied. But is that even true?

    2.3 The Principle of Democratic Ignorance

    The problem before us is:

    Are all states equally likely to be occupied?

We do not know which, if any, of the degenerate microstates the system prefers to be in. Worse, we have no obvious way of judging if any group of microstates is more or less likely to be occupied than any other.

    So we are at a loss as to how to proceed.

What we need is some sort of rule that will help us. And there is such a general rule in science that covers situations like this. We'll call it the rule of democratic ignorance. It is essentially this:

When there is a situation with a number of possibilities, and when there is absolutely no reason to prefer one possibility over any other, then all possibilities have to be considered to be equally likely.

    Of course this rule can not be proven. But it is instructive to consider possible alternatives:

1. The Rule of Primogeniture: Whatever state is numbered 1 is the most likely; number 2 is second most likely, and so on.


2. The Rule of Authority: The most probable state is the one the author says is the most probable.

    3. The Rule of Mystery: There is a most probable state but we can never know what it is.

    The first alternative is clearly silly. Numbering the states is up to the person who does the numbering.

It is a pure accident which state is numbered first and cannot reflect any physical reality. Thus we can safely ignore Rule 1.

Alternative 2 is equally silly. Simple moral authority only substitutes someone else's ignorance for yours.

Alternative 3 is an abdication of responsibility. It is throwing up our hands and saying that we cannot solve this problem and should go on to do something else as a career, like perhaps being a movie star.2

It is doubtful that anyone can suggest a rule other than the Rule of Democratic Ignorance. Certainly, nobody yet has been able to do that. So in the absence of anything better, we are left with the Rule of Democratic Ignorance.

Applying the Rule of Democratic Ignorance to the situation at hand, we must conclude that: Any particular system has an equal chance of being in any of the Ω different microstates. This is equivalent to saying that all Ω^A different ways of arranging the A systems among the Ω states are equally likely. This does not mean that each of the sets of a's is equally likely. For instance, in the example with three systems and two states above there are eight equally likely arrangements; three of them lead to the set {2,1} but only one leads to the set {3,0}.

It turns out that of all possible sets of a's, one is far more likely than any other. That can't be guessed beforehand, but it is an interesting consequence of the fact that A is huge.

Accepting this for now (we will retroactively prove it later), our question What values can the a's be expected to have? can now be changed to:

What set of a's has the greatest number of ways of occurring?

    2.4 A Subproblem: The Multinomial Coefficient

Listing the number of ways in which three systems can be arranged so that, for example, there are two of them in the first state and one in the second could be simplified if we had a way to compute how many such ways there are.

In particular we are going to be very interested in knowing how many ways there are to get any particular given set of a's. Put more formally:

How many ways are there of arranging A systems so that there are a_1 in the first quantum state, a_2 in the second, etc.?

    Or, in slightly different language:

    2Actually, one might like that...


How many ways are there of arranging A objects into piles such that there are a_1 in the first pile, a_2 in the second pile, etc.?

    If we let the number of ways of making such piles be W, then it can be shown that

W(a_1, a_2, \ldots, a_\Omega) = \frac{A!}{a_1! \, a_2! \cdots a_\Omega!} = \frac{A!}{\prod_j a_j!} .    (2.4.1)

Here's an example:

    Example 2.1:

How many arrangements are there of three systems that result in the occupation numbers a_1 = 2 and a_2 = 1?

W(2, 1) = \frac{3!}{2! \, 1!} = \frac{6}{2} = 3 .    (2.4.2)

The quantity W is known as the multinomial coefficient. It is a generalization of the binomial coefficient to multinomials. It arises naturally in algebra in the expansion of multinomials:

(x_1 + x_2 + \cdots + x_\Omega)^A = \sum_{\{a_j\}} W(a_1, a_2, \ldots, a_\Omega) \, x_1^{a_1} x_2^{a_2} \cdots x_\Omega^{a_\Omega} ,    (2.4.3)

where the sum runs over all sets of non-negative integers a_1, a_2, \ldots, a_\Omega with a_1 + a_2 + \cdots + a_\Omega = A.

This gives us a quick and useful side result. If all of the x's are set to 1 we get:

\Omega^A = \sum_{\{a_j\}} W(a_1, a_2, \ldots, a_\Omega) .    (2.4.4)

Thus the total number of distinct ways of arranging A objects into Ω piles is Ω^A. This is a very very very large number.

If, for instance, Ω were only 100 and A were only 1000 (both much smaller than the numbers we'd expect to run into), there would be 10^2000 different ways to arrange the 1000 objects among the 100 piles.
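Both Example 2.1 and the side result (2.4.4) are easy to check by machine. Here is a short Python sketch; the particular values A = 4 and Ω = 3 in the check are arbitrary illustrations.

```python
from math import factorial
from itertools import product

def W(*a):
    """Multinomial coefficient A!/(a1! a2! ...) of Equation (2.4.1)."""
    result = factorial(sum(a))
    for aj in a:
        result //= factorial(aj)       # exact integer division at every step
    return result

print(W(2, 1))                         # 3, reproducing Example 2.1

# Check the side result (2.4.4) for a toy case: summing W over every set of
# occupation numbers with a1 + ... + aOmega = A gives exactly Omega**A.
A, Omega = 4, 3
total = sum(W(*a) for a in product(range(A + 1), repeat=Omega) if sum(a) == A)
print(total, Omega**A)                 # 81 81
```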

    2.5 The Maximization of W

We want to find the set of a's that has the greatest number of ways of occurring. We can find that by finding the set of a's that maximizes W.3

    2.5.1 Finding the Maximum

If W depended only on one variable, say a, we would simply form dW/da, set the result to zero, and solve to find the value of a that maximizes W.

3In truth this is not exactly what we are looking for. We should be looking for the most probable set of a's. We assume that this is the set with the greatest number of ways of occurring. If W is a symmetric function with a single maximum, the maximum and the most probable will be the same.


But here we have Ω different a's.

The a's and W define an Ω + 1 dimensional space. The a's are the independent variables and W is the dependent variable.

At a local maximum, the surface defining W in this space is flat. That is, any small variation in the a's will cause no change in W.

Now, if we make small changes da in the a's the corresponding change in W is:

dW(a_1, a_2, \ldots, a_\Omega) = \left(\frac{\partial W}{\partial a_1}\right) da_1 + \left(\frac{\partial W}{\partial a_2}\right) da_2 + \cdots + \left(\frac{\partial W}{\partial a_\Omega}\right) da_\Omega ,    (2.5.1)

and so, it seems that all we have to do is to set dW = 0 and solve for the a's. This will maximize W.

    Recall that:

W(a_1, a_2, \ldots, a_\Omega) = \frac{A!}{\prod_j a_j!} .    (2.5.2)

Because of the product in the denominator, differentiating this is very messy.4 In order to neaten up the math, we introduce a trick worth remembering: we take logarithms:

\ln W = \ln A! - \sum_{j=1}^{\Omega} \ln a_j! ,    (2.5.3)

    and maximize ln W instead:

d \ln W = \left(\frac{\partial \ln W}{\partial a_1}\right) da_1 + \left(\frac{\partial \ln W}{\partial a_2}\right) da_2 + \cdots + \left(\frac{\partial \ln W}{\partial a_\Omega}\right) da_\Omega .    (2.5.4)

This works because of a simple mathematical fact: in general, if f(x) has a maximum at x = x*, then ln f(x) also has a maximum at x = x*.

    To see this note that the maximum in ln f(x) occurs at

\frac{d \ln f(x)}{dx} = \frac{1}{f(x)} \frac{df(x)}{dx} = 0 ,    (2.5.5)

and so, as long as f(x) is never zero, we have our desired result. Here, of course, f(x) is W, and W, by definition, is never less than 1 and so can't ever be zero.

2.5.2 Lagrange's Method of Undetermined Multipliers

What we have after setting Equation (2.5.4) to zero is:

d \ln W = \left(\frac{\partial \ln W}{\partial a_1}\right) da_1 + \left(\frac{\partial \ln W}{\partial a_2}\right) da_2 + \cdots + \left(\frac{\partial \ln W}{\partial a_\Omega}\right) da_\Omega = 0    (2.5.6)

We would like to argue as follows: The a's are independent quantities. So their small variations, da, are also independent. This means that we can pick whatever value we wish for them and the equation above must still be true.

But this can be true if and only if the coefficients of the da's are identically zero. Why? Imagine we set each of the da's to be 1 × 10^−100, which is certainly small enough to be a da, and by accident

4Not to mention the factorials, which can't simply be differentiated at all!


the terms all added up to be zero. Now we change several of the da's to be 2 × 10^−100, still small enough to be a da, and add them up. Can we still expect the sum to be zero? What if we chose another set of values for the da's?

The only way the equation can always be true is if each of the coefficients of the da's is separately equal to zero.

Then we could simply set

\frac{\partial \ln W}{\partial a_i} = 0 \quad \text{for all } i ,    (2.5.7)

    and solve the separate equations trivially.

But we can not do that!

The a's are not independent! The a_i must satisfy

\sum_{j=1}^{\Omega} a_j = A .    (2.5.8)

So if Ω − 1 of the a's are known, we can find the remaining one by using Equation (2.5.8).

This is a constraint on the solution. Lagrange dealt with the problem of finding constrained maxima and came up with an interesting answer.5 First we note that because of the constraint Equation (2.5.8) we also have:

\sum_{j=1}^{\Omega} da_j = 0 ,    (2.5.9)

because A is a constant.

Pay attention now. Since this equation is identically zero, it can be added to the right-hand side of any other equation without changing that equation in any way. In fact, any multiple of this equation can be added to any side6 of any other equation without changing its value. So we will, following

Lagrange, add −λ da_j to each term of

\left(\frac{\partial \ln W}{\partial a_1}\right) da_1 + \left(\frac{\partial \ln W}{\partial a_2}\right) da_2 + \cdots + \left(\frac{\partial \ln W}{\partial a_\Omega}\right) da_\Omega = \sum_{j=1}^{\Omega} \left(\frac{\partial \ln W}{\partial a_j}\right) da_j = 0    ((2.5.6))

to get:

\sum_{j=1}^{\Omega} \left(\frac{\partial \ln W}{\partial a_j} - \lambda\right) da_j = 0 .    (2.5.10)

Of course λ is a constant that can be set to any value we wish (even zero) without changing the total value of Equation (2.5.10). So we can choose it to be:

\lambda = \left(\frac{\partial \ln W}{\partial a_1}\right) ,    (2.5.11)

which fixes λ to be whatever value that derivative has.

5To a non-mathematician it is more than interesting, it is quite amazing. Back when I first saw it I had a momentary flash of how amazing things can be done in mathematics if one opens one's mind.

    6Or subtracted for that matter!


Here's the clever part.7 That choice of λ exactly cancels the first term of Equation (2.5.10), so now that equation reads:

\sum_{j=2}^{\Omega} \left(\frac{\partial \ln W}{\partial a_j} - \lambda\right) da_j = 0 ,    (2.5.12)

with a_1 no longer present in the equation. Now there are only Ω − 1 a's left. And since there are Ω − 1 independent variables, all the remaining a_j's can be taken as independent.

Now we can correctly argue that each of the remaining da_j's is independent and that the only way Equation (2.5.12) can be true is if each of the coefficients of the da_j's is itself identically zero. Thus we conclude that

\frac{\partial \ln W}{\partial a_j} = \lambda \qquad j = 2, \ldots, \Omega ,    (2.5.13)

so that each of the derivatives is not only constant, but they are all the same constant, λ!

Further, we notice that Equation (2.5.11) fits the scheme of Equation (2.5.13) as well, so we can regard that equation as holding for any j from 1 to Ω.

All that remains in order to solve our problem of finding what set of a's gives the largest value to W is to calculate ∂ln W/∂a_j. To do this we look at the expression for ln W:

\ln W = \ln A! - \sum_{j=1}^{\Omega} \ln a_j! ,    ((2.5.3))

and see that all we have to do is differentiate it with respect to a_j and evaluate the resulting ∂ln W/∂a_j. For j = 1 that will give us λ and thus the value of all the other a's.

    Doing all this is the subject of the next section.

    2.5.3 The Final Details

We already know that the values of the a_j that maximize W are those which satisfy

\frac{\partial \ln W}{\partial a_j} = \lambda \qquad j = 1, \ldots, \Omega ,    ((2.5.13))

where W is given by:

\ln W = \ln A! - \sum_{j=1}^{\Omega} \ln a_j! .    ((2.5.3))

We see that we will have to differentiate a factorial. There is no simple way to do this. The factorial function we know is based upon integers and so is not a continuous function and simply cannot be differentiated.

However, there is a generalization of the factorial function known as the gamma function which is continuous. This function is discussed in the Appendix to this chapter on page 23.

The gamma function gives rise to a well-known approximation to the factorial function called Stirling's Approximation.8

7Lagrange was a clever fellow!
8Stirling's Approximation is discussed in detail in Section 2.8.


In its simplest form Stirling's Approximation is:

\ln n! = n \ln n - n .    (2.5.14)

    Substituting that transforms Equation (2.5.3) on page 14 to:

\ln W = \ln A! - \sum_{j=1}^{\Omega} \left[ a_j \ln a_j - a_j \right] .    (2.5.15)

Differentiating gives:

\frac{\partial \ln W}{\partial a_j} = -\left[ \ln a_j + 1 - 1 \right] = -\ln a_j .    (2.5.16)

Thus

\ln a_j^* = -\lambda ,    (2.5.17)

or

a_j^* = e^{-\lambda} \qquad j = 1, \ldots, \Omega ,    (2.5.18)

where an asterisk is used on a_j to indicate that this is the value of a_j that maximizes W and not just any old a_j.

Since the a_j^* must sum to A, it is clear that9

a_j^* = \frac{A}{\Omega} .    (2.5.19)
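Equation (2.5.19) can also be checked numerically. The sketch below, which assumes numpy and scipy are available, hands the constrained maximization to a general-purpose optimizer (scipy's SLSQP) rather than to Lagrange; the values of A and Ω are arbitrary illustrations.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

# Maximize ln W = ln A! - sum_j ln a_j!  subject to  sum_j a_j = A,
# treating the a_j as continuous via ln a! = gammaln(a + 1).
# The values of A and Omega are arbitrary illustrations.
A, Omega = 1000.0, 5

def neg_ln_W(a):
    # minimize the negative of ln W, i.e. maximize ln W
    return -(gammaln(A + 1) - np.sum(gammaln(a + 1)))

constraint = {"type": "eq", "fun": lambda a: np.sum(a) - A}
a0 = np.random.default_rng(0).uniform(100.0, 300.0, Omega)
a0 *= A / a0.sum()                     # start from a feasible (unequal) guess

result = minimize(neg_ln_W, a0, method="SLSQP",
                  bounds=[(1.0, None)] * Omega, constraints=[constraint])
print(result.x)   # every component close to A/Omega = 200, as Eq. (2.5.19) says
```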

We can now calculate W_max, given by:

\ln W_{\max} = \ln A! - \sum_{j=1}^{\Omega} \ln a_j^*! ,    (2.5.20)

    which is:

\ln W_{\max} = \ln A! - \sum_{j=1}^{\Omega} \ln \left(\frac{A}{\Omega}\right)! .    (2.5.21)

There are Ω equal terms in the sum on the right, so this can be written as:

\ln W_{\max} = A \ln A - A - \Omega \left[ \frac{A}{\Omega} \ln \frac{A}{\Omega} - \frac{A}{\Omega} \right] ,    (2.5.22)

    which can easily be shown to reduce to:

\ln W_{\max} = A \ln \Omega ,    (2.5.23)

    or, what is the same thing:

W_{\max} = \Omega^A .    (2.5.24)

Those of you with good memories will recall that the total number of ways of arranging our A ensemble members among the Ω microstates is precisely Ω^A.

What can this mean? Can the most probable set of a's account for all the different ways of arranging the ensemble members?

9Actually, one could have guessed this result once it was realized that all the a_j's had to have the same value. But the Lagrange multiplier technique is so important in statistical thermodynamics that it was thought appropriate to go through the entire procedure.
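A quick numerical experiment makes the puzzle concrete. The Python sketch below (standard library only; the choice Ω = 2 keeps the factorials simple and makes A/Ω integral) compares the exact W_max with Ω^A:

```python
from math import factorial, log

# Exact counts for the two-state case (Omega = 2, A even so A/Omega is integral).
Omega = 2
for A in (10, 100, 1000):
    a_star = A // Omega
    W_max = factorial(A) // (factorial(a_star) ** Omega)  # Eq. (2.4.1) at a_j = A/Omega
    print(A, W_max / Omega**A, log(W_max) / (A * log(Omega)))

# W_max itself is a small fraction of Omega**A, yet ln(W_max) / (A ln Omega)
# creeps toward 1 as A grows: on the logarithmic scale on which Stirling's
# approximation works, the most probable set dominates everything else.
```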


longer equal. Many of those a's pertain to systems with molecules in the top half of their volumes, and those are now zero. So instead of having a situation where all the a's are equal, we have many of them zero.

We've already shown that this set will not maximize W, because we know what values the a's must have to do that.

If we let our ensemble of systems evolve with time, we know that systems will move from microstate to microstate. While the total energy of the system will remain constant, the systems will redistribute themselves over the states until they reach the most probable distribution.11

In other words, W increases as the system goes to equilibrium. In fact, once at equilibrium we expect almost never to see anything other than the most probable distribution.12

The result of this thought experiment13 is the realization that W in a non-equilibrium ensemble will increase to a maximum as the system goes to equilibrium. This behavior is just what we'd expect from the entropy.

Since there is only one property indicating the direction of equilibrium for a given set of independent variables, we conclude that W, and hence Ω, must be connected to the entropy.

    Let us now look at the extensive aspect of the entropy.

Put another way, being extensive means that if we take a system and duplicate it and then put the two duplicates together to form a single new system, the entropy of the new system is exactly the sum of the entropies of the two original systems.14

Let us look at two systems, A and B. They have entropies S_A and S_B and a total entropy of

S = S_A + S_B .    (2.6.2)

The degeneracies of the systems are Ω_A and Ω_B. The total degeneracy is

\Omega = \Omega_A \Omega_B .    (2.6.3)

Why is this true? Because if system A is in microstate j, that doesn't affect the microstate of system B at all. So for each microstate of A there are Ω_B microstates of B. Hence Equation (2.6.3).

The Ω's don't behave like entropies at all. Entropies add; Ω's multiply. But logarithms of Ω's add!

We can try to see if

S(N, V, E) = k \ln \Omega(N, V, E)

works. It certainly works for the maximum property, since S is a maximum exactly where Ω or W is a maximum. And we've just seen that ln Ω has the proper additivity property.

But we quickly realize that there are a couple of minor problems. We first need to establish a zero point for the entropy. We know that S = 0 occurs when there is only one allowable state for the entire system. And surely enough, that means that Ω for such a system must be 1, and ln 1 = 0. So we are all right there.

11Let me stress that this is NOT because the molecules know which distribution is the most probable. Indeed, molecules are incredibly stupid. They end up in the most probable distribution because in fact most of the possible arrangements of the occupation numbers of the microstates are in the most probable distribution!

12Be careful here. We will discuss this more fully in a later chapter. Suffice it to say that if the number of particles in the system is small, the chance of finding the system in a state other than the most probable increases greatly. And even if the number is very large, there is a very very small, but non-zero chance that such a state will occur.

13Aren't thought experiments neat? Easy to set up and easy to clean up afterwards. The doing of them is still sometimes hard, though.

    14Entropy is not the only extensive thermodynamic function. All the direction-of-equilibrium functions are extensive.


And there is a problem with the scale of the entropy. We measure entropies today in Joules per Kelvin. Back not too long ago we measured entropies in calories per Kelvin. All that changed when we changed units was the scale. To set the scale of the entropy we use an appropriate factor k, which also takes care of the units of entropy since logarithms don't have units.

So, following Boltzmann, who first discovered this relationship, we assume the following rather arbitrary, but properly behaved definition of entropy:

S(N, V, E) = k \ln \Omega(N, V, E) ,    (2.6.4)

where k is a constant, known appropriately today as Boltzmann's constant.15

A quick digression. We will study several more ensembles in succeeding chapters. In each of these it will be shown that the entropy is given by the formula:

S = -k \sum_i P_i \ln P_i ,    (2.6.5)

where the sum is taken over all allowed states and P_i is the probability that a randomly chosen system in the ensemble will be found in state i. Here P_i = 1/Ω. Using this in Equation (2.6.5) gives:

S = -k \sum_{i=1}^{\Omega} P_i \ln P_i = -k \sum_{i=1}^{\Omega} \frac{1}{\Omega} \ln \frac{1}{\Omega} = -k \ln \frac{1}{\Omega} = k \ln \Omega ,    (2.6.6)

    which shows that this formula applies to the microcanonical ensemble as well.
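This collapse is easy to verify numerically. Here is a minimal Python sketch, with k set to 1 and an arbitrary illustrative value of Ω:

```python
import math

# Check Eq. (2.6.6): with the uniform probabilities P_i = 1/Omega of the
# microcanonical ensemble, -k sum_i P_i ln P_i collapses to k ln Omega.
# k is set to 1 here; any other value merely rescales both sides.
k = 1.0
Omega = 10_000
P = [1.0 / Omega] * Omega

S_gibbs = -k * sum(p * math.log(p) for p in P)
S_boltzmann = k * math.log(Omega)
print(S_gibbs, S_boltzmann)            # equal to within floating-point roundoff
```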

When a thermodynamic quantity is expressed in terms of its natural variables, it not only points in the direction of equilibrium, but it has another property as well. If one has an explicit formula for the property in terms of its natural variables, then all other thermodynamic properties can be obtained from that formula.

In classical thermodynamics we know that if we have a formula for the entropy in terms of N, V, and E, we can calculate all other thermodynamic functions from it. We can see how to do that from the total differential:

dS = \left(\frac{\partial S}{\partial E}\right)_{V,N} dE + \left(\frac{\partial S}{\partial V}\right)_{E,N} dV + \left(\frac{\partial S}{\partial N}\right)_{E,V} dN ,    (2.6.7)

which, if we evaluate the derivatives, gives us:

dS = \frac{1}{T} dE + \frac{p}{T} dV - \frac{\mu}{T} dN ,    (2.6.8)

    showing that the temperature, pressure, and chemical potential are also known.

Similarly, if we had an equation for Ω(N, V, E) in terms of N, V, and E as independent variables, we could differentiate its logarithm to get:

d \ln \Omega = \frac{1}{kT} dE + \frac{p}{kT} dV - \frac{\mu}{kT} dN .    (2.6.9)

Of course we have not (yet) developed such a formula. What we have here is only a formalism, a group of mathematical equations that tell us how we could compute things of interest if we actually had a concrete formula for Ω in terms of N, V, and E.

    15And it is carved onto his tombstone, possibly the only equation in human history to be so remembered.


    there too. The result is then:

\Omega(0, M, N) = \binom{M}{N} = \frac{M!}{N! \, (M-N)!} .    (2.7.1)

We can significantly simplify this by applying Stirling's approximation:

\ln \Omega = M \ln M - M - N \ln N + N - (M-N) \ln(M-N) + (M-N) .    (2.7.2)

Then with θ = N/M (note that the linear terms in Equation (2.7.2) cancel):

\frac{1}{M} \ln \Omega = \ln M - \theta \ln N - (1-\theta) \ln(M-N)
                      = \ln M - \theta \ln(\theta M) - (1-\theta) \ln\bigl((1-\theta)M\bigr)
                      = -\theta \ln \theta - (1-\theta) \ln(1-\theta) ,    (2.7.3)

which, after a bit more manipulation becomes:

S(0, M, N)/k = \ln \Omega(0, M, N) = -M \left\{ \theta \ln \theta + (1-\theta) \ln(1-\theta) \right\} ,    (2.7.4)

which, since θ is the mole fraction of up spins in a two-component system, gives exactly the thermodynamic entropy of mixing of two non-interacting components, up spins and down spins.

Figure 2.1: Spin entropy per spin as a function of the fraction θ of up spins

Computation of the chemical potential μ/kT and the pressure p/kT is left as an exercise for the reader.
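The reader may also enjoy checking Equation (2.7.4) numerically. A small Python sketch (the values M = 1000 and N = 250 are arbitrary illustrations) compares the Stirling result with the exact binomial count:

```python
import math

def s_per_spin(theta, k=1.0):
    """Entropy per spin, Eq. (2.7.4) divided by M: -k[th ln th + (1-th) ln(1-th)]."""
    if theta in (0.0, 1.0):
        return 0.0                     # x ln x -> 0 as x -> 0
    return -k * (theta * math.log(theta) + (1.0 - theta) * math.log(1.0 - theta))

# Compare against the exact, pre-Stirling count (1/M) ln[ M! / (N!(M-N)!) ].
M, N = 1000, 250                       # theta = 0.25; values chosen only to illustrate
exact = math.log(math.comb(M, N)) / M
print(s_per_spin(N / M), exact)        # agree to better than one percent
print(s_per_spin(0.5))                 # the maximum, ln 2 = 0.6931...
```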


2.8 Appendix: The Gamma Function and Stirling's Approximation

Stirling's approximation is a useful equation that allows writing factorials in terms of continuous functions. The approximation is:

\ln n! = n \ln n - n .    (2.8.1)

These are the first few terms of an asymptotic expansion for ln n!. An asymptotic expansion is a series expansion that does not converge to the actual value of the function, but that has the property that the ratio of its value and the correct value goes to one as the argument of the function increases in size.

The factorial n! is defined only for the non-negative integers. The factorial of zero is defined as 1. Factorials are a special case of the gamma function Γ(z), where z is a complex variable whose real part is always greater than 0. The gamma function is given by:

\Gamma(z) = \int_0^\infty t^{z-1} e^{-t} \, dt \qquad \text{real part of } z > 0 .    (2.8.2)

    Figure 2.2: The Gamma Function

    It is simple to show that the gamma function obeys the relationship:

\Gamma(z+1) = z \, \Gamma(z) .    (2.8.3)

    To do this we need only integrate

\Gamma(z+1) = \int_0^\infty t^z e^{-t} \, dt ,    (2.8.4)

by parts. Using the formula \int u \, dv = uv - \int v \, du we can choose u = t^z and dv = e^{-t} dt. This gives us v = -e^{-t} and du = z t^{z-1} dt. So

\Gamma(z+1) = \left. -t^z e^{-t} \right|_0^\infty + z \int_0^\infty t^{z-1} e^{-t} \, dt .    (2.8.5)

The first term is zero at both limits and the second gives us z Γ(z).

We can use Equation (2.8.3) to show that the gamma function acts like the factorial for integer z by noting that, for example:

\Gamma(4) = 3 \, \Gamma(3) = 3 \cdot 2 \, \Gamma(2) = 3 \cdot 2 \cdot 1 \, \Gamma(1) ,    (2.8.6)
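A one-line check in Python confirms this behavior at the integers (math.gamma is the standard library's gamma function):

```python
import math

# Gamma(n + 1) reproduces n! at the integers, as Eq. (2.8.6) suggests.
for n in range(6):
    print(n, math.gamma(n + 1), math.factorial(n))
# e.g. math.gamma(4.0) = 6.0 = 3!
```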


A little numerical exploration shows21 that this integrand is peaked at t = z, so we expand the exponent z ln t − t around t = z in a Taylor series. For this we will need the derivatives:

f(t) = z \ln t - t
f'(t) = \frac{z}{t} - 1
f''(t) = -\frac{z}{t^2}    (2.8.15)
f'''(t) = \frac{2z}{t^3}

    so the expansion is:

z \ln t - t \approx z \ln z - z - \frac{(t-z)^2}{2z} + \cdots    (2.8.16)

where terms in (t − z)^3 and higher have been neglected. We now have, for Equation (2.8.14) on the previous page:

\Gamma(z+1) \approx \int_0^\infty e^{\,z \ln z - z - (t-z)^2/2z} \, dt    (2.8.17)

\Gamma(z+1) \approx e^{\,z \ln z - z} \int_0^\infty e^{-(t-z)^2/2z} \, dt .    (2.8.18)

If we now let (t − z)^2/2z = u^2, we get, after some manipulation,

\Gamma(z+1) \approx (2z)^{1/2} z^z e^{-z} \int_{-(z/2)^{1/2}}^{\infty} e^{-u^2} \, du .    (2.8.19)

Because z is assumed to be large and because the integrand falls off rapidly around its peak at u = 0, we can extend the lower limit to −∞ without introducing too much more error. We do this because we can then do the resulting integral,22 which is:

\int_{-\infty}^{\infty} e^{-u^2} \, du = 2 \int_0^\infty e^{-u^2} \, du = \pi^{1/2} .    (2.8.20)

    We then finally get our asymptotic approximation:

\Gamma(z+1) = z! \approx (2\pi z)^{1/2} \, z^z e^{-z} .    (2.8.21)
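Finally, the approximation is easy to test numerically. A short Python sketch, comparing logarithms since direct evaluation of z^z e^{−z} would overflow for large z:

```python
import math

# Compare ln of the asymptotic form (2 pi z)**(1/2) z**z e**(-z) with ln z!.
# z**z overflows ordinary floats for large z, so work with logarithms.
for z in (5, 10, 100, 1000):
    ln_stirling = 0.5 * math.log(2.0 * math.pi * z) + z * math.log(z) - z
    ln_exact = math.log(math.factorial(z))
    print(z, ln_stirling / ln_exact)

# The ratio approaches 1 as z grows, which is precisely the defining
# property of an asymptotic expansion described at the start of this appendix.
```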


Recommended