Physics PHY-4523, Statistical Mechanics and Thermodynamics, January 2016

Reductionist dogma asserts that once one understands the fundamental laws governing particles (e.g., F = ma or Schrödinger's equation), one can simply integrate those laws to predict the behavior of any system to which they apply. This is a big lie. Consider a gas of molecules confined to a box; to make the example even simpler, imagine that the gas molecules do not interact among themselves, so that the only force encountered is at the walls, where we can assume specular reflection. The motion of any given molecule is then trivial: given its initial position and velocity, it is easy to calculate its position and velocity at any future time. The difficulty is that there are some 10^{23} such molecules in the box; the logbook of initial positions and velocities (which one could never hope to measure) would not fit on all the hard drives in the world. Even if one could do the calculations, the results would be another set of 6 x 10^{23} numbers (six per molecule: three position coordinates and three velocity components), and it is hard to imagine how one could use them in that form.
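To see how trivial a single molecule really is, note that each Cartesian component bounces independently between the walls, so the position at any time follows from folding the free-flight trajectory back into the box (a triangle-wave map). The sketch below illustrates this in Python; the box length, initial position, and velocity are arbitrary illustrative values.

```python
import numpy as np

def reflect(x, L):
    """Fold an unconstrained coordinate back into the box [0, L]
    via specular reflection (triangle-wave map of period 2L)."""
    x = np.mod(x, 2.0 * L)
    return np.where(x > L, 2.0 * L - x, x)

def position(x0, v, t, L):
    """Exact position at time t of a free molecule started at x0
    with velocity v, bouncing specularly off walls at 0 and L."""
    return reflect(x0 + v * t, L)

# One molecule in a 1 m box; each component evolves independently.
x0 = np.array([0.2, 0.5, 0.9])        # initial position (m), illustrative
v  = np.array([300.0, -150.0, 80.0])  # initial velocity (m/s), illustrative
print(position(x0, v, 1e-3, L=1.0))
```

The point of the passage stands: repeating this for 10^{23} molecules is easy in principle and hopeless in practice.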

Statistical mechanics and thermodynamics are concerned with how to extract useful information from systems with large numbers of degrees of freedom. Of course, everyone is already familiar with the example of the ideal classical gas; the "solution" to the problem of dealing with 6 x 10^{23} degrees of freedom reduces them to just three "state" variables: volume, pressure, and temperature. The somewhat awkward title of the course reflects two differing historical approaches. Traditional thermodynamics is a beautiful, self-contained gem based on a small number of axioms governing the state variables; its development in the nineteenth century preceded the modern atomistic understanding of matter. In contrast, statistical mechanics begins with the quantum picture of nature and derives the axioms of thermodynamics, although its range of application is actually broader.
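The three state variables of the ideal gas are tied together by the familiar equation of state PV = Nk_BT. A minimal numeric sketch (SI units throughout; the 300 K, 1 m^3 case is just an illustrative choice):

```python
k_B = 1.380649e-23    # Boltzmann constant, J/K (exact in SI since 2019)
N_A = 6.02214076e23   # Avogadro's number (molecules per mole)

def pressure(N, T, V):
    """Pressure of an ideal gas of N molecules at temperature T (K)
    in volume V (m^3), from PV = N k_B T."""
    return N * k_B * T / V

# One mole at room temperature in a 1 m^3 box:
print(pressure(N_A, 300.0, 1.0))   # roughly 2.5 kPa
```

Three numbers replace 6 x 10^{23}: that compression is the subject of the course.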