Copyright © 2003 jsd

Thermodynamics and Differential Forms
John Denker

1  Overview

In thermodynamics, it is common to see equations of the form

dE =  T dS − F·dX              (1)

where E is the energy, T is the temperature, and S is the entropy. In this example, F is the force, and X is a three-component vector specifying the position.

We shall see that the best approach is to interpret the d symbol as the derivative operator. Specifically, dS is the gradient of S. We shall explore various ways of visualizing a gradient... and also ways of visualizing something like T dS that is normally not the gradient of any function. (See reference 1 for details on this.)

This can be formalized using the modern notion of exterior derivative, although if that notion is not familiar to you, don’t worry about it. Everything we need to do can be explained in terms of plain old partial derivatives.
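To make the connection explicit, here is one way equation 1 reads when written out in terms of plain old partial derivatives (a sketch, assuming E is known as a function of S and the three components of X, with the other variables held fixed in each partial derivative):

    \[
      dE \;=\; \frac{\partial E}{\partial S}\,dS
           \;+\; \sum_{i=1}^{3} \frac{\partial E}{\partial X_i}\,dX_i ,
      \qquad\text{so}\qquad
      T \;=\; \frac{\partial E}{\partial S},
      \qquad
      F_i \;=\; -\,\frac{\partial E}{\partial X_i} .
    \]

That is, dE packages all four partial derivatives into a single object, defined at each point in state space.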

Contents

1  Overview
2  Thermodynamic Properties – Real
2.1  Some Examples
2.2  Discussion
2.3  Exterior Derivative versus Differential
2.4  Exterior Derivative versus Gradient
2.5  Exterior Derivative versus Finite Difference; Function of State versus Not
3  Thermodynamic Properties – Unreal
4  Procedure for Extirpating dW and dQ
5  References

2  Thermodynamic Properties – Real

2.1  Some Examples

In thermodynamics, it is common to have a large number of variables that are not all linearly independent. Such a situation is illustrated in figure 1.

Figure 1: Contours of Constant Value (for three different variables)

The idea is that the thermodynamic state of the system is described by a point in some abstract D-dimensional space, but we have more than D variables that we are interested in. Figure 1 portrays a two-dimensional space (D=2), with three variables. You can usually choose D of them to form a linearly-independent basis set, but then the rest of them will be linearly dependent, because of various constraints (the equation of state, conservation laws, boundary conditions, or whatever).
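As a concrete tally (a sketch, using the ideal gas that appears later in this section, with the amount of gas n held fixed): the state space is two-dimensional, so any two nonsingular variables, say S and V, can serve as the basis set, while the remaining variables are pinned down by constraints such as

    \[
      P\,V \;=\; n R\,T
      \qquad\text{and}\qquad
      E \;=\; \tfrac{3}{2}\,n R\,T \quad\text{(monatomic ideal gas)} .
    \]

Once T is known as a function of the chosen pair (S, V), these constraints determine P and E as functions of (S, V) as well; such variables are "dependent" only in the bookkeeping sense, not in any physical sense.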

Note that figure 1 does not show any axes. This is 100% intentional. There is no red axis, green axis, or blue axis; instead there are contours of constant value for the red variable, the green variable, and the blue variable. For more about the importance of such contours, and the unimportance of axes, see reference 2. The so-called red axis would point in the so-called direction of increasing value of the red variable, but in fact there are many directions in which the red variable increases.

In such a situation, if we stay away from singularities, there is no important distinction between “independent” variables and “dependent” variables. Some people say you are free to choose any set of D nonsingular variables and designate them as your “independent” variables ... but usually that’s not worth the trouble, and – as we shall see shortly – it is more convenient and more logical to forget about “independent” versus “dependent” and treat all variables on the same footing.

Singularities can occur in various ways. A familiar example can be found in the middle of a phase transition, such as an ice/water mixture. In a diagram such as figure 1, a typical symptom would be contour lines running together, i.e. the spacing between lines going to zero somewhere.

See reference 3 for an overview of the laws of thermodynamics. Many of the key results in thermodynamics can be nicely formulated using expressions involving the d operator, such as equation 1.

In order to make sense of this, we need to know what kind of things are dE, dS, T dS, et cetera. We would like to be able to visualize them. It turns out that the best way to think about such things is in terms of differential forms in general and one-forms in particular. The details of how to deal with differential forms are explained in reference 4.

But before we get into details, let’s look at some examples.

Consider some gas in a piston. The number of moles of gas remains fixed. We can use the variables S and V to specify where we are in the state space of the system. (Other variables work fine, too, but let’s use those for now.)

Figure 2 shows dV as a function of state. (See reference 3 for what we mean by “function of state”.) Obviously dV is a rather simple one-form. It is in fact a constant everywhere. It denotes a uniform slope up to the right of the diagram. Contours of constant V run vertically in the diagram.

Figure 2: The One-Form dV

Similarly, figure 3 shows dT as a function of state. This, too, is constant everywhere. It indicates a uniform slope up toward the top of the page. Contours of constant T run left-to-right in the diagram.

Figure 3: The One-Form dT (and dE)

Note that the diagram of dT is also a diagram of dE, because for an ideal gas, E is just proportional to T.

Figure 4: The One-Form dP

Things get more interesting in figure 4, which shows dP as a function of state. (We temporarily assume we are dealing with an ideal gas.) Since dP is the gradient of something, we call it a grady one-form, in accordance with the definition given in reference 4. We can see that dP is not a constant. It gets very steep when the temperature is high and/or the gas is squeezed into a small volume. For an ideal gas, the contours of constant P are rays through the origin. For a non-ideal gas, the figure would be qualitatively similar but would differ in details.
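To make figure 4 quantitative, here is a small numerical sketch (not from the original article; it assumes an ideal gas with P = nRT/V, in hypothetical units where nR = 1). It evaluates the two components of dP, namely ∂P/∂V at constant T and ∂P/∂T at constant V, and checks that contours of constant P are rays through the origin of the (V, T) plane:

    # A minimal sketch (not from the original article): components of the
    # one-form dP for an ideal gas, P = n*R*T/V, in units where n*R = 1.

    def P(V, T, nR=1.0):
        """Ideal-gas pressure as a function of state (V, T)."""
        return nR * T / V

    def dP(V, T, nR=1.0):
        """Components of the one-form dP in the (V, T) basis:
        (dP/dV at constant T, dP/dT at constant V)."""
        return (-nR * T / V**2, nR / V)

    # dP gets steep when T is large and/or V is small:
    print(dP(V=1.0, T=1.0))   # (-1.0, 1.0)
    print(dP(V=0.1, T=2.0))   # (-200.0, 10.0)

    # Contours of constant P are rays through the origin: P depends only on T/V.
    print(P(1.0, 2.0), P(2.0, 4.0), P(3.0, 6.0))   # all equal 2.0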

The one-forms dS, dT, dV, and dP are all grady one-forms, so you can integrate them globally, without specifying the path along which the integral is taken. When these variables take on the values implied by figures 2 through 4, if you integrate them “by eye” you can see that T is large along the top of the diagram, V is large along the right edge, and P is large when the temperature is high and/or the volume is small.

Mathematicians have a name for this d operator, namely the exterior derivative. But if that doesn’t mean anything to you, don’t worry about it. For more information about such things, see reference 5 and reference 6.

2.2  Discussion

Here’s a point that is just a technicality now, but will be important later: These diagrams are meant to portray the one-forms directly. They portray the corresponding scalars T, V, and P only indirectly.

Figure 5 shows the difference between a grady one-form and an ungrady one-form.

Figure 5: dS is Grady, T dS is Not

As you can see on the left side of the figure, the quantity dS is grady. If you integrate clockwise around the loop as shown, the net number of upward steps is zero. This is related to the fact that we can assign an unambiguous height (S) to each point in (T,S) space.   In contrast, as you can see on the right side of the diagram, the quantity TdS is not grady. If you integrate clockwise around the loop as shown, there are considerably more upward steps than downward steps. There is no hope of assigning a height “Q” to points in (T,S) space.
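Here is a small numerical sketch of the same loop test (not from the original article; the rectangle and step count are arbitrary choices). It integrates each one-form clockwise around a closed rectangle in (S, T) space by summing the one-form applied to each small step:

    # Minimal sketch: integrate dS and T dS clockwise around a closed loop
    # in (S, T) space.  The rectangle below is an arbitrary example.
    import numpy as np

    def clockwise_loop(S1=0.0, S2=1.0, T1=1.0, T2=2.0, n=1000):
        """Closed rectangular path in (S, T) space, traversed clockwise."""
        s = np.linspace(S1, S2, n)
        t = np.linspace(T1, T2, n)
        left   = np.column_stack([np.full(n, S1), t])        # up the left edge
        top    = np.column_stack([s, np.full(n, T2)])        # across the top
        right  = np.column_stack([np.full(n, S2), t[::-1]])  # down the right edge
        bottom = np.column_stack([s[::-1], np.full(n, T1)])  # back along the bottom
        return np.vstack([left, top, right, bottom])

    def loop_integral(one_form, path):
        """Sum one_form(midpoint) applied to each small step along the path."""
        steps = np.diff(path, axis=0)            # (dS, dT) for each segment
        mids  = 0.5 * (path[:-1] + path[1:])     # midpoint of each segment
        return sum(one_form(S, T) @ step for (S, T), step in zip(mids, steps))

    dS   = lambda S, T: np.array([1.0, 0.0])     # coefficients of (dS, dT)
    T_dS = lambda S, T: np.array([T,   0.0])

    print(loop_integral(dS,   clockwise_loop()))   # ~0.0 : dS is grady
    print(loop_integral(T_dS, clockwise_loop()))   # ~1.0 : T dS is ungrady

The loop integral of dS vanishes, which is why an unambiguous height S can be assigned; the loop integral of T dS equals the enclosed area, which is why no height “Q” exists.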

Be warned that in the mathematical literature, what we are calling ungrady one-forms are called “inexact” one-forms. The two terms are entirely synonymous. A one-form is called “exact” if and only if it is the gradient of something. We avoid the terms “exact” and “inexact” because they are too easily misunderstood. In particular, “inexact” makes the one-form sound approximate, erroneous, or sloppily specified, which it is not: an ungrady one-form such as T dS is perfectly well defined at every point.

Pedagogical remark and suggestion: The idea of representing one-forms in terms of overlapping “fish scales” is not restricted to drawings. It is possible to arrange napkins or playing-cards in a loop such that each one is tucked below the next in clockwise order. This provides a useful hands-on model of an inexact one-form. Counting “steps up” minus “steps down” along a path is a model of integrating along the path.

2.3  Exterior Derivative versus Differential

You may be wondering what is the relationship between the d operator as seen in equation 1, namely the exterior derivative, and the plain old d, the differential, that appears in the corresponding equation in your grandfather’s thermo book:

dE =  T dS − P dV              (2)

The answer goes like this: Traditionally, dE has been called a “differential” and interpreted as a small change in E resulting from some unspecified small step in state space. It’s hard to think of dE as being a function at all, let alone a function of state, because the step is arbitrary. The magnitude and direction of the step are unspecified.

In contrast, the exterior derivative dE is to be interpreted as a machine that says: If you give me a vector that precisely specifies the direction and magnitude of a step in state space, I’ll give you the resulting change in E. If we apply this machine to an uncertain input we will get an uncertain output. But that doesn’t mean that the machine is arbitrary. The machine itself is completely non-arbitrary. The machine is a function of state.

By way of analogy: An ordinary matrix M is a machine that says: If you give me an input vector I, I will give you an output vector O, namely O=(M I). When talking about M, we have several choices: we can focus on the machine M itself, or we can focus on its output (M I) for some particular input I.

This analogy is very tight. Indeed, at every point in state space, dE can be represented by a row vector. That’s the same as saying it can be represented by a non-square matrix. In the example we have been considering, the energy is assumed to be known as a function of four variables (S and the three components of X), so the gradient will be a matrix with one row and four columns.
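Here is a minimal sketch of the machine idea in code (not from the original article; the numerical values of T and F are hypothetical). At a given point in state space, dE is represented by a 1×4 row vector; feeding it a step vector, i.e. ΔS together with the three components of ΔX, returns the corresponding first-order change in E according to equation 1:

    # Minimal sketch of dE as a "machine" (row vector), per equation 1:
    # dE = T dS - F . dX.  The values of T and F below are hypothetical.
    import numpy as np

    T = 300.0                        # temperature at this point in state space
    F = np.array([1.0, 0.0, 2.0])    # force at this point in state space

    dE = np.concatenate([[T], -F])   # 1x4 row vector: the machine itself

    step = np.array([0.01, 0.0, 0.0, -0.005])   # (dS, dX1, dX2, dX3): one input
    print(dE @ step)                 # first-order change in E: 3.0 + 0.01 = 3.01

    # The step was arbitrary; the machine dE was not.  A different step gives
    # a different output, but the same machine:
    print(dE @ np.array([0.0, 0.1, 0.0, 0.0]))   # -0.1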

Operationally, you can (as far as I know) improve every equation in thermodynamics by replacing the plain old differential d with the exterior derivative d. That is, we are replacing the idea of “infinitesimal” with the idea of one-form. To say the same thing in slightly different words: we are shifting attention away from the output of the machine (the differential) onto the machine itself (the exterior derivative). This has several advantages and no known disadvantages. The main advantage is that we have replaced a vague thing with a non-vague thing. The machine dE is a function of state, as are the other machines dP, dS, et cetera. We can draw pictures of them.

Any legitimate equation involving the plain old differential has a corresponding legitimate equation involving the exterior derivative. Of course, if you start with a bogus equation, rewriting it in terms of the exterior derivative leaves it just as bogus, as discussed in section 3. The formalism of differential forms may make the pre-existing errors more obvious, but you mustn’t blame it for causing the errors. Noticing an error is not the same as causing an error.

The notion of grady versus ungrady is not quite the same in the two formalisms: It makes perfect sense to talk about grady and ungrady one-forms. In contrast, as mentioned in section 2.2, it’s hard to talk about an ungrady differential, since if it’s ungrady, it’s not a differential at all, i.e. it’s not the gradient of anything.

2.4  Exterior Derivative versus Gradient

Let’s forget about thermo for a moment, and let’s forget about one-forms. Let’s talk about plain old vector fields. In particular, imagine pressure as a function of position in (x,y,z) space. The pressure gradient is a vector field. I hope you agree that this vector field is perfectly well defined. There is a perfectly real vector at each (x,y,z) point.

A troublemaker might try to claim “the vector is merely a list of three numbers whose numerical values depend on the choice of basis, so the vector is really uncertain, not unique.” That’s a bogus argument. That’s not how we think of the physics. As explained in reference 7, we think of a physical vector as being more real than its components. The vector is a machine which, given a basis, will tell you the numerical values of its components. The components are non-unique, because they depend on the basis, but we attach physical reality to the vector, not the components.

The pressure gradient is a vector field. As discussed in reference 4, there are two different kinds of vectors, leading to two perfectly good ways of representing the pressure gradient: as a field of pointy vectors (column vectors), or as a field of one-forms (row vectors).

If you believe that the field of pointy vectors representing the pressure gradient is unique and well-defined, you ought to believe that the field of one-forms representing the same pressure gradient is equally unique and well-defined.

Given a nice Cartesian metric, in any basis the three numbers representing the pointy vector are numerically equal to the three numbers representing the one-form.

Returning to thermo: Let’s not leave behind all our physical and geometrical intuition when we start doing thermo. Thermo is weird, but it’s not so weird that we have to forget everything we know about vectors.

One-forms are vectors. They are as real as the more-familiar pointy vectors. To say the same thing another way, row vectors are just as real as column vectors.

If you think the pressure gradient dP is real and well-defined when P is a function of (x,y,z) you should think it is just as real and just as well-defined when P is a function of (V,T).

2.5  Exterior Derivative versus Finite Difference; Function of State versus Not

Let us briefly consider taking a finite step (as opposed to an infinitesimal differential). The definition of ΔE is:

ΔE := E_A − E_B              (3)

where B is the initial state and A is the final state. That is, A stands for After and B stands for Before.

Before we can even consider expanding ΔE in terms of PΔV or whatever, we need to decide what kind of thing ΔE is.

Clearly ΔE is a scalar, just like E. It has the same dimensions as E. So far so good.

The problem is, ΔE is not a function of state. It is obviously a function of two states, namely state A and state B.

Let’s see if we can remedy this problem. First we perform a simple change of variable. Rather than using the two points A and B, we will use the single point (A+B)/2 and the direction from B to A. That is, we can consider Δ(⋯) to be a step centered at (A+B)/2 and oriented in the direction from B to A. This notion becomes precise if we take the limit as A approaches B. We now have something that is a function of state, the single state (A+B)/2 ... but it is no longer a scalar, since it involves a direction.

At this point we have essentially reinvented the exterior derivative dE. Whereas ΔE was a scalar function of two states, dE is a vector function of a single state.
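Here is a small numerical illustration of the distinction (not from the original article; it reuses the hypothetical ideal-gas pressure P = T/V, in units where nR = 1). ΔP takes two states as input and returns a scalar; dP takes one state as input and returns a row vector, which then has to be fed a step before it yields a number:

    # Minimal sketch (not from the original): Delta-P versus dP for P = T/V
    # (ideal gas in units where n*R = 1).

    def P(state):
        V, T = state
        return T / V

    def delta_P(before, after):
        """Function of TWO states -> scalar."""
        return P(after) - P(before)

    def dP(state):
        """Function of ONE state -> row vector (dP/dV, dP/dT)."""
        V, T = state
        return (-T / V**2, 1.0 / V)

    before = (1.00, 2.00)
    after  = (1.01, 2.03)                          # a nearby state
    midpoint = tuple((af + be) / 2 for af, be in zip(after, before))
    step     = tuple(af - be for af, be in zip(after, before))

    print(delta_P(before, after))                            # finite difference
    print(sum(c * s for c, s in zip(dP(midpoint), step)))    # dP applied to the step

The two printed numbers agree to first order, and agree ever more closely as the step from B to A shrinks; that is the sense in which dP summarizes all possible small ΔP’s at a given point.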

Let’s review, by looking at some examples. Assuming the system is sufficiently well behaved that it has a well-defined temperature:

name     abscissa → grade of ordinate        type of vector
S        function of state → scalar
T        function of state → scalar
ΔS       function of two states → scalar
dS       function of state → vector          grady one-form
T dS     function of state → vector          ungrady one-form
                                                          (4)

You may be accustomed to thinking of dS as the “limit” of ΔS, in the limit of a really small Δ ... but it must be emphasized that that is not the modern approach. You are much better off interpreting the symbols as follows:

*   Δ(⋯) is a function of two states (Before and After); its output is a scalar.
*   d(⋯) is a function of a single state; its output is a vector, namely a one-form.

These two itemized points are related: Changing the ordinate from scalar to vector is necessary, if we want to change the abscissa from two states to a single state.

3  Thermodynamic Properties – Unreal

In addition to nice expressions such as equation 1, we all-too-often see dreadful expressions such as

T dS = dQ      (allegedly) 
P dV = dW      (allegedly)
             (5)

As will be explained below, T dS is a perfectly fine one-form, but it is not a grady one-form, and therefore it cannot possibly equal dQ or d(anything), assuming we are talking about uncramped thermodynamics.

Note: Cramped thermodynamics is so severely restricted that it is impossible to describe a heat engine. Specifically, in a cramped situation there cannot be any thermodynamic cycles (or if there are, the area inside the “cycle” is zero). If you wish to write something like equation 5 and intend it to apply to cramped thermodynamics, you must make the restrictions explicit; otherwise it will be highly misleading.

The same goes for P dV and many similar quantities that show up in thermodynamics. They cannot possibly equal d(anything) ... assuming we are talking about uncramped thermodynamics.
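For readers who do want the exterior-derivative machinery, here is a one-line check (a sketch, using the standard identity d(f dg) = df ∧ dg and the fact that d applied twice gives zero):

    \[
      d(T\,dS) \;=\; dT \wedge dS \;\neq\; 0
      \qquad\text{and}\qquad
      d(P\,dV) \;=\; dP \wedge dV \;\neq\; 0 ,
      \qquad\text{whereas}\qquad
      d(d\,\text{anything}) \;=\; 0 .
    \]

In uncramped thermodynamics dT ∧ dS and dP ∧ dV are nonvanishing two-forms, so T dS and P dV cannot be d of anything. Equivalently, by Stokes’ theorem, their integrals around a closed cycle equal (up to a sign) the area enclosed in the (T, S) or (P, V) plane, which is nonzero for any nontrivial cycle.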

Trying to find Q such that T dS would equal dQ is equivalent to trying to find the height of the water in an Escher waterfall, as shown in figure 6. It just can’t be done.

Figure 6: Waterfall, by M. C. Escher (1961)

Of course, T dS does exist. You can call it almost anything you like, but you can’t call it dQ or d(anything). If you want to integrate T dS along some path, you must specify the precise path.

Again: P dV makes perfect sense as an ungrady one-form, but trying to write it as dW is tantamount to saying

There is no such thing as a W function, but if it did exist, and if it happened to be differentiable, then its derivative would equal P dV.

What a load of double-talk! Yuuuck!

4  Procedure for Extirpating dW and dQ

Constructive suggestion: If you are reading a book that uses dW and dQ, you can repair it using the following procedure: wherever the book writes dQ, write T dS instead, and wherever it writes dW, write P dV (or −P dV, depending on the book’s sign convention for work). Every resulting expression is a well-defined one-form. It is generally an ungrady one-form, so if you want to integrate it, you must specify the path of integration.
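For example, if the book states the first law in the form dQ = dE + dW (with W taken to be the work done by the system), the repaired version reads

    \[
      T\,dS \;=\; dE \;+\; P\,dV ,
    \]

which is just equation 2 rearranged; every term is now a well-defined one-form, a function of state that can be drawn and integrated along any clearly-specified path.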

5  References

1.
John Denker,
“Visualizing A Field that is Not the Gradient of Any Potential”
//www.av8n.com/physics/non-grady.htm

2.
John Denker,
“Psychrometric Charts, and the Evil of Axes”
//www.av8n.com/physics/axes.htm

3.
John Denker,
“The Laws of Thermodynamics”
//www.av8n.com/physics/thermo-laws.htm

4.
John Denker,
“Basic Properties of Differential Forms”
//www.av8n.com/physics/differential-forms.htm

5.
Todd Rowland,
“Exterior Derivative” in Mathworld (Eric W. Weisstein, ed.),
http://mathworld.wolfram.com/ExteriorDerivative.html

6.
Steven S. Gubser,
"Math for physicists: differential forms”,
http://viper.princeton.edu/~ssgubser/courses/Ph106a01/handouts/forms.pdf

7.
John Denker,
“Fundamental Notions of Vectors”
//www.av8n.com/physics/vector-intro.htm

8.
John Denker,
“Partial Derivatives – Pictorial Interpretation”
//www.av8n.com/physics/partial-derivative.htm

9.
Mathworld entry: “Closed Set”
http://mathworld.wolfram.com/ClosedSet.html

10.
Mathworld entry: “Closed Manifold”
http://mathworld.wolfram.com/ClosedManifold.html

(beware: at some points this assumes the existence of a dot product.)
