Copyright © 2003 jsd
In thermodynamics, it is common to see equations of the form
dE = T dS − F·dX (1) 
where E is the energy, T is the temperature, and S is the entropy. In this example, F is the force, and X is a three-component vector specifying the position.
We shall see that the best approach is to interpret the d symbol as the derivative operator. Specifically, dS is the gradient of S. We shall explore various ways of visualizing a gradient... and also ways of visualizing something like T dS that is normally not the gradient of any function. (See reference 1 for details on this.)
This can be formalized using the modern notion of exterior derivative, although if that notion is not familiar to you, don’t worry about it. Everything we need to do can be explained in terms of plain old partial derivatives.
In thermodynamics, it is common to have a large number of variables that are not all linearly independent. Such a situation is illustrated in figure 1.
The idea is that the thermodynamic state of the system is described by a point in some abstract D-dimensional space, but we have more than D variables that we are interested in. Figure 1 portrays a two-dimensional space (D=2), with three variables. You can usually choose D of them to form a linearly-independent basis set, but then the rest of them will be linearly dependent, because of various constraints (the equation of state, conservation laws, boundary conditions, or whatever).
Note that figure 1 does not show any axes. This is 100% intentional. There is no red axis, green axis, or blue axis; instead there are contours of constant value for the red variable, the green variable, and the blue variable. For more about the importance of such contours, and the unimportance of axes, see reference 2. The so-called red axis would point in the so-called direction of increasing value of the red variable, but in fact there are many directions in which the red variable increases.
In such a situation, if we stay away from singularities, there is no important distinction between “independent” variables and “dependent” variables. Some people say you are free to choose any set of D nonsingular variables and designate them as your “independent” variables ... but usually that’s not worth the trouble, and – as we shall see shortly – it is more convenient and more logical to forget about “independent” versus “dependent” and treat all variables on the same footing.
Singularities can occur in various ways. A familiar example can be found in the middle of a phase transition, such as an ice/water mixture. In a diagram such as figure 1, a typical symptom would be contour lines running together, i.e. the spacing between lines going to zero somewhere.
See reference 3 for an overview of the laws of thermodynamics. Many of the key results in thermodynamics can be nicely formulated using expressions involving the d operator, such as equation 1.
In order to make sense of this, we need to know what kind of things dE, dS, T dS, et cetera are. We would like to be able to visualize them. It turns out that the best way to think about such things is in terms of differential forms in general and one-forms in particular. The details of how to deal with differential forms are explained in section 5.
But before we get into details, let’s look at some examples.
Consider some gas in a piston. The number of moles of gas remains fixed. We can use the variables S and V to specify where we are in the state space of the system. (Other variables work fine, too, but let’s use those for now.)
Figure 2 shows dV as a function of state. (See reference 3 for what we mean by “function of state”.) Obviously dV is a rather simple one-form. It is in fact a constant everywhere. It denotes a uniform slope up to the right of the diagram. Contours of constant V run vertically in the diagram.
Similarly, figure 3 shows dT as a function of state. This, too, is constant everywhere. It indicates a uniform slope up toward the top of the page. Contours of constant T run left-to-right in the diagram.
Note that the diagram of dT is also a diagram of dE, because for an ideal gas, E is just proportional to T.
Things get more interesting in figure 4, which shows dP as a function of state. (We temporarily assume we are dealing with an ideal gas.) Since dP is the gradient of something, we call it a grady one-form, in accordance with the definition given in item 20. We can see that dP is not a constant. It gets very steep when the temperature is high and/or the gas is squeezed into a small volume. For an ideal gas, the contours of constant P are rays through the origin. For a non-ideal gas, the figure would be qualitatively similar but would differ in details.
The one-forms dS, dT, dV, and dP are all grady one-forms, so you can integrate them globally, without specifying the path along which the integral is taken. When these variables take on the values implied by figure 4, if you integrate them “by eye” you can see that T is large along the top of the diagram, V is large along the right edge, and P is large when the temperature is high and/or the volume is small.
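As a quick numerical sketch of these claims (not part of the original, and with an arbitrary value standing in for nR): for an ideal gas P = cT/V, so P really is constant along rays through the origin of the (V,T) plane, and the coefficients of dP really do blow up at high T and small V.

```python
import math

c = 8.314  # plays the role of nR; the numerical value is invented

def P(V, T):
    return c * T / V

def dP(V, T):
    # coefficients of the one-form dP in the (dV, dT) coordinate basis
    return (-c * T / V**2, c / V)

# Contours of constant P are rays through the origin of the (V, T) plane:
for s in (1.0, 2.0, 5.0):
    assert abs(P(1.0 * s, 300.0 * s) - P(1.0, 300.0)) < 1e-9

# dP gets steep (large magnitude) at high T and/or small V:
def mag(V, T):
    a, b = dP(V, T)
    return math.hypot(a, b)

assert mag(0.5, 600.0) > mag(1.0, 300.0)
print("ok")
```

Nothing here depends on the value of c; only the shape of the contour pattern matters.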
Mathematicians have a name for this d operator, namely the exterior derivative. But if that doesn’t mean anything to you, don’t worry about it. For more information about such things, see reference 4 and reference 5.
Here’s a point that is just a technicality now, but will be important later: These diagrams are meant to portray the oneforms directly. They portray the corresponding scalars T, V, and P only indirectly.
Figure 5 shows the difference between a grady one-form and an ungrady one-form.
As you can see on the left side of the figure, the quantity dS is grady. If you integrate clockwise around the loop as shown, the net number of upward steps is zero. This is related to the fact that we can assign an unambiguous height (S) to each point in (T,S) space. In contrast, as you can see on the right side of the diagram, the quantity T dS is not grady. If you integrate clockwise around the loop as shown, there are considerably more upward steps than downward steps. There is no hope of assigning a height “Q” to points in (T,S) space.
Be warned that in the mathematical literature, what we are calling ungrady one-forms are called “inexact” one-forms. The two terms are entirely synonymous. A one-form is called “exact” if and only if it is the gradient of something. We avoid the terms “exact” and “inexact” because they are too easily misunderstood. In particular:
 In this context, exact is not even remotely the same as accurate.
 In this context, inexact is not even remotely the same as inaccurate.
 In this context, inexact does not mean “plus or minus something”.
 In this context, exact just means grady. An exact one-form is the gradient of some potential.
Pedagogical remark and suggestion: The idea of representing one-forms in terms of overlapping “fish scales” is not restricted to drawings. It is possible to arrange napkins or playing cards in a loop such that each one is tucked below the next in clockwise order. This provides a useful hands-on model of an inexact one-form. Counting “steps up” minus “steps down” along a path is a model of integrating along the path.
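The step-counting game can be played numerically. The following sketch (with invented numbers, not from the original) walks counterclockwise around a circle in (T,S) space, accumulating the steps dS and T dS. For the grady form dS the steps cancel around the loop; for the ungrady form T dS they do not.

```python
import math

def loop_integrals(T0, S0, r=1.0, steps=100000):
    """Integrate dS and T dS counterclockwise around a circle in (T,S) space."""
    int_dS = 0.0    # running total for the grady one-form dS
    int_TdS = 0.0   # running total for the ungrady one-form T dS
    da = 2 * math.pi / steps
    for k in range(steps):
        a = (k + 0.5) * da               # midpoint of the k-th segment
        T = T0 + r * math.cos(a)
        dS = r * math.cos(a) * da        # the step in S along this segment
        int_dS += dS
        int_TdS += T * dS
    return int_dS, int_TdS

i_dS, i_TdS = loop_integrals(300.0, 10.0)
# "Steps up" cancel "steps down" for dS, but not for T dS,
# whose loop integral equals the area enclosed (pi * r**2 here):
print(i_dS, i_TdS)
```

The nonzero loop integral of T dS is exactly what forbids assigning a height “Q” to points in (T,S) space.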
You may be wondering what is the relationship between the d operator (the exterior derivative) as seen in equation 1 and the plain old differential d that appears in the corresponding equation in your grandfather’s thermo book:
dE = T dS − P dV (2) 
The answer goes like this: Traditionally, dE has been called a “differential” and interpreted as a small change in E resulting from some unspecified small step in state space. It’s hard to think of dE as being a function at all, let alone a function of state, because the step is arbitrary. The magnitude and direction of the step are unspecified.
In contrast, dE is to be interpreted as a machine that says: If you give me a vector that precisely specifies the direction and magnitude of a step in state space, I’ll give you the resulting change in E. If we apply this machine to an uncertain input we will get an uncertain output. But that doesn’t mean that the machine is arbitrary. The machine itself is completely non-arbitrary. The machine is a function of state.
By way of analogy: An ordinary matrix M is a machine that says: If you give me an input vector I, I will give you an output vector O, namely O=(M I). When talking about M, we have several choices:
This analogy is very tight. Indeed, at every point in state space, dE can be represented by a row vector. That’s the same as saying it can be represented by a non-square matrix. In the example we have been considering, the state of the system is assumed to be known as a function of four variables (S and the three components of X), so the gradient will be a matrix with one row and four columns.
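Here is a minimal sketch of the “machine” picture, with invented numbers (T and the force components below are placeholders, not derived from any real system): dE at one state is a 1×4 row of coefficients, and contracting it with a step vector yields the change in E.

```python
# dE at one particular state, as a 1x4 row of coefficients (T, -F1, -F2, -F3),
# following equation 1.  All numerical values are invented for illustration.
T = 300.0
F = (1.0, 2.0, 3.0)
dE_row = [T, -F[0], -F[1], -F[2]]       # one row, four columns

def contract(one_form, step):
    """Feed the machine a step vector (dS, dX1, dX2, dX3); get back the change in E."""
    return sum(a * b for a, b in zip(one_form, step))

# The machine itself is fixed; only its input varies:
print(contract(dE_row, [0.01, 0.0, 0.0, 0.0]))   # a step purely in S
print(contract(dE_row, [0.0, 0.1, 0.0, 0.0]))    # a step purely in X1
```

An uncertain input step gives an uncertain output, but the row of coefficients, the machine, is a definite function of state.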
Operationally, you can (as far as I know) improve every equation in thermodynamics by reinterpreting the d symbol as the exterior derivative. That is, we are replacing the idea of “infinitesimal” with the idea of one-form. To say the same thing in slightly different words: we are shifting attention away from the output of the machine (the infinitesimal change) onto the machine itself (the derivative operator). This has several advantages and no known disadvantages. The main advantage is that we have replaced a vague thing with a non-vague thing. The machine dE is a function of state, as are the other machines dP, dS, et cetera. We can draw pictures of them.
Any legitimate equation involving the old-style differential has a corresponding legitimate equation involving the exterior derivative. Of course, if you start with a bogus equation and reinterpret d this way, it’s still bogus, as discussed in section 3. The formalism of differential forms may make the pre-existing errors more obvious, but you mustn’t blame it for causing the errors. Noticing an error is not the same as causing an error.
The notion of grady versus ungrady is not quite the same in the two formalisms: It makes perfect sense to talk about grady and ungrady one-forms. In contrast, as mentioned in section 2.2, it’s hard to talk about an ungrady differential, since if it’s ungrady, it’s not a differential at all, i.e. it’s not the gradient of anything.
Let’s forget about thermo for a moment, and let’s forget about oneforms. Let’s talk about plain old vector fields. In particular, imagine pressure as a function of position in (x,y,z) space. The pressure gradient is a vector field. I hope you agree that this vector field is perfectly well defined. There is a perfectly real vector at each (x,y,z) point.
A troublemaker might try to claim “the vector is merely a list of three numbers whose numerical values depend on the choice of basis, so the vector is really uncertain, not unique.” That’s a bogus argument. That’s not how we think of the physics. As explained in reference 6, we think of a physical vector as being more real than its components. The vector is a machine which, given a basis, will tell you the numerical values of its components. The components are nonunique, because they depend on the basis, but we attach physical reality to the vector, not the components.
The pressure gradient is a vector field. As we shall see in detail in section 5, there are two different kinds of vectors, leading to two perfectly good ways of representing the pressure gradient:
If you believe that the field of pointy vectors representing the pressure gradient is unique and well-defined, you ought to believe that the field of one-forms representing the same pressure gradient is equally unique and well-defined.
Given a nice Cartesian metric, in any basis the three numbers representing the pointy vector are numerically equal to the three numbers representing the one-form.
Returning to thermo: Let’s not leave behind all our physical and geometrical intuition when we start doing thermo. Thermo is weird, but it’s not so weird that we have to forget everything we know about vectors.
One-forms are vectors. They are as real as the more-familiar pointy vectors. To say the same thing another way, row vectors are just as real as column vectors.
If you think the pressure gradient dP is real and well-defined when P is a function of (x,y,z), you should think it is just as real and just as well-defined when P is a function of (V,T).
Let us briefly consider taking a finite step (as opposed to an infinitesimal differential). The definition of ΔE is:
ΔE := E_{A} − E_{B} (3) 
where B is the initial state and A is the final state. That is, A stands for After and B stands for Before.
Before we can even consider expanding ΔE in terms of PΔV or whatever, we need to decide what kind of thing ΔE is.
Clearly ΔE is a scalar, just like E. It has the same dimensions as E. So far so good.
The problem is, ΔE is not a function of state. It is obviously a function of two states, namely state A and state B.
Let’s see if we can remedy this problem. First we perform a simple change of variable. Rather than using the two points A and B, we will use the single point (A+B)/2 and the direction A−B. That is, we can consider Δ(⋯) to be a step centered at (A+B)/2 and oriented in the A−B direction. This notion becomes precise if we take the limit as A approaches B. We now have something that is a function of state, the single state (A+B)/2 ... but it is no longer a scalar, since it involves a direction.
At this point we have essentially reinvented the exterior derivative dE. Whereas ΔE was a scalar function of two states, dE is a vector function of a single state.
Let’s review, by looking at some examples. Assuming the system is sufficiently well behaved that it has a welldefined temperature:
ΔE ≈ T ΔS − P ΔV      (a scalar; a function of two states)
dE = T dS − P dV      (a one-form; a function of one state)   (4) 
You may be accustomed to thinking of dS as the “limit” of ΔS, in the limit of a really small Δ ... but it must be emphasized that that is not the modern approach. You are much better off interpreting the symbols as follows:
 dS is a vector (a one-form), not a scalar.
 dS is a function of a single state, whereas ΔS is a function of two states.
These two itemized points are related: Changing the ordinate from scalar to vector is necessary, if we want to change the abscissa from two states to a single state.
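The relationship between the two-state ΔE and the one-state dE can be checked numerically. In this sketch (the “energy” function E below is invented purely for illustration), the contraction of dE at the midpoint state with the step vector predicts the finite difference ΔE better and better as A approaches B.

```python
import math

def E(S, V):
    # a made-up smooth function of state, for illustration only
    return math.exp(S) * V

def dE(S, V):
    # coefficients of dE in the (dS, dV) coordinate basis
    return (math.exp(S) * V, math.exp(S))

mid = (1.0, 2.0)              # the single state (A+B)/2
direction = (0.6, 0.8)        # the A-B direction

errors = []
for eps in (1e-1, 1e-2, 1e-3):
    A = (mid[0] + 0.5 * eps * direction[0], mid[1] + 0.5 * eps * direction[1])
    B = (mid[0] - 0.5 * eps * direction[0], mid[1] - 0.5 * eps * direction[1])
    delta = E(*A) - E(*B)                       # depends on TWO states
    cS, cV = dE(*mid)                           # depends on ONE state
    predicted = eps * (cS * direction[0] + cV * direction[1])
    errors.append(abs(delta - predicted))

# The mismatch shrinks rapidly as A approaches B:
assert errors[0] > errors[1] > errors[2]
print(errors)
```

Note that dE(*mid) never needed to know the step; the step is supplied only when the machine is applied.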
In addition to nice expressions such as equation 1, we all too often see dreadful expressions such as
T dS = dQ    (allegedly)   (5) 
As will be explained below, T dS is a perfectly fine one-form, but it is not a grady one-form, and therefore it cannot possibly equal dQ or d(anything), assuming we are talking about uncramped thermodynamics.
Note: Cramped thermodynamics is so severely restricted that it is impossible to describe a heat engine. Specifically, in a cramped situation there cannot be any thermodynamic cycles (or if there are, the area inside the “cycle” is zero). If you wish to write something like equation 5 and intend it to apply to cramped thermodynamics, you must make the restrictions explicit; otherwise it will be highly misleading.
The same goes for P dV and many similar quantities that show up in thermodynamics. They cannot possibly equal d(anything) ... assuming we are talking about uncramped thermodynamics.
Trying to find Q such that T dS would equal dQ is equivalent to trying to find the height of the water in an Escher waterfall, as shown in figure 6. It just can’t be done.
Of course, T dS does exist. You can call it almost anything you like, but you can’t call it dQ or d(anything). If you want to integrate T dS along some path, you must specify the precise path.
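To see numerically why the path must be specified, here is a sketch (with invented states and temperatures) that integrates T dS along two different paths in (T,S) space between the same two endpoints:

```python
def integrate_TdS(path):
    """Integrate the one-form T dS along a piecewise-linear path of (T, S) points."""
    total = 0.0
    for (T0, S0), (T1, S1) in zip(path, path[1:]):
        total += 0.5 * (T0 + T1) * (S1 - S0)   # trapezoid rule: T times the step in S
    return total

A, B = (200.0, 1.0), (400.0, 3.0)              # (T, S) pairs, values invented
path1 = [A, (200.0, 3.0), B]                   # raise S first (at T=200), then raise T
path2 = [A, (400.0, 1.0), B]                   # raise T first, then raise S (at T=400)

w1 = integrate_TdS(path1)                      # 200 * (3 - 1) = 400
w2 = integrate_TdS(path2)                      # 400 * (3 - 1) = 800
assert w1 != w2
print(w1, w2)
```

Same endpoints, different answers; so no function Q with T dS = dQ can exist, since it would have to satisfy both Q(B) − Q(A) = 400 and Q(B) − Q(A) = 800.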
Again: P dV makes perfect sense as an ungrady one-form, but trying to write it as dW is tantamount to saying
There is no such thing as a W function, but if it did exist, and if it happened to be differentiable, then its derivative would equal P dV.
What a load of doubletalk! Yuuuck!
Constructive suggestion: If you are reading a book that uses dW and dQ, you can repair it using the following procedure: wherever you see dQ, read it as T dS (or, where appropriate, T dS_{transferred}); wherever you see dW, read it as P dV (minding the sign convention). These are perfectly fine ungrady one-forms; just don’t pretend they are d(anything).
As for the idea that T dS > T dS_{transferred} for an irreversible process, we cannot accept that at face value. For one thing, we would have problems at negative temperatures. We can fix that by getting rid of the T on both sides of the equation. Another problem is that according to the modern interpretation of the symbols, dS is a vector, and it is not possible to define a “greater-than” relation involving vectors. That is to say, vectors are not well ordered. We can fix this by integrating. The relevant equation is:
∫_{Γ} dS > ∫_{Γ} dS_{transferred}   (6) 
for some definite path Γ. We need Γ to specify the “forward” direction of the transformation; otherwise the inequality wouldn’t mean anything. We have an inequality, not an equality, because we are considering an irreversible process.
At the end of the day, we find that the assertion that «T dS is greater than dQ» is just a complicated and defective way of saying that the irreversible process created some entropy from scratch.
Note: The underlying idea is that for an irreversible process, entropy is not conserved, so we don’t have continuity of flow. Therefore the classical approach was a bad idea to begin with, because it tried to define entropy in terms of heat divided by temperature, and tried to define heat in terms of flow. That was a bad idea on practical grounds and pedagogical grounds, in the case where entropy is being created from scratch rather than flowing. It was a bad idea on conceptual grounds, even before it was expressed using symbols such as dQ that don’t make sense on mathematical grounds.
Beware: The classical thermo books are inconsistent. Even within a single book, even within a single chapter, sometimes they use dQ to mean the entire T dS and sometimes only the T dS_{transferred}.
We define differential forms to have the following properties:
B = f [dx_{1}] + g [dx_{2}] + ⋯   (7) 
for arbitrary scalar-valued functions f, g, et cetera. So we are using the set {[dx_{i}]} as a basis.
There exist pointy vectors, which are relatively familiar to most people. They can be represented by an arrow with a tip and a tail. In the language of linear algebra, these are column vectors.
There exist one-forms, which are less familiar to most people. They can be represented by contour lines and/or fish scales. In the language of linear algebra, these are row vectors.
As we shall see, pointy vectors and oneforms have quite a few properties in common, but there are also some crucial differences, so be careful. Item 12 discusses one of the differences you need to watch out for.
df(x_{1}, x_{2}, ⋯) = ∑_{i} (∂f/∂x_{i}) [dx_{i}]   (8) 
where in the ith term of the sum, the partial derivative holds constant all the arguments to f() except for the x_{i} argument. The notation for this is clumsy, but the idea is important. The partial derivative is really a directional derivative in a direction specified by holding constant an entire set of variables except for one … so it is crucial to know the entire set, not just the one variable that is nominally being differentiated with respect to. For details on this, including ways to visualize what it means, see reference 7.
An example is shown in figure 7. The intensity of the shading depicts the height of the function F := sin(x_{1})sin(x_{2}) while the contour lines depict the exterior derivative dF.
dx_{i} = [dx_{i}]   (9) 
which is convenient. It simplifies the notation.
Technically speaking, [dx_{1}] exists by fiat, according to item 2, while dx_{1} is something you can calculate according to equation 8. On a daytoday basis you don’t care about the distinction, but it would have been cheating to assume they are equal. We needed to keep them conceptually distinct just long enough to prove they are numerically equal.
Suppose we want to visualize the gradient of some landscape. If you visualize the gradient as a pointy vector, it points uphill. In many cases, though, you are better off visualizing the gradient as a oneform, corresponding to contour lines that run across the slope.
You can judge the magnitude of the 1-form according to how closely packed the contour lines are. Closely-packed contours represent a large-magnitude 1-form. To say the same thing the other way, the spacing between contours is inversely related to the magnitude of the one-form.
Contour lines have the wonderful property that they behave properly under a change of coordinates: if you take a landscape such as the one in figure 7 and stretch it horizontally (keeping the altitudes the same) as shown in figure 8, the slopes become gentler. The contour lines on the corresponding topographic map spread out by the same stretch factor, as they should, to represent the lesser slope. In contrast, if you try to represent the gradient by pointy vectors, the representation is completely broken by a change in coordinates. As you stretch the map, position vectors and displacement vectors get longer, but gradient vectors have to get shorter, to represent the lesser slope. Therefore we need different representations. We represent positions and displacements using pointy vectors, but we represent gradients using 1-forms.
To say the same thing the other way, representing a gradient using a pointy vector would be a bad idea; such vectors would not behave properly. They would not be “attached” to the landscape the way contour lines are.
Of course, pointy vectors are needed also; they are appropriate for representing the location of one point relative to another in this landscape. These location vectors do stretch as they should when we stretch the map. An example of this is shown in red in figure 8.
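The stretch argument can be written out with numbers. In this sketch (the landscape, height = x + y/2, is invented for illustration), stretching the map by a factor of 2 in x doubles the x-component of a displacement but halves the x-component of the gradient, so the physically meaningful quantity, the change in height along the displacement, comes out the same either way.

```python
stretch = 2.0   # stretch the map horizontally: x_new = stretch * x_old

# In old coordinates, for the invented landscape height = x + y/2:
grad_old = (1.0, 0.5)      # one-form dh: coefficients of (dx, dy)
disp_old = (2.0, 1.0)      # a displacement (pointy vector) between two points

# In new coordinates the same landscape reads height = x_new/stretch + y/2,
# so the gradient coefficient in x SHRINKS while the displacement GROWS:
grad_new = (grad_old[0] / stretch, grad_old[1])
disp_new = (disp_old[0] * stretch, disp_old[1])

def contract(form, vec):
    return sum(a * b for a, b in zip(form, vec))

# The change in height along the displacement is coordinate-independent:
assert contract(grad_old, disp_old) == contract(grad_new, disp_new)
print(contract(grad_old, disp_old))
```

The two kinds of vector transform oppositely (contravariantly and covariantly), which is exactly what keeps the contraction invariant.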
It is important to clearly distinguish the two types of vector:
Type of vector:  pointy vector  oneform  
Another name:  contravariant vector  covariant vector  
Represented by:  column vector  row vector  
Dirac notation:  ket: |⋯⟩  bra: ⟨⋯|  
Example:  displacement  gradient  
Stretching the map:  increases distance  decreases steepness 
Consider the contrast:
In some spaces, we have a metric. That is, we have a dot product. That allows us to determine the length of a vector, and to determine the angle between two vectors. In such a space, we have a geometry (not just a topology). Ordinary Cartesian (x,y,z) space is a familiar example. In such a space, we can use the metric to transpose a vector. Transposing a row vector creates the corresponding column vector and vice versa. That gives us a one-to-one correspondence: For any pointy vector you can find a corresponding one-form and vice versa. Technically, the row vectors always live in their own space, while the column vectors always live in another space ... but given a metric, the two spaces are isomorphic, and people usually don’t bother to distinguish them.
In contrast, there are other spaces where we do not have a metric. We do not have a dot product. We do not have any notion of angle, and not much notion of length or distance. We have a topology, but not a geometry. Thermodynamic state-space is an important example. We can measure the distance (in units of S) between contours of constant S, but we cannot compare that to any distance in any other direction. In such a non-metric space, there is no way of converting 1-forms to pointy vectors or vice versa. There is no way of finding a 1-form that uniquely “corresponds” to a given pointy vector or vice versa. Without a metric, the two spaces remain distinct. It makes sense to visualize dE as a one-form, i.e. as contours of constant E ... but it does not make sense to visualize dE as any kind of pointy vector.
A 1-form has a direction, but we cannot measure the angle between two such directions. You can say that we have a topology but not a geometry. This sounds like a terrible limitation, but it is actually the right thing for thermodynamics, because typically you have no way of knowing whether dS is “perpendicular” to dV or not, and it causes all sorts of trouble if you use a mathematical formalism that assumes you can measure angles when you can’t.
Among other things, this means that we do not require the coordinates x_{1}, x_{2}, ⋯ to be mutually perpendicular. Since there is no notion of distance or angle, we could not make them perpendicular even if we wanted to. See figure 1.
dx_{i} ∧ dx_{j} = −dx_{j} ∧ dx_{i} (10) 
for all (i, j).
Ungrady force fields are common in the real world. See reference 1 for more about how to visualize such things.
A conspicuously ungrady form w is shown in figure 9. You can imagine that this represents the 1-form w := P dV (aka “work”) in a slightly idealized heat engine. The direction of the 1-form (i.e. the uphill direction) is everywhere counterclockwise. This w is a perfectly fine 1-form, but you cannot write w = dW because w cannot be the slope of any potential W. The concept of slope is locally well-defined, and you can integrate the slope along a particular path from A to B, but you cannot use this integral to define a potential difference W(B) − W(A) because the result depends very much on which path you choose. This is like Escher’s famous “Waterfall” shown in figure 6.
To repeat: You are free to write w = P dV. That is a perfectly fine 1-form, well-defined at every point in the state space. In contrast, it is not OK to write w = dW or P dV = dW, because that cannot be well-defined throughout state space, except perhaps in trivial cases. (You might be able to define something like that on a one-dimensional subspace, along a particular path through the system, but then you would need to decorate “W” with all sorts of subscripts to indicate exactly which subspace you are talking about.)
A more subtle example of an ungrady form is discussed in item 23 below.
d(A ∧ B) = dA ∧ B + (−1)^{k} A ∧ dB (11) 
where A has grade=k.
dd = 0 (12) 
This important result can be expressed in words: “the boundary of any boundary is zero”.
Before we explain why this is so, we should emphasize that dd is not the most general secondderivative operator. Rather, it is the antisymmetric part of the second derivative, in accordance with equation 11. So what we are saying is that the antisymmetric part of the second derivative vanishes.
The antisymmetric piece of the second derivative necessarily vanishes, because of the mathematically-guaranteed symmetry of mixed partial derivatives:
(∂/∂x)_{y} (∂/∂y)_{x} f ≡ (∂/∂y)_{x} (∂/∂x)_{y} f   (13) 
This is true for all f, assuming f is sufficiently differentiable, so that the indicated derivatives exist.
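The closed-figure argument can be checked with finite steps, as a numerical sketch (the function f below is invented; any smooth function would do). Note that the two finite-difference expressions visit the same four corners of the same closed figure, just in a different order, so they contain literally the same terms.

```python
import math

def f(x, y):
    return math.sin(x) * math.exp(y)   # a made-up smooth function

h = 1e-4   # finite step size

def ddx_then_ddy(x, y):
    # difference in y first, then in x: the clockwise route through the corners
    return (f(x+h, y+h) - f(x+h, y-h) - f(x-h, y+h) + f(x-h, y-h)) / (4*h*h)

def ddy_then_ddx(x, y):
    # difference in x first, then in y: the counterclockwise route
    return (f(x+h, y+h) - f(x-h, y+h) - f(x+h, y-h) + f(x-h, y-h)) / (4*h*h)

a, b = ddx_then_ddy(0.3, 0.7), ddy_then_ddx(0.3, 0.7)
assert abs(a - b) < 1e-6                              # same closed figure
assert abs(a - math.cos(0.3) * math.exp(0.7)) < 1e-3  # the true mixed partial
print(a)
```

Either route ends at the same corner, which is the finite-step version of the statement that the antisymmetric part of the second derivative vanishes.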
Figure 10 and Figure 11 show what’s going on. We will use these figures to discuss finite steps (as indicated by Δ) instead of infinitesimals, but the same ideas apply in the limit of very small steps. In particular, Δx|_{y} means to take a step toward increasing x along a contour of constant y. Similarly Δy|_{x} means to take a step toward increasing y along a contour of constant x.
In accordance with the usual operator-precedence rules, the interpretation of the LHS of equation 13 is:
(∂/∂x)_{y} (∂/∂y)_{x} f   means   (∂/∂x)_{y} ( (∂/∂y)_{x} (f) )   (14) 
That is, we work from right to left, first taking a step toward increasing y along a contour of constant x, then taking a step toward increasing x along a contour of constant y. For example, in the figure, this would correspond to proceeding clockwise from (0,0) via (0,1) to (1,1).
Meanwhile, the RHS of equation 13 tells us to proceed counterclockwise in the figure, from (0,0) via (1,0) to (1,1). The point is that we get to the same point either way. That is, the clockwise trip we just took, together with the counterclockwise trip, form a “closed” figure.
This result is nontrivial. Although the boundary of a boundary is zero, the boundary of “something else” is not necessarily zero. For example:
d(T dS) = dT ∧ dS ≠ 0   (15) 
Forms that are closed, including figure 7 and figure 12, have the property that the “contour” lines in one region mesh nicely with the lines in adjacent regions. In a non-closed form such as figure 9, the meshing fails somewhere. (Commonly it fails everywhere.)
Beware that this notion of “closed one-form” is not equivalent to the notion of “closed set” (containing its limit points) nor to the notion of “closed manifold” (compact without boundary). See reference 8 and reference 9.
∫_{A→B} dF = F(B) − F(A)   (16) 
The meaning is simple: the integral measures the number of contours that you cross in going from point A to point B. For a grady 1-form, this number is independent of the path you take along the way from A to B.
This integral is, of course, a linear operator.
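Path independence for a grady form can be verified numerically. In this sketch the potential F = x·y is invented for illustration; its gradient one-form is integrated along a direct path and along a detour between the same endpoints.

```python
def grad_F(x, y):
    # coefficients of dF for the invented potential F = x * y
    return (y, x)

def integrate(form, path, steps=20000):
    """Midpoint-rule line integral of a one-form along a piecewise-linear path."""
    total = 0.0
    for (x0, y0), (x1, y1) in zip(path, path[1:]):
        for k in range(steps):
            t = (k + 0.5) / steps
            fx, fy = form(x0 + t*(x1-x0), y0 + t*(y1-y0))
            total += (fx * (x1-x0) + fy * (y1-y0)) / steps
    return total

A, B = (0.0, 0.0), (2.0, 3.0)
direct = [A, B]
detour = [A, (5.0, -1.0), B]

# Both routes cross the same net number of contours: F(B) - F(A) = 6.
print(integrate(grad_F, direct))
print(integrate(grad_F, detour))
```

Contrast this with the T dS example earlier, where the two paths gave different answers.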
B = f_{i}(x) dx_{i} (17) 
We are using the Einstein summation convention, i.e. implied summation over repeated indices, such as index i in this equation.
As explained in section 6, the integral of this is:
∫_{C} B = ∫_{C} f_{i}(x) dx_{i}   (18) 
To understand how we integrate a one-form B along the curve C, start by breaking the curve into small segments, delimited by parameter values {θ1, θ2, ⋯} along the curve, and integrating each segment separately:
∫_{C} B = ∫_{θ1}^{θ2} f_{i} dx_{i} + ∫_{θ2}^{θ3} f_{i} dx_{i} + ⋯   (19) 
and if f is a sufficiently smooth function and if C is a sufficiently smooth curve, and if the points {θ1, θ2, ⋯} are sufficiently close together, then we can treat f as being locally constant and pull it out in front of the integrals:
∫_{C} B ≈ f_{i}(C(θ1)) ∫_{θ1}^{θ2} dx_{i} + f_{i}(C(θ2)) ∫_{θ2}^{θ3} dx_{i} + ⋯   (20) 
Now we have grady forms inside the integrals, so we can integrate them immediately using equation 16. We get
∫_{C} B ≈ f_{i}(C(θ1)) [C_{i}(θ2) − C_{i}(θ1)] + f_{i}(C(θ2)) [C_{i}(θ3) − C_{i}(θ2)] + ⋯   (21) 
where we have described the point C(θ) using an expansion in terms of the basis vectors:
C(θ) = C_{i}(θ) x_{i} (22) 
Equation 21 is beginning to look like a familiar Riemann integral. In fact it is just
∫_{C} B = ∫ f_{i}(C(θ)) (∂C_{i}(θ)/∂θ) dθ   (23) 
In equation 23, do not think of the integrand as a dot product, even though it involves the same sum-of-products you would use for evaluating f · ∂C/∂θ. We do not have a dot product. The operation here is a contraction. A contraction involves a one-form acting on a pointy vector. In this case the one-form is f and the pointy vector is ∂C/∂θ. In equation 21, you can visualize [C_{i}(θ2) − C_{i}(θ1)] as a pointy vector with its tip at C(θ2) and its tail at C(θ1).
We can carry out the contraction of a one-form with a pointy vector. We cannot carry out the dot product of two one-forms, nor the dot product of two pointy vectors. Think of one-forms as 1×D matrices (one row and D columns) and pointy vectors as D×1 matrices.
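In matrix terms the point is purely about shapes, as this small sketch shows (the numbers are invented): a 1×D row times a D×1 column is legal and yields a 1×1 scalar, while row-times-row or column-times-column fails the shape check, just as there is no dot product of two one-forms without a metric.

```python
row = [[2.0, 3.0, 5.0]]              # one-form: 1 x 3
col = [[1.0], [0.0], [4.0]]          # pointy vector: 3 x 1

def matmul(A, B):
    # plain matrix multiplication, with an explicit shape check
    assert len(A[0]) == len(B), "shapes do not match"
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

print(matmul(row, col))              # a 1x1 result, i.e. a scalar: the contraction

try:
    matmul(row, row)                 # "dot product of two one-forms"
except AssertionError as e:
    print("refused:", e)
```

With a metric you could first transpose one factor and then multiply; without a metric there is no transpose, so the contraction is the only product available.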
As an example, consider integrating the one-form
f := (−x_{2}/r^{2}) dx_{1} + (x_{1}/r^{2}) dx_{2}   (24) 
where r := √(x_{1}^{2} + x_{2}^{2}). This one-form is depicted, with fair accuracy, in figure 9. We wish to integrate it along a curve C which is a circular path of radius R, centered on the origin, so that along C:
C_{1}(θ) = R cos(θ),   C_{2}(θ) = R sin(θ),   for θ from 0 to 2π   (25) 
Plugging in to equation 23 we find
∫_{C} f = ∫_{0}^{2π} [ (−sin(θ)/R)(−R sin(θ)) + (cos(θ)/R)(R cos(θ)) ] dθ = ∫_{0}^{2π} dθ = 2π   (26) 
(beware: at some points this assumes the existence of a dot product.)
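As a numerical check of this example (reading equation 24 as the “angle” one-form (−x_{2}/r²) dx_{1} + (x_{1}/r²) dx_{2}, which matches the counterclockwise pattern of figure 9), the loop integral comes out to 2π for every radius, and that nonzero answer is the signature of an ungrady form:

```python
import math

def f(x1, x2):
    r2 = x1*x1 + x2*x2
    return (-x2 / r2, x1 / r2)       # coefficients of (dx1, dx2)

def integrate_around_circle(R, steps=100000):
    total = 0.0
    da = 2 * math.pi / steps
    for k in range(steps):
        a = (k + 0.5) * da
        x1, x2 = R * math.cos(a), R * math.sin(a)
        dx1, dx2 = -R * math.sin(a) * da, R * math.cos(a) * da
        f1, f2 = f(x1, x2)
        total += f1 * dx1 + f2 * dx2  # contraction on each small segment
    return total

# The answer is 2*pi regardless of R; a grady form would give zero:
for R in (0.5, 1.0, 3.0):
    print(integrate_around_circle(R))
```

A nonzero answer on a closed loop is exactly the Escher-waterfall situation: there can be no potential whose gradient this form is.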