Copyright © 2003 jsd

Thermodynamics and Differential Forms
John Denker

1  Overview

In thermodynamics, it is common to see equations of the form

dE =  T dS − F·dX              (1)

where E is the energy, T is the temperature, and S is the entropy. In this example, F is the force, and X is a three-component vector specifying the position.

We shall see that the best approach is to interpret the d symbol as the derivative operator. Specifically, dS is the gradient of S. We shall explore various ways of visualizing a gradient... and also ways of visualizing something like TdS that is normally not the gradient of any function. (See reference 1 for details on this.)

This can be formalized using the modern notion of exterior derivative, although if that notion is not familiar to you, don’t worry about it. Everything we need to do can be explained in terms of plain old partial derivatives.

Contents

1  Overview
2  Thermodynamic Properties – Real
2.1  Some Examples
2.2  Discussion
2.3  Exterior Derivative versus Differential
2.4  Exterior Derivative versus Gradient
2.5  Exterior Derivative versus Finite Difference; Function of State versus Not
3  Thermodynamic Properties – Unreal
4  Procedure for Extirpating dW and dQ
5  Basic Properties of Differential Forms
6  Integrating a One-Form
6.1  Explanation
6.2  No Dot Product
6.3  Example
7  References

2  Thermodynamic Properties – Real

2.1  Some Examples

In thermodynamics, it is common to have a large number of variables that are not all linearly independent. Such a situation is illustrated in figure 1.

Figure 1: Contours of Constant Value (for three different variables)

The idea is that the thermodynamic state of the system is described by a point in some abstract D-dimensional space, but we have more than D variables that we are interested in. Figure 1 portrays a two-dimensional space (D=2), with three variables. You can usually choose D of them to form a linearly-independent basis set, but then the rest of them will be linearly dependent, because of various constraints (the equation of state, conservation laws, boundary conditions, or whatever).

Note that figure 1 does not show any axes. This is 100% intentional. There is no red axis, green axis, or blue axis; instead there are contours of constant value for the red variable, the green variable, and the blue variable. For more about the importance of such contours, and the unimportance of axes, see reference 2. The so-called red axis would point in the so-called direction of increasing value of the red variable, but in fact there are many directions in which the red variable increases.

In such a situation, if we stay away from singularities, there is no important distinction between “independent” variables and “dependent” variables. Some people say you are free to choose any set of D nonsingular variables and designate them as your “independent” variables ... but usually that’s not worth the trouble, and – as we shall see shortly – it is more convenient and more logical to forget about “independent” versus “dependent” and treat all variables on the same footing.

Singularities can occur in various ways. A familiar example can be found in the middle of a phase transition, such as an ice/water mixture. In a diagram such as figure 1, a typical symptom would be contour lines running together, i.e. the spacing between lines going to zero somewhere.

See reference 3 for an overview of the laws of thermodynamics. Many of the key results in thermodynamics can be nicely formulated using expressions involving the d operator, such as equation 1.

In order to make sense of this, we need to know what kind of things are dE, dS, T dS, et cetera. We would like to be able to visualize them. It turns out that the best way to think about such things is in terms of differential forms in general and one-forms in particular. The details of how to deal with differential forms is explained in section 5.

But before we get into details, let’s look at some examples.

Consider some gas in a piston. The number of moles of gas remains fixed. We can use the variables S and V to specify where we are in the state space of the system. (Other variables work fine, too, but let’s use those for now.)

Figure 2 shows dV as a function of state. (See reference 3 for what we mean by “function of state”.) Obviously dV is a rather simple one-form. It is in fact a constant everywhere. It denotes a uniform slope up to the right of the diagram. Contours of constant V run vertically in the diagram.

Figure 2: The One-Form dV

Similarly, figure 3 shows dT as a function of state. This, too, is constant everywhere. It indicates a uniform slope up toward the top of the page. Contours of constant T run left-to-right in the diagram.

Figure 3: The One-Form dT (and dE)

Note that the diagram of dT is also a diagram of dE, because for an ideal gas, E is just proportional to T.

Figure 4: The One-Form dP

Things get more interesting in figure 4, which shows dP as a function of state. (We temporarily assume we are dealing with an ideal gas.) Since dP is the gradient of something, we call it a grady one-form, in accordance with the definition given in item 20. We can see that dP is not a constant. It gets very steep when the temperature is high and/or the gas is squeezed into a small volume. For an ideal gas, the contours of constant P are rays through the origin. For a non-ideal gas, the figure would be qualitatively similar but would differ in details.

The one-forms dS, dT, dV, and dP are all grady one-forms, so you can integrate them globally, without specifying the path along which the integral is taken. When these variables take on the values implied by figure 4, if you integrate them “by eye” you can see that T is large along the top of the diagram, V is large along the right edge, and P is large when the temperature is high and/or the volume is small.

Mathematicians have a name for this d operator, namely the exterior derivative. But if that doesn’t mean anything to you, don’t worry about it. For more information about such things, see reference 4 and reference 5.

2.2  Discussion

Here’s a point that is just a technicality now, but will be important later: These diagrams are meant to portray the one-forms directly. They portray the corresponding scalars T, V, and P only indirectly.

Figure 5 shows the difference between a grady one-form and an ungrady one-form.

Figure 5: dS is Grady, TdS is Not

As you can see on the left side of the figure, the quantity dS is grady. If you integrate clockwise around the loop as shown, the net number of upward steps is zero. This is related to the fact that we can assign an unambiguous height (S) to each point in (T,S) space.

In contrast, as you can see on the right side of the diagram, the quantity TdS is not grady. If you integrate clockwise around the loop as shown, there are considerably more upward steps than downward steps. There is no hope of assigning a height “Q” to points in (T,S) space.

Be warned that in the mathematical literature, what we are calling ungrady one-forms are called “inexact” one-forms. The two terms are entirely synonymous: a one-form is called “exact” if and only if it is the gradient of something. We avoid the terms “exact” and “inexact” because they are too easily misunderstood.

Pedagogical remark and suggestion: The idea of representing one-forms in terms of overlapping “fish scales” is not restricted to drawings. It is possible to arrange napkins or playing-cards in a loop such that each one is tucked below the next in clockwise order. This provides a useful hands-on model of an ungrady one-form. Counting “steps up” minus “steps down” along a path is a model of integrating along the path.
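The counting model above can be mimicked numerically. The following sketch (Python; the rectangular loop, the temperature range, and all helper names are made up for illustration) integrates the one-forms dS and T dS around a closed loop in (T,S) space: the loop integral of the grady form vanishes, while the loop integral of the ungrady form does not.

```python
import numpy as np

def line_integral(omega, path, n=2000):
    """Integrate a one-form omega(T, S) -> (w_T, w_S) along path(t), t in [0, 1]."""
    t = np.linspace(0.0, 1.0, n + 1)
    pts = np.array([path(ti) for ti in t])    # columns: (T, S)
    mid = 0.5 * (pts[1:] + pts[:-1])          # midpoint of each small segment
    d = pts[1:] - pts[:-1]                    # segment steps (dT, dS)
    w = np.array([omega(T, S) for T, S in mid])
    return float(np.sum(w[:, 0] * d[:, 0] + w[:, 1] * d[:, 1]))

def rectangle(t):
    """A closed loop in (T, S) space: T from 300 to 400, S from 1 to 2 (toy values)."""
    u = 4.0 * t
    if u < 1.0:
        return (300.0, 1.0 + u)                    # S increases at T = 300
    if u < 2.0:
        return (300.0 + 100.0 * (u - 1.0), 2.0)    # T increases at S = 2
    if u < 3.0:
        return (400.0, 2.0 - (u - 2.0))            # S decreases at T = 400
    return (400.0 - 100.0 * (u - 3.0), 1.0)        # T decreases at S = 1

loop_dS = line_integral(lambda T, S: (0.0, 1.0), rectangle)   # the one-form dS
loop_TdS = line_integral(lambda T, S: (0.0, T), rectangle)    # the one-form T dS

print(loop_dS)    # ≈ 0: dS is grady, so the loop integral vanishes
print(loop_TdS)   # ≈ -100: T dS is ungrady; the loop integral does not vanish
```

The nonzero loop integral of T dS is exactly the "more upward steps than downward steps" of the fish-scale model.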

2.3  Exterior Derivative versus Differential

You may be wondering what is the relationship between the exterior-derivative operator d as seen in equation 1 and the plain old differential d that appears in the corresponding equation in your grandfather’s thermo book:

dE =  T dS − P dV              (2)

The answer goes like this: Traditionally, dE has been called a “differential” and interpreted as a small change in E resulting from some unspecified small step in state space. It’s hard to think of dE as being a function at all, let alone a function of state, because the step is arbitrary. The magnitude and direction of the step are unspecified.

In contrast, the exterior derivative dE is to be interpreted as a machine that says: If you give me a vector that precisely specifies the direction and magnitude of a step in state space, I’ll give you the resulting change in E. If we apply this machine to an uncertain input we will get an uncertain output. But that doesn’t mean that the machine is arbitrary. The machine itself is completely non-arbitrary. The machine is a function of state.

By way of analogy: An ordinary matrix M is a machine that says: If you give me an input vector I, I will give you an output vector O, namely O=(M I). When talking about M, we have several choices: we can focus on the output (M I) that results from some particular input, or we can focus on the machine M itself, which is perfectly well defined even before any input has been chosen.

This analogy is very tight. Indeed, at every point in state space, dE can be represented by a row vector. That’s the same as saying it can be represented by a non-square matrix. In the example we have been considering, the state of the system is assumed to be known as a function of four variables (S and the three components of X) so the gradient will be a matrix with one row and four columns.

Operationally, you can (as far as I know) improve every equation in thermodynamics by replacing the old-style differential d with the exterior derivative d. That is, we are replacing the idea of “infinitesimal” with the idea of one-form. To say the same thing in slightly different words: we are shifting attention away from the output of the machine onto the machine itself. This has several advantages and no known disadvantages. The main advantage is that we have replaced a vague thing with a non-vague thing. The machine dE is a function of state, as are the other machines dP, dS, et cetera. We can draw pictures of them.
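The “machine” picture can be made concrete with a few lines of code. This sketch (Python, using a deliberately unphysical toy function E(S,V), not any real equation of state) represents dE at a point by its row vector of partial derivatives; applying that row vector to a small step-vector reproduces the change in E to first order.

```python
import numpy as np

# A toy (unphysical) function of state E(S, V), purely to illustrate the machinery.
def E(S, V):
    return S**2 / V

# The machine dE: at each point in state space, a row vector of partial derivatives.
def dE(S, V):
    return np.array([2.0 * S / V, -S**2 / V**2])   # [dE/dS, dE/dV]

point = (1.5, 2.0)
step = np.array([1e-4, -2e-4])                     # a small step-vector (dS, dV)

predicted = float(dE(*point) @ step)               # the machine applied to the step
actual = E(point[0] + step[0], point[1] + step[1]) - E(*point)

print(predicted, actual)   # agree to first order in the step size
```

Note that the machine dE is defined at the point before any step has been chosen; the step is just one possible input.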

Any legitimate equation involving the old-style differential has a corresponding legitimate equation involving the exterior derivative. Of course, if you start with a bogus equation and change the notation, it’s still bogus, as discussed in section 3. The formalism of differential forms may make the pre-existing errors more obvious, but you mustn’t blame it for causing the errors. Noticing an error is not the same as causing an error.

The notion of grady versus ungrady is not quite the same in the two formalisms: It makes perfect sense to talk about grady and ungrady one-forms. In contrast, as mentioned in section 2.2, it’s hard to talk about an ungrady differential, since if it’s ungrady, it’s not a differential at all, i.e. it’s not the gradient of anything.

2.4  Exterior Derivative versus Gradient

Let’s forget about thermo for a moment, and let’s forget about one-forms. Let’s talk about plain old vector fields. In particular, imagine pressure as a function of position in (x,y,z) space. The pressure gradient is a vector field. I hope you agree that this vector field is perfectly well defined. There is a perfectly real vector at each (x,y,z) point.

A troublemaker might try to claim “the vector is merely a list of three numbers whose numerical values depend on the choice of basis, so the vector is really uncertain, not unique.” That’s a bogus argument. That’s not how we think of the physics. As explained in reference 6, we think of a physical vector as being more real than its components. The vector is a machine which, given a basis, will tell you the numerical values of its components. The components are non-unique, because they depend on the basis, but we attach physical reality to the vector, not the components.

The pressure gradient is a vector field. As we shall see in detail in section 5, there are two different kinds of vectors, leading to two perfectly good ways of representing the pressure gradient:

If you believe that the field of pointy vectors representing the pressure gradient is unique and well-defined, you ought to believe that the field of one-forms representing the same pressure gradient is equally unique and well-defined.

Given a nice Cartesian metric, in any basis the three numbers representing the pointy vector are numerically equal to the three numbers representing the one-form.

Returning to thermo: Let’s not leave behind all our physical and geometrical intuition when we start doing thermo. Thermo is weird, but it’s not so weird that we have to forget everything we know about vectors.

One-forms are vectors. They are as real as the more-familiar pointy vectors. To say the same thing another way, row vectors are just as real as column vectors.

If you think the pressure gradient dP is real and well-defined when P is a function of (x,y,z) you should think it is just as real and just as well-defined when P is a function of (V,T).

2.5  Exterior Derivative versus Finite Difference; Function of State versus Not

Let us briefly consider taking a finite step (as opposed to an infinitesimal differential). The definition of ΔE is:

ΔE := EA − EB              (3)

where B is the initial state and A is the final state. That is, A stands for After and B stands for Before.

Before we can even consider expanding ΔE in terms of PΔV or whatever, we need to decide what kind of thing ΔE is.

Clearly ΔE is a scalar, just like E. It has the same dimensions as E. So far so good.

The problem is, ΔE is not a function of state. It is obviously a function of two states, namely state A and state B.

Let’s see if we can remedy this problem. First we perform a simple change of variable. Rather than using the two points A and B, we will use the single point (A+B)/2 and the direction from B to A. That is, we can consider Δ(⋯) to be a step centered at (A+B)/2 and oriented in the direction from B to A. This notion becomes precise if we take the limit as A approaches B. We now have something that is a function of state, the single state (A+B)/2 ... but it is no longer a scalar, since it involves a direction.

At this point we have essentially reinvented the exterior derivative dE. Whereas ΔE was a scalar function of two states, dE is a vector function of a single state.

Let’s review, by looking at some examples. Assuming the system is sufficiently well behaved that it has a well-defined temperature:

name     abscissa → grade of ordinate  type of vector
S  function of  state → scalar  
T  function of  state → scalar  
ΔS  function of  two states → scalar  
dS  function of  state → vector  grady one-form
T dS  function of  state → vector  ungrady one-form

You may be accustomed to thinking of dS as the “limit” of ΔS, in the limit of a really small Δ ... but it must be emphasized that that is not the modern approach. You are much better off interpreting the symbols as follows:

*   ΔS is a scalar-valued function of two states (Before and After).
*   dS is a vector-valued function of a single state: at each point it is a one-form, i.e. a machine that maps a step-vector onto the resulting change in S.

These two itemized points are related: Changing the ordinate from scalar to vector is necessary, if we want to change the abscissa from two states to a single state.

3  Thermodynamic Properties – Unreal

In addition to nice expressions such as equation 1, we all-too-often see dreadful expressions such as

T dS = dQ      (allegedly)              (4)
P dV = dW      (allegedly)              (5)

As will be explained below, T dS is a perfectly fine one-form, but it is not a grady one-form, and therefore it cannot possibly equal dQ or d(anything), assuming we are talking about uncramped thermodynamics.

Note: Cramped thermodynamics is so severely restricted that it is impossible to describe a heat engine. Specifically, in a cramped situation there cannot be any thermodynamic cycles (or if there are, the area inside the “cycle” is zero). If you wish to write something like equation 5 and intend it to apply to cramped thermodynamics, you must make the restrictions explicit; otherwise it will be highly misleading.

The same goes for P dV and many similar quantities that show up in thermodynamics. They cannot possibly equal d(anything) ... assuming we are talking about uncramped thermodynamics.

Trying to find Q such that T dS would equal dQ is equivalent to trying to find the height of the water in an Escher waterfall, as shown in figure 6. It just can’t be done.

Figure 6: Waterfall, by M. C. Escher (1961)

Of course, T dS does exist. You can call it almost anything you like, but you can’t call it dQ or d(anything). If you want to integrate T dS along some path, you must specify the precise path.

Again: P dV makes perfect sense as an ungrady one-form, but trying to write it as dW is tantamount to saying

There is no such thing as a W function, but if it did exist, and if it happened to be differentiable, then its derivative would equal P dV.

What a load of double-talk! Yuuuck!

4  Procedure for Extirpating dW and dQ

Constructive suggestion: If you are reading a book that uses dW and dQ, you can repair it using the following procedures:

*   Wherever dW appears, replace it with P dV (or −P dV, depending on the book’s sign convention), understood as an ungrady one-form.
*   Wherever dQ appears, replace it with T dS, likewise understood as an ungrady one-form.
*   Wherever such a quantity is integrated, specify the precise path of integration.

5  Basic Properties of Differential Forms

We define differential forms to have the following properties:

1.    We assume the existence of a space with coordinates x1, x2, ⋯. In thermodynamics we might choose the coordinates to be V, S, ⋯. You can choose almost any coordinates you like; the approach outlined here works for any set of coordinates, assuming they are nonsingular, along with a few other mild restrictions. The coordinates do not need to be mutually perpendicular; see item 15.

2.    For each i, we postulate the existence of something denoted [dxi] and call it a differential form. (We shall soon prove that we can do without the square brackets, but for the moment they are part of the definition.)

3.    Every differential form has a grade. The forms just mentioned have grade=1. They are called 1-forms for short.

4.    A plain old scalar is considered a grade=0 form. This includes scalar-valued functions f(x1, x2, ⋯).

5.    In simple cases at least, you may interpret a 1-form as measuring a slope. For example, [dx1] represents something with unit slope, sloping up in the x1 direction, as exemplified in figure 2. A good example of a more complicated 1-form is geographic slope, as depicted by the contour lines on a topographic map. Closely-spaced contour lines represent a steep slope. See item 10 for more on this.

6.    The most general 1-form is a linear combination of other 1-forms, such as

B = f(x1, x2, ⋯) [dx1] + g(x1, x2, ⋯) [dx2] + ⋯              (7)

for arbitrary scalar-valued functions f, g, et cetera. So we are using the set {[dxi]} as a basis.

7.    The alert reader may have noticed that forms have a magnitude and direction, and they behave just like vectors. That’s true and important. If you define the notion of “vector” properly, one-forms are definitely vectors. However it is crucial to keep in mind the distinction:

*   There exist pointy vectors, which are relatively familiar to most people. They can be represented by an arrow with a tip and a tail. In the language of linear algebra, these are column vectors.
*   There exist one-forms, which are less familiar to most people. They can be represented by contour-lines and/or fish-scales. In the language of linear algebra, these are row vectors.

As we shall see, pointy vectors and one-forms have quite a few properties in common, but there are also some crucial differences, so be careful. Item 12 discusses one of the differences you need to watch out for.

8.    We define the exterior derivative operator d applied to a scalar function as follows, in terms of our basis set {[dxi]} and the chain rule:

df(x1, x2, ⋯)  =  ∑i  ( ∂f / ∂xi )|{all xj except xi}  [dxi]              (8)

where in the ith term of the sum, the partial derivative holds constant all the arguments to f() except for the xi argument. The notation for this is clumsy, but the idea is important. The partial derivative is really a directional derivative in a direction specified by holding constant an entire set of variables except for one … so it is crucial to know the entire set, not just the one variable that is nominally being differentiated with respect to. For details on this, including ways to visualize what it means, see reference 7.

An example is shown in figure 7. The intensity of the shading depicts the height of the function F := sin(x1)sin(x2) while the contour-lines depict the exterior derivative dF.

Figure 7: A Function and its Exterior Derivative
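The relationship between F and dF in figure 7 can be checked using plain old partial derivatives. This sketch (Python; the evaluation point is arbitrary) compares the analytic components of dF, obtained from equation 8, against central finite differences:

```python
import numpy as np

def F(x1, x2):
    return np.sin(x1) * np.sin(x2)

# Components of dF in the basis {[dx1], [dx2]}, by plain old partial derivatives:
def dF(x1, x2):
    return np.array([np.cos(x1) * np.sin(x2), np.sin(x1) * np.cos(x2)])

p = (0.7, 1.2)   # an arbitrary point
h = 1e-6

# Central finite differences as an independent check of each component.
# Note that each difference holds the *other* coordinate constant.
num = np.array([
    (F(p[0] + h, p[1]) - F(p[0] - h, p[1])) / (2 * h),
    (F(p[0], p[1] + h) - F(p[0], p[1] - h)) / (2 * h),
])
exact = dF(*p)
print(num, exact)   # the two agree closely
```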

9.    If you choose f(x1, x2, ⋯) = x1 in equation 8, you can easily prove that

dx1 = [dx1]              (9)

which is convenient. It simplifies the notation.

Technically speaking, [dx1] exists by fiat, according to item 2, while dx1 is something you can calculate according to equation 8. On a day-to-day basis you don’t care about the distinction, but it would have been cheating to assume they are equal. We needed to keep them conceptually distinct just long enough to prove they are numerically equal.

10.    You can visualize a pointy-vector as a little arrow with a “tip” and a “tail”, but you should not visualize a 1-form the same way.

Suppose we want to visualize the gradient of some landscape. If you visualize the gradient as a pointy vector, it points uphill. In many cases, though, you are better off visualizing the gradient as a one-form, corresponding to contour lines that run across the slope.

You can judge the magnitude of the 1-form according to how closely packed the contour lines are. Closely-packed contours represent a large-magnitude 1-form. To say the same thing the other way, the spacing between contours is inversely related to the magnitude of the one-form.

Contour lines have the wonderful property that they behave properly under a change of coordinates: if you take a landscape such as the one in figure 7 and stretch it horizontally (keeping the altitudes the same) as shown in figure 8, the slopes become less steep. The contour lines on the corresponding topographic map spread out by the same stretch factor, as they should, to represent the lesser slope. In contrast, if you try to represent the gradient by pointy vectors, the representation is completely broken by a change in coordinates. As you stretch the map, position vectors and displacement vectors get longer, but gradient vectors have to get shorter, to represent the lesser slope. Therefore we need different representations. We represent positions and displacements using pointy vectors, but we represent gradients using 1-forms.

To say the same thing the other way, representing a gradient using a pointy vector would be a bad idea; such vectors would not behave properly. They would not be “attached” to the landscape the way contour lines are.

Figure 8: Stretching a Coordinate Increases the Distance and Decreases the Steepness

Of course, pointy vectors are needed also; they are appropriate for representing the location of one point relative to another in this landscape. These location vectors do stretch as they should when we stretch the map. An example of this is shown in red in figure 8.

It is important to clearly distinguish the two types of vector:

Type of vector:   pointy vector   one-form
Another name:   contravariant vector   covariant vector
Represented by:   column vector   row vector
Dirac notation:   ket:  |⋯⟩   bra:  ⟨⋯|
Example:   displacement   gradient
Stretching the map:   increases distance   decreases steepness

11.    Some remarks about the terminology:

*   The terms “1-form” and “one-form” are synonymous and are used interchangeably here.
*   In the older literature, pointy vectors are called contravariant vectors, while one-forms are called covariant vectors, as summarized in the table in item 10.


12.    Consider the contrast:

*   In some spaces, we have a metric. That is, we have a dot product. That allows us to determine the length of a vector, and to determine the angle between two vectors. In such a space, we have a geometry (not just a topology). Ordinary Cartesian (x,y,z) space is a familiar example.
*   There are other spaces where we do not have a metric. We do not have a dot product. We do not have any notion of angle, and not much notion of length or distance. We have a topology, but not a geometry. Thermodynamic state-space is an important example. We can measure the distance (in units of S) between contours of constant S, but we cannot compare that to any distance in any other direction.

*   In a space with a metric, we can use the metric to transpose a vector. Transposing a row vector creates the corresponding column vector and vice versa. That gives us a one-to-one correspondence: For any pointy vector you can find a corresponding one-form and vice versa.
*   In a non-metric space, there is not any way of converting 1-forms to pointy vectors or vice versa. There is not any way of finding a 1-form that uniquely “corresponds” to a given pointy vector or vice versa.

Technically, the row vectors always live in their own space, while the column vectors always live in another space.

*   Given a metric, the two spaces are isomorphic, and people usually don’t bother to distinguish them.
*   Without a metric, the two spaces remain distinct. It makes sense to visualize dE as a one-form, i.e. as contours of constant E ... but it does not make sense to visualize dE as any kind of pointy vector.

13.    Note that when we stretch the map, as in figure 8, the following topological property is preserved: The red arrow crosses three contours. The “number of crossings” is three, both before and after the stretch. After the stretch, the steepness is less but the distance is greater. More precisely, the product between the gradient-vector and the separation-vector is unchanged. (This type of product is called a contraction, as discussed in section 6.2.) This is related to the fact that in this example, height is a function of state. Height is unchanged by horizontal stretching. The tip of the red arrow is three units higher than the tail, both before and after the stretch.
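The invariance of the “number of crossings” can be verified with a two-line calculation. In this sketch (Python, with made-up component values), stretching the x coordinate doubles the displacement component while halving the corresponding one-form component, leaving the contraction unchanged:

```python
import numpy as np

g = np.array([3.0, 5.0])   # components of a gradient (one-form) in (x, y); toy values
d = np.array([0.2, 0.4])   # components of a displacement (pointy vector); toy values

s = 2.0                    # stretch the map: x -> s*x, y unchanged
g_stretched = np.array([g[0] / s, g[1]])   # one-form components shrink (covariant)
d_stretched = np.array([d[0] * s, d[1]])   # displacement components grow (contravariant)

before = float(g @ d)                      # contraction = number of contour crossings
after = float(g_stretched @ d_stretched)
print(before, after)   # the contraction is unchanged by the stretch
```

The gradient transforms one way and the displacement transforms the other way, so their contraction is coordinate-independent, just as the crossing count in figure 8 is.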

14.    In three dimensions, rather than having contour “lines” we have contour “shells”, like the layers of an onion. If T is the temperature in the room, you can visualize dT as shells, each shell representing a constant temperature. More generally, in D dimensions, the contours are objects with dimensionality D−1.

15.    There are many situations – including thermodynamics – where it is not possible to define any notion of length or angle. To say the same thing in mathematical terms, it is not possible to define a dot product.

A 1-form has a direction, but we cannot measure the angle between two such directions. You can say that we have a topology but not a geometry. This sounds like a terrible limitation, but it is actually the right thing for thermodynamics, because typically you have no way of knowing whether dS is “perpendicular” to dV or not, and it causes all sorts of trouble if you use a mathematical formalism that assumes you can measure angles when you can’t.

Among other things, this means that we do not require the coordinates x1, x2, ⋯ to be mutually perpendicular. Since there is no notion of distance or angle, we could not make them perpendicular even if we wanted to. See figure 1.

16.    The wedge product of two 1-forms is written dxi ∧ dxj and is a grade=2 differential form, called a 2-form for short.

17.    The wedge product is associative: A ∧ (B ∧ C) = (A ∧ B) ∧ C. This means we can take the wedge product of forms of any grade without worrying about parentheses.

18.    The wedge product between grade=1 forms is antisymmetric:

dxi ∧ dxj = −dxj ∧ dxi              (10)

for all (i, j).

19.    As a consequence of the foregoing, the wedge product between an odd-grade form and an even-grade form is symmetric: A ∧ dx1 = dx1 ∧ A if A is a scalar or a 2-form, even though it is antisymmetric if A is a 1-form. For this reason, it is OK to omit the wedge symbol when multiplying something by a scalar, as in equation 7.

20.    A differential form F is called grady if it is the exterior derivative of some other form: F = dφ. A good example from thermodynamics is the form PdV + VdP, which is grady because it equals d(PV). In contrast, PdV by itself is not grady.

Non-grady force fields are common in the real world. See reference 1 for more about how to visualize such things.

Figure 9: A One-Form that is Not Grady

A conspicuously ungrady form w is shown in figure 9. You can imagine that this represents the 1-form w := PdV (aka “work”) in a slightly-idealized heat engine. The direction of the 1-form (i.e. the uphill direction) is everywhere counterclockwise. This w is a perfectly fine 1-form, but you cannot write w = dW because w cannot be the slope of any potential W. The concept of slope is locally well-defined, and you can integrate the slope along a particular path from A to B, but you cannot use this integral to define a potential difference W(B) − W(A) because the result depends very much on which path you choose. This is like Escher’s famous “Waterfall” shown in figure 6.

To repeat: You are free to write w = PdV. That is a perfectly fine 1-form, well-defined at every point in the state space. In contrast, it is not OK to write w = dW or PdV = dW, because that cannot be well-defined throughout state space, except perhaps in trivial cases. (You might be able to define something like that on a one-dimensional subspace, along a particular path through the system, but then you would need to decorate “W” with all sorts of subscripts to indicate exactly which subspace you are talking about.)
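The path dependence of ∫ P dV is easy to exhibit for an ideal gas. This sketch (Python; one mole, with toy endpoint states chosen for illustration) evaluates the work along two different paths joining the same two states:

```python
import numpy as np

R = 8.314  # J/(mol K); one mole of ideal gas, so P = R*T/V

def isothermal_work(T, V1, V2):
    """Integral of P dV along an isotherm (constant T) for an ideal gas."""
    return R * T * np.log(V2 / V1)

# Two paths from state A = (V=1, T=300) to state B = (V=2, T=400):
# Path 1: expand at 300 K, then heat at constant V (the constant-V leg
#         contributes nothing to the integral of P dV).
W1 = isothermal_work(300.0, 1.0, 2.0)
# Path 2: heat at constant V first, then expand at 400 K.
W2 = isothermal_work(400.0, 1.0, 2.0)

print(W1, W2)   # different values for the same endpoints: P dV is ungrady
```

Since the two paths share the same endpoints but give different integrals, no function W can exist with dW = P dV.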

A more subtle example of an ungrady form is discussed in item 23 below.

21.    The exterior derivative d applied to a grade=1 or higher object obeys a product rule (as you would expect for a derivative) with anti-symmetry (as you would expect for a wedge product), namely

d(A ∧ B) = dA ∧ B + (−1)^k A ∧ dB              (11)

where A has grade=k.

22.    As a consequence of the definition in item 21, it turns out that the operator d applied twice to any potential f is necessarily zero. That is, ddf ≡ 0 for any f. Since f doesn’t matter, we can drop it and write the elegant operator equation:

dd = 0              (12)

This important result can be expressed in words: “the boundary of any boundary is zero”.

Before we explain why this is so, we should emphasize that dd is not the most general second-derivative operator. Rather, it is the antisymmetric part of the second derivative, in accordance with equation 11. So what we are saying is that the antisymmetric part of the second derivative vanishes.

The antisymmetric piece of the second derivative necessarily vanishes, because of the mathematically-guaranteed symmetry of mixed partial derivatives:

(∂/∂x)|y (∂/∂y)|x f  ≡  (∂/∂y)|x (∂/∂x)|y f              (13)

This is true for all f, assuming f is sufficiently differentiable, so that the indicated derivatives exist.
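The symmetry of mixed partials can be checked numerically. This sketch (Python, with an arbitrary smooth toy function) computes the mixed second derivative in both orders by nested central differences and compares against the analytic result:

```python
import math

def f(x, y):
    return math.sin(x * y) + x**2 * y   # any sufficiently smooth toy function

h = 1e-4

def ddx(g, x, y):
    """Central difference approximating (d/dx)|y of g."""
    return (g(x + h, y) - g(x - h, y)) / (2 * h)

def ddy(g, x, y):
    """Central difference approximating (d/dy)|x of g."""
    return (g(x, y + h) - g(x, y - h)) / (2 * h)

x0, y0 = 0.3, 0.8
mixed_xy = ddx(lambda x, y: ddy(f, x, y), x0, y0)   # first d/dy, then d/dx
mixed_yx = ddy(lambda x, y: ddx(f, x, y), x0, y0)   # first d/dx, then d/dy

# Analytic mixed partial of this f: cos(xy) - xy*sin(xy) + 2x
exact = math.cos(x0 * y0) - x0 * y0 * math.sin(x0 * y0) + 2 * x0
print(mixed_xy, mixed_yx, exact)   # all three agree
```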

Figure 10 and Figure 11 show what’s going on. We will use these figures to discuss finite steps (as indicated by Δ) instead of infinitesimals, but the same ideas apply in the limit of very small steps. In particular, Δx|y means to take a step toward increasing x along a contour of constant y. Similarly Δy|x means to take a step toward increasing y along a contour of constant x.

Figure 10: Boundary of a Boundary: X then Y
Figure 11: Boundary of a Boundary: Y then X

In accordance with the usual operator-precedence rules, the interpretation of the LHS of equation 13 is:

(∂/∂x)|y (∂/∂y)|x f      means      (∂/∂x)|y [ (∂/∂y)|x f ]
That is, we work from right to left, first taking a step toward increasing y along a contour of constant x, then taking a step toward increasing x along a contour of constant y. For example, in the figure, this would correspond to proceeding clockwise from (0,0) via (0,1) to (1,1).

Meanwhile, the RHS of equation 13 tells us to proceed counterclockwise in the figure, from (0,0) via (1,0) to (1,1). The point is that we get to the same point either way. That is, the clockwise trip we just took, together with the counterclockwise trip, form a “closed” figure.

This result is nontrivial. Although the boundary of a boundary is zero, the boundary of “something else” is not necessarily zero. For example:

       d(dS) = 0              (14)
       d(TdS) = dT ∧ dS + T ddS              (15)
   = dT ∧ dS
   ≠ 0 in general

23.    A form F is called closed if its exterior derivative dF vanishes. By equation 12, we know that every grady form is closed, but the converse does not hold. In a universe with periodic boundary conditions, such as the cylinder shown in figure 12, you can have a closed one-form that is not grady. The form points everywhere counterclockwise as you go around the universe. Rather than fish-scales, in this figure we use color-coded “contour” lines. The “contour” lines are shown with solid blue on their positive (“uphill”) side and dashed red on their negative (“downhill”) side. The form is in fact a constant everywhere on the cylinder, so it satisfies the boundary conditions and satisfies dF=0.
Figure 12: A Form that is Closed but Not Grady

Forms that are closed, including figure 7 and figure 12, have the property that the “contour” lines in one region mesh nicely with the lines in adjacent regions. In a non-closed form such as figure 9, the meshing fails somewhere. (Commonly it fails everywhere.)

Beware that this notion of “closed one-form” is not equivalent to the notion of “closed set” (containing its limit points) nor to the notion of “closed manifold” (compact without boundary). See reference 8 and reference 9.

24.    We want to be able to integrate our differential forms. For a grady form, this is easy. We postulate that


∫A→B dF = F(B) − F(A)              (16)

The meaning is simple: the integral measures the number of contours that you cross in going from point A to point B. For a grady 1-form, this number is independent of the path you take along the way from A to B.

This integral is, of course, a linear operator.
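Path-independence is easy to check numerically. The following sketch integrates dF along two different paths from A to B and compares against F(B) − F(A); the potential F and the paths are arbitrary choices for illustration:

```python
import numpy as np

# Potential F, with hand-computed partial derivatives; an arbitrary choice.
F  = lambda x, y: np.sin(x) * y + x**2
Fx = lambda x, y: np.cos(x) * y + 2.0 * x
Fy = lambda x, y: np.sin(x)

def integrate_dF(path, n=20000):
    """Integrate dF along path(t), t in [0,1], by the chain rule:
    integrand = F_x * dx/dt + F_y * dy/dt, then a trapezoid sum."""
    t = np.linspace(0.0, 1.0, n)
    x, y = path(t)
    g = Fx(x, y) * np.gradient(x, t) + Fy(x, y) * np.gradient(y, t)
    return np.sum(0.5 * (g[:-1] + g[1:]) * np.diff(t))

A, B = (0.0, 0.0), (1.0, 2.0)
straight = lambda t: (t * B[0], t * B[1])
wiggly   = lambda t: (t * B[0] + 0.3 * np.sin(np.pi * t),
                      t * B[1] - 0.5 * np.sin(2.0 * np.pi * t))

exact = F(*B) - F(*A)
print(abs(integrate_dF(straight) - exact) < 1e-4)  # True
print(abs(integrate_dF(wiggly)   - exact) < 1e-4)  # True
```

Both paths share the endpoints A and B, so both integrals agree with F(B) − F(A), as the postulate requires.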

25.    To integrate an ungrady 1-form, we need to specify the path. Let the general point on the path C be denoted C(θ), where θ is a parameter that varies smoothly and monotonically as we progress along the path. Recall that the general 1-form B can be written as a superposition:

B = fi(x) dxi              (17)

We are using the Einstein summation convention, i.e. implied summation over repeated indices, such as index i in this equation.

As explained in section 6, the integral of this is:


∫along C B = ∫ fi(C(θ)) (∂Ci/∂θ) dθ              (18)

6  Integrating a One-Form

6.1  Explanation

To understand how we integrate a one-form B along the curve C, start by breaking the curve into small segments and integrating each segment separately:


∫along C B = ∫along C fi(x) dxi
           = ∫θ1→θ2 fi dxi + ∫θ2→θ3 fi dxi + ∫θ3→θ4 fi dxi + ⋯              (19)

and if f is a sufficiently smooth function, if C is a sufficiently smooth curve, and if the points {θ1, θ2, ⋯} are sufficiently close together, then we can treat f as being locally constant and pull it out in front of the integrals:


∫along C B = fi(C(θ1)) ∫θ1→θ2 dxi + fi(C(θ2)) ∫θ2→θ3 dxi + ⋯              (20)

Now we have grady forms inside the integral, so we can integrate them immediately using equation 16. We get


∫along C B = fi(C(θ1)) [Ci(θ2) − Ci(θ1)]
           + fi(C(θ2)) [Ci(θ3) − Ci(θ2)]
           + fi(C(θ3)) [Ci(θ4) − Ci(θ3)]
           + ⋯              (21)

where we have described the point C(θ) using an expansion in terms of the basis vectors:

C(θ) = Ci(θ) xi              (22)

Equation 21 is beginning to look like a familiar Riemann integral. In fact it is just


∫along C B = ∫ fi(C(θ)) (∂Ci/∂θ) dθ              (23)
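The convergence of the finite sum in equation 21 to this Riemann integral can be checked numerically. In this sketch the one-form and the curve are illustrative choices; the form is grady, so the exact answer is available for comparison:

```python
import numpy as np

# Finite sum of equation 21 vs. the known answer, for an illustrative
# one-form B = x2 dx1 + x1 dx2 (the grady form d(x1*x2), so the exact
# integral is F(end) - F(start) with F = x1*x2).
def f(x1, x2):
    return np.array([x2, x1])            # components (f1, f2)

def C(theta):                            # the curve: an arc of the unit circle
    return np.array([np.cos(theta), np.sin(theta)])

theta = np.linspace(0.0, np.pi / 3.0, 200000)
pts   = C(theta)                         # shape (2, N)
fvals = f(pts[0, :-1], pts[1, :-1])      # f_i at the start of each segment
steps = np.diff(pts, axis=1)             # C_i(theta_{k+1}) - C_i(theta_k)
approx = np.sum(fvals * steps)           # sum over i and over segments k

start, end = C(0.0), C(np.pi / 3.0)
exact = end[0] * end[1] - start[0] * start[1]
print(abs(approx - exact) < 1e-4)  # True: the sum converges to the integral
```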

6.2  No Dot Product

In equation 23, do not think of the integrand as a dot product, even though it involves the same sum-of-products you would use for evaluating f · ∂C/∂θ. We do not have a dot product. The operation here is a contraction. A contraction involves a one-form acting on a pointy vector. In this case the one-form is f and the pointy vector is ∂C/∂θ. In equation 21, you can visualize [Ci(θ2) − Ci(θ1)] as a pointy vector with its tip at C(θ2) and its tail at C(θ1).

We can carry out the contraction of a one-form with a pointy vector. We cannot carry out the dot product of two one-forms, nor the dot product of two pointy vectors. Think of one-forms as 1×D matrices (one row and D columns) and pointy vectors as D×1 matrices.
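The row-times-column picture can be made concrete; the component values in this sketch are arbitrary:

```python
import numpy as np

# One-form as a 1×D row, pointy vector as a D×1 column; the numbers
# are arbitrary. The contraction is the matrix product, a 1×1 result.
row = np.array([[2.0, -1.0, 0.5]])      # one-form components
col = np.array([[1.0], [4.0], [2.0]])   # pointy-vector components

result = row @ col
print(result.shape)      # (1, 1)
print(result[0, 0])      # 2*1 + (-1)*4 + 0.5*2 = -1.0

# Note: row @ row or col @ col raises a shape error -- there is no dot
# product of two one-forms or of two pointy vectors in this picture.
```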

6.3  Example

As an example, consider integrating the one-form

f := (−x2/r) dx1 + (x1/r) dx2              (24)

where r := √(x1² + x2²). This one-form is depicted, with fair accuracy, in figure 9. We wish to integrate it along a curve C which is a circular path of radius R, centered on the origin, so that along C:

x1 = R cos(θ)
  x2 = R sin(θ)
  ∂C1/∂θ = −R sin(θ)
  ∂C2/∂θ = R cos(θ)

Plugging in to equation 23 we find


∫along C f = ∫0→2π [R sin²(θ) + R cos²(θ)] dθ
           = 2πR
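This loop integral is easy to check by quadrature. The sketch below evaluates the integrand of equation 23 for the one-form with components (−x2/r, x1/r) on the circular path; the value of R is an arbitrary choice:

```python
import numpy as np

# Quadrature check for the one-form with components (-x2/r, x1/r),
# integrated around a circle of radius R (here R = 2.5, an arbitrary choice).
R = 2.5
theta = np.linspace(0.0, 2.0 * np.pi, 100001)
x1, x2 = R * np.cos(theta), R * np.sin(theta)
r = np.sqrt(x1**2 + x2**2)               # equals R everywhere on this path

# Integrand of equation 23: f_i(C(theta)) * dC_i/dtheta
g = (-x2 / r) * (-R * np.sin(theta)) + (x1 / r) * (R * np.cos(theta))
integral = np.sum(0.5 * (g[:-1] + g[1:]) * np.diff(theta))

print(abs(integral - 2.0 * np.pi * R) < 1e-9)  # True: the loop integral is 2*pi*R
```

The nonzero loop integral confirms that this form, although defined everywhere on the path, is not grady: if it were, the integral around a closed loop would vanish.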

7  References

1. John Denker, “Visualizing A Field that is Not the Gradient of Any Potential”.

2. John Denker, “Psychrometric Charts, and the Evil of Axes”.

3. John Denker, “The Laws of Thermodynamics”.

4. Todd Rowland, “Exterior Derivative” in Mathworld (Eric W. Weisstein, ed.).

5. Steven S. Gubser, “Math for physicists: differential forms”. (Beware: at some points this assumes the existence of a dot product.)

6. John Denker, “Two Types of Vector : Physics and/or Components”.

7. John Denker, “Partial Derivatives – Pictorial Interpretation”.

8. “Closed Set” in Mathworld (Eric W. Weisstein, ed.).

9. “Closed Manifold” in Mathworld (Eric W. Weisstein, ed.).


Copyright © 2003 jsd