*Preview:* Introduction · Some Definitions · Deviation from Calorically Perfect Gas · Molecular Modes of Energy · Bosons and Fermions · Boltzmann Distribution · Evaluating the Partition Function · Thermodynamic Properties · Equilibrium Constant · Qualitative Discussion · Equilibrium Composition Calculation · Chemical Rate Equations · Elementary Reactions · References

Explosions, combustion, the searing temperatures associated with the very high speed flow of a gas — these are some examples of compressible flow where high temperature effects must be taken into account. For these and other situations, the assumption of a calorically perfect gas with constant specific heats is simply not good enough. We have to take into account chemical reactions occurring in the flow, and other physical phenomena that cause the specific heats to be variable.

Consider the atmospheric entry of the Apollo command vehicle upon return from the moon. At an altitude of approximately 53 km, the velocity of the vehicle is 11 km/s. As sketched in the following figure, a strong bow shock wave is wrapped around the blunt nose, and the shock layer between the shock and the body is relatively thin. Moreover, at a standard altitude of 53 km, the air temperature is 283 K and the resulting speed of sound is 338 m/s; hence the Mach number of the Apollo vehicle is 32.5, an extremely large hypersonic value. Using the theory for a calorically perfect gas, let us estimate the temperature in this shock layer. From the tables, for M = 32.5, $T_s/T_\infty = 206$, where $T_s$ is the static temperature behind the normal portion of the bow shock wave. Hence, $T_s = (206)(283) = 58{,}300$ K. This is an extremely high temperature; it is also completely incorrect. Long before this temperature is reached, the air molecules will dissociate and ionize. Indeed, the shock layer becomes a partially ionized plasma, where the specific heat of the gas is a strong function of both pressure and temperature. The assumption of a calorically perfect gas made above is completely inaccurate; when this chemically reacting gas is properly calculated, the shock layer temperature is on the order of 11,600 K, still a high value, but a factor of 5 less than the temperature predicted on the basis of a calorically perfect gas.
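As a rough numerical check, the calorically perfect estimate above can be reproduced from the standard normal-shock temperature-ratio relation with $\gamma = 1.4$ (a sketch using the flight data quoted in the text):

```python
# Calorically perfect normal-shock temperature ratio (gamma = 1.4),
# evaluated for the Apollo entry conditions quoted above.
def shock_temp_ratio(M, gamma=1.4):
    """T2/T1 across a normal shock for a calorically perfect gas."""
    num = (2 * gamma * M**2 - (gamma - 1)) * ((gamma - 1) * M**2 + 2)
    den = (gamma + 1)**2 * M**2
    return num / den

M = 11_000 / 338          # flight speed / speed of sound at 53 km altitude
ratio = shock_temp_ratio(M)
T_s = ratio * 283         # freestream temperature at 53 km, from the text
print(f"M = {M:.1f}, Ts/Tinf = {ratio:.0f}, Ts = {T_s:.0f} K")
```

Running this recovers a temperature ratio of about 206 and a shock-layer temperature near 58,000 K, consistent with the (physically incorrect) calorically perfect estimate in the text.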

The following figure compares the variation of shock layer temperature as a function of flight velocity for both the cases of a calorically perfect gas and an equilibrium chemically reacting gas. Also noted are typical reentry velocities for various space vehicles such as an intermediate range ballistic missile (IRBM), intercontinental ballistic missile (ICBM), earth orbital vehicles (e.g., Mercury and Gemini), lunar return vehicles (e.g., Apollo), and Mars return vehicles. Clearly, for all such cases, the assumption of a calorically perfect gas is not appropriate; the effects of chemical reactions must be taken into account.

For an equilibrium system of a real gas where intermolecular forces are important, and also for an equilibrium chemically reacting mixture of perfect gases, the internal energy is a function of both temperature and volume. Let $e$ denote the specific internal energy (internal energy per unit mass). The enthalpy $h$ is then defined, per unit mass, as $h=e+pv$, and we have

$e=e(T,v)$

$h=h(T,p)$

for both a real gas and a chemically reacting mixture of perfect gases.

If the gas is not chemically reacting, and if we ignore intermolecular forces, the resulting system is a thermally perfect gas, where internal energy and enthalpy are functions of temperature only, and where the specific heats at constant volume and pressure, $c_v$ and $c_p$, are also functions of temperature only:

$e=e(T)$

$h=h(T)$

$de=c_v dT$

$dh=c_p dT$

The temperature variation of $c_v$ and $c_p$ is associated with the vibrational and electronic motion of the particles.

If the specific heats are constant, the system is a calorically perfect gas, where

$e=c_v T$

$h=c_p T$

In the last equations, it has been assumed that $h=e=0$ at $T=0$. In many compressible flow applications, the pressures and temperatures are moderate enough that the gas can be considered to be calorically perfect.

There are two major physical characteristics which cause a high-temperature gas to deviate from calorically perfect gas behavior:

1. As the temperature of a diatomic or polyatomic gas is increased above standard conditions, the vibrational motion of the molecules will become important, absorbing some of the energy which otherwise would go into the translational and rotational molecular motion. As we shall soon see, the excitation of vibrational energy causes the specific heat to become a function of temperature, i.e., the gas gradually shifts from calorically perfect to thermally perfect.

2. As the gas temperature is further increased, the molecules will begin to dissociate (the atoms constituting the various molecules will break away from the molecular structure) and even ionize (electrons will break away from the atoms). Under these conditions, the gas becomes chemically reacting, and the specific heat becomes a function of both temperature and pressure. If we consider air at 1 atm, the approximate temperatures at which various reactions become important are illustrated in the following figure. If the gas is at lower pressure, these temperatures shift downward.

A molecule is a collection of atoms bound together by a rather complex intramolecular force. A simple concept of a diatomic molecule (two atoms) is the “dumbbell” model sketched in the following figure. This molecule has several modes (forms) of energy:

1. It is moving through space, and hence it has translational energy $\epsilon_{trans}^{'}$, as sketched in the following figure. The source of this energy is the translational kinetic energy of the center of mass of the molecule. Since the molecular translational velocity can be resolved into three components (such as $V_x$, $V_y$, and $V_z$ in the xyz cartesian space shown in the following figure), the molecule is said to have three “geometric degrees of freedom” in translation. Since motion along each coordinate direction contributes to the total kinetic energy, the molecule is also said to have three “thermal degrees of freedom.”

2. It is rotating about the three orthogonal axes in space, and hence it has rotational energy $\epsilon_{rot}^{'}$, as sketched in the following figure. The source of this energy is the rotational kinetic energy associated with the molecule’s rotational velocity and its moment of inertia. However, for the diatomic molecule shown in the following figure, the moment of inertia about the internuclear axis (the z axis) is very small, and therefore the rotational kinetic energy about the z axis is negligible in comparison to rotation about the x and y axes. Therefore, the diatomic molecule is said to have only two “geometric” as well as two “thermal” degrees of freedom. The same is true for a linear polyatomic molecule such as $CO_2$ shown in the following figure. However, for nonlinear molecules, such as $H_2O$ also shown in the following figure, the number of geometric (and thermal) degrees of freedom in rotation is three.

3. The atoms of the molecule are vibrating with respect to an equilibrium location within the molecule. For a diatomic molecule, this vibration is modeled by a spring connecting the two atoms, as illustrated in the following figure. Hence the molecule has vibrational energy $\epsilon_{vib}^{'}$. There are two sources of this vibrational energy: the kinetic energy of the linear motion of the atoms as they vibrate back and forth, and the potential energy associated with the intramolecular force (symbolized by the spring). Hence, although the diatomic molecule has only one geometric degree of freedom (it vibrates only along one direction, namely, that of the internuclear axis), it has two thermal degrees of freedom due to the contribution of both kinetic and potential energy. For polyatomic molecules, the vibrational motion is more complex, and numerous fundamental vibrational modes can occur, with a consequent large number of degrees of freedom.

4. The electrons are in motion about the nucleus of each atom constituting the molecule, as sketched in the following figure. Hence, the molecule has electronic energy $\epsilon_{el}^{'}$. There are two sources of electronic energy associated with each electron:

kinetic energy due to its translational motion throughout its orbit about the nucleus, and potential energy due to its location in the electromagnetic force field established principally by the nucleus. Since the overall electron motion is rather complex, the concepts of geometric and thermal degrees of freedom are usually not useful for describing electronic energy.

Therefore, we see that the total energy of a molecule, $\epsilon^{'}$, is the sum of its translational, rotational, vibrational, and electronic energies:

$\epsilon^{'}=\epsilon_{trans}^{'}+\epsilon_{rot}^{'}+\epsilon_{vib}^{'}+\epsilon_{el}^{'}$ (for molecules)

For a single atom, only the translational and electronic energies exist:

$\epsilon^{'}=\epsilon_{trans}^{'}+\epsilon_{el}^{'}$ (for atoms)

The results of quantum mechanics have shown that each of these energies is quantized, i.e., they can exist only at certain discrete values, as schematically shown in the following figure. This is a dramatic result. Intuition, based on our personal observations of nature, would tell us that at least the translational and rotational energies could be any value chosen from a continuous range of values (i.e., the complete real number system). However, our daily experience deals with the macroscopic, not the microscopic, world, and we should not always trust our intuition when extrapolated to the microscopic scale of molecules. A major benefit of quantum mechanics is that it correctly describes microscopic properties, some of which are contrary to intuition. In the case of molecular energy, all modes are quantized, even the translational mode. These quantized energy levels are symbolized by the ladder-type diagram shown in the following figure, with the vertical height of each level as a measure of its energy. Taking the vibrational mode as an example, the lowest possible vibrational energy is symbolized by $\epsilon_{O_{vib}}^{'}$. The next allowed quantized value is $\epsilon_{1_{vib}}^{'}$, then $\epsilon_{2_{vib}}^{'},\ldots,\epsilon_{i_{vib}}^{'},\ldots$, where $\epsilon_{i_{vib}}^{'}$ is the energy of the $i$th vibrational level. Note that, as illustrated in the following figure, the spacing between the translational energy levels is very small, and if we were to look at this translational energy level diagram from across the room, it would look almost continuous. The spacings between rotational energy levels are much larger than between the translational energies; moreover, the spacing between two adjacent rotational levels increases as the energy increases (as we go up the ladder in the following figure). The spacings between vibrational levels are much larger than between rotational levels; also, contrary to rotation, adjacent vibrational energy levels become more closely spaced as the energy increases. Finally, the spacings between electronic levels are considerably larger than between vibrational levels, and the difference between adjacent electronic levels decreases at higher electronic energies. The quantitative calculation of all these energies will be given later on.

Again examining the following figure, note that the lowest allowable energies are denoted by $\epsilon_{O_{trans}}^{'}$, $\epsilon_{O_{rot}}^{'}$, $\epsilon_{O_{vib}}^{'}$, and $\epsilon_{O_{el}}^{'}$. These levels are defined as the ground state for the molecule. They correspond to the energy that the molecule would have if the gas were theoretically at a temperature of absolute zero; hence the values are also called the zero-point energies for the translational, rotational, vibrational, and electronic modes, respectively. It will be shown in Sec. 16.7 that the rotational zero-point energy is precisely zero, whereas the zero-point energies for translation, vibration, and electronic motion are not. This says that, if the gas were theoretically at absolute zero, the molecules would still have some finite translational motion (albeit very small) as well as some finite vibrational motion. Moreover, it only makes common sense that some electronic motion should theoretically exist at absolute zero; otherwise the electrons would fall into the nucleus and the atom would collapse. Therefore, the total zero-point energy for a molecule is denoted by $\epsilon_{O}^{'}$, where

$\epsilon_{O}^{'}=\epsilon_{O_{trans}}^{'}+\epsilon_{O_{vib}}^{'}+\epsilon_{O_{el}}^{'}$

recalling that $\epsilon_{O_{rot}}^{'}=0$.

It is common to consider the energy of a molecule as measured above its zero-point energy. That is, we can define the translational, rotational, vibrational, and electronic energies all measured above the zero-point energy as $\epsilon_{j_{trans}}$, $\epsilon_{k_{rot}}$, $\epsilon_{l_{vib}}$, and $\epsilon_{m_{el}}$, respectively, where

$\epsilon_{j_{trans}}=\epsilon_{j_{trans}}^{'}-\epsilon_{O_{trans}}^{'}$

$\epsilon_{k_{rot}}=\epsilon_{k_{rot}}^{'}$

$\epsilon_{l_{vib}}=\epsilon_{l_{vib}}^{'}-\epsilon_{O_{vib}}^{'}$

$\epsilon_{m_{el}}=\epsilon_{m_{el}}^{'}-\epsilon_{O_{el}}^{'}$

(Note that the unprimed values denote energy measured above the zero-point value.)

In light of this, we can write the total energy of a molecule as $\epsilon_{i}^{'}$, where

$\epsilon_{i}^{'}=\underbrace{\epsilon_{j_{trans}}+\epsilon_{k_{rot}}+\epsilon_{l_{vib}}+\epsilon_{m_{el}}}_{\textrm{All are measured above the zero point energy, thus all are equal to zero at T=0 K}}+\underbrace{\epsilon_{O}^{'}}_{\textrm{zero-point energy}}$

For an atom, the total energy can be written as

$\epsilon_{i}^{'}=\epsilon_{j_{trans}}+\epsilon_{m_{el}}+\epsilon_{O}^{'}$

If we examine a single molecule at some given instant in time, we would see that it simultaneously has a zero-point energy $\epsilon_{O}^{'}$ (a fixed value for a given molecular species), a quantized electronic energy measured above the zero point, $\epsilon_{m_{el}}$, a quantized vibrational energy measured above the zero point, $\epsilon_{l_{vib}}$, and so forth for rotation and translation. The total energy of the molecule at this given instant is $\epsilon_{i}^{'}$. Since $\epsilon_{i}^{'}$ is the sum of individually quantized energy levels, $\epsilon_{i}^{'}$ itself is quantized. Hence, the allowable total energies can be given on a single energy level diagram, where $\epsilon_{O}^{'},\epsilon_{1}^{'},\epsilon_{2}^{'},...,\epsilon_{i}^{'},...$ are the quantized values of the total energy of the molecule.

In the above paragraphs, we have gone to some length to define and explain the significance of molecular energy levels. In addition to the concept of an energy level, we now introduce the idea of an energy state. For example, quantum mechanics identifies molecules not only with regard to their energies, but also with regard to angular momentum. Angular momentum is a vector quantity, and therefore has an associated direction. For example, consider the rotating molecule shown in following figure. Three different orientations of the angular momentum vector are shown; in each orientation, assume the energy of the molecule is the same. Quantum mechanics shows that molecular orientation is also quantized, i.e., it can point only in certain directions. In all three cases shown in following figure, the rotational energy is the same, but the rotational momentum has different directions. Quantum mechanics sees these cases as different and distinguishable states. Different states associated with the same energy level can also be defined for electron angular momentum, electron and nuclear spin, and the rather arbitrary lumping together of a number of closely spaced translational levels into one approximate “level” with many “states.”

In summary, we see that, for any given energy level $\epsilon_{i}^{'}$, there can be a number of different states that all have the same energy. This number of states is called the degeneracy or statistical weight of the given level $\epsilon_{i}^{'}$, and is denoted by $g_i$. This concept is exemplified in the following figure, which shows energy levels in the vertical direction, with the corresponding states as individual horizontal lines arrayed to the right at the proper energy value. For example, the second energy level is shown with five states, all with an energy value equal to $\epsilon_{2}^{'}$; hence, $g_2=5$. The values of $g_i$ for a given molecule are obtained from quantum theory and/or spectroscopic measurements.

Now consider a system consisting of a fixed number of molecules, N. Let $N_j$ be the number of molecules in a given energy level $\epsilon_{j}^{'}$. This value $N_j$ is defined as the population of the energy level. Obviously,

$N=\sum_{j} N_j$

where the summation is taken over all energy levels. The different values of $N_j$ associated with the different energy levels $\epsilon_{j}^{'}$ form a set of numbers which is defined as the population distribution. If we look at our system of molecules at one instant in time, we will see a given set of $N_j$'s, i.e., a certain population distribution over the energy levels. Another term for this set of numbers, synonymous with population distribution, is macrostate. Due to molecular collisions, some molecules will change from one energy level to another. Hence, when we look at our system at some later instant in time, there may be a different set of $N_j$'s, and hence a different population distribution, or macrostate. Finally, let us denote the total energy of the system as E, where

$E=\sum_{j} \epsilon_{j}^{'}N_j$
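A minimal numerical illustration of these two sums, using a hypothetical population distribution (the level energies and populations below are invented, in arbitrary units):

```python
# Bookkeeping for one macrostate: populations N_j over energy levels eps_j'.
eps = [0.0, 1.0, 2.0, 3.0]   # hypothetical level energies eps_j' (arbitrary units)
N_j = [6, 3, 1, 0]           # hypothetical populations: one particular macrostate

N = sum(N_j)                               # N = sum_j N_j
E = sum(e * n for e, n in zip(eps, N_j))   # E = sum_j eps_j' * N_j
print(N, E)
```

Collisions redistribute molecules among the levels, changing the set of $N_j$'s (the macrostate) while $N$ and $E$ stay fixed for an isolated system.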

The schematic in the following figure reinforces the above definitions. For a system of N molecules and energy E, we have a series of quantized energy levels $\epsilon_{O}^{'},\epsilon_{1}^{'},\epsilon_{2}^{'},...,\epsilon_{j}^{'},...$, with corresponding statistical weights $g_{O},g_{1},g_{2},...,g_{j},...$. At some given instant, the molecules are distributed over the energy levels in a distinct way, $N_{O},N_{1},N_{2},...,N_{j},...$, constituting a distinct macrostate. In the next instant, due to molecular collisions, the populations of some levels may change, creating a different set of $N_j$'s, and hence a different macrostate.

Over a period of time, one particular macrostate, i.e., one specific set of $N_j$'s, will occur much more frequently than any other. This particular macrostate is called the most probable macrostate (or most probable distribution). It is the macrostate which occurs when the system is in thermodynamic equilibrium. In fact, this is the definition of thermodynamic equilibrium within the framework of statistical mechanics. The central problem of statistical thermodynamics, and the one to which we will now address ourselves, is as follows:

Given a system with a fixed number of identical particles, $N=\sum_{j} N_j$, and fixed energy $E=\sum_{j} \epsilon_{j}^{'}N_j$, find the most probable macrostate.

In order to solve the above problem, we need one additional definition, namely, that of a microstate. Consider the schematic shown in the following figure, which illustrates a given macrostate (for purposes of illustration, we choose $N_0=2$, $N_1=5$, $N_2=3$, etc.). Here, we display each statistical weight for each energy level as a vertical array of boxes. For example, under $\epsilon_{1}^{'}$ we have $g_1=6$, and hence six boxes, one for each different energy state with the same energy $\epsilon_{1}^{'}$. In the energy level $\epsilon_{1}^{'}$, we have five molecules ($N_1=5$). At some instant in time, these five molecules individually occupy the top three and lower two boxes under $g_1$, with the fourth box left vacant (i.e., no molecules at that instant have the energy state represented by the fourth box). The way that the molecules are distributed over the available boxes defines a microstate of the system, say microstate I as shown in the following figure. At some later instant, the $N_1=5$ molecules may be distributed differently over the $g_1=6$ states, say leaving the second box vacant. This represents another, different microstate, labeled microstate II in the following figure. Shifts over the other vertical arrays of boxes between microstates I and II are also shown in the following figure. However, in both cases, $N_0$ still equals 2, $N_1$ still equals 5, etc. — i.e., the macrostate is still the same. Thus, any one macrostate can have a number of different microstates, depending on which of the degenerate states (the boxes in the following figure) are occupied by the molecules. In any given system of molecules, the microstates are constantly changing due to molecular collisions. Indeed, it is a central assumption of statistical thermodynamics that each microstate of a system occurs with equal probability.
Therefore, it is easy to reason that the most probable macrostate is that macrostate which has the maximum number of microstates. If each microstate appears in the system with equal probability and there is one particular macrostate that has considerably more microstates than any other, then that is the macrostate we will see in the system most of the time. This is indeed the situation in most real thermodynamic systems.

Following figure is a schematic which plots the number of microstates in different macrostates. Note there is one particular macrostate, namely, macrostate D, that stands out as having by far the largest number of microstates. This is the most probable macrostate; this is the macrostate that is usually seen, and constitutes the situation of thermodynamic equilibrium in the system. Therefore, if we can count the number of microstates in any given macrostate, we can easily identify the most probable macrostate.

Molecules and atoms are constituted from elementary particles—electrons, protons, and neutrons. Quantum mechanics makes a distinction between two different classes of molecules and atoms, depending on their number of elementary particles, as follows:

1. Molecules and atoms with an even number of elementary particles obey a certain statistical distribution called Bose-Einstein statistics. Let us call such molecules or atoms Bosons.

2. Molecules and atoms with an odd number of elementary particles obey a different statistical distribution called Fermi-Dirac statistics. Let us call such molecules or atoms Fermions.

There is an important distinction between the above two classes, as follows:

1. For Bosons, the number of molecules that can be in any one degenerate state is unlimited (except, of course, that it must be less than or equal to $N_j$).

2. For Fermions, only one molecule may be in any given degenerate state at any instant.

This distinction has a major impact on the counting of microstates in a gas.

First, let us consider Bose-Einstein statistics:

$W=\prod_{j}{\frac{(N_j+g_j-1)!}{(g_j-1)!N_j!}}$

where $W$ denotes the total number of microstates for a given macrostate.

For Fermions, we have

$W=\prod_{j}{\frac{g_j!}{(g_j-N_j)!N_j!}}$
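Both counting formulas reduce to products of binomial coefficients, which makes them easy to evaluate directly. A minimal sketch (the level populations and statistical weights below are hypothetical examples, not data from the text):

```python
from math import comb, prod

def W_bose_einstein(N, g):
    # Each level j contributes (N_j + g_j - 1)! / ((g_j - 1)! N_j!)
    # = C(N_j + g_j - 1, N_j): distribute N_j indistinguishable
    # molecules over g_j states with unlimited occupancy per state.
    return prod(comb(Nj + gj - 1, Nj) for Nj, gj in zip(N, g))

def W_fermi_dirac(N, g):
    # Each level j contributes g_j! / ((g_j - N_j)! N_j!)
    # = C(g_j, N_j): at most one molecule per degenerate state.
    return prod(comb(gj, Nj) for Nj, gj in zip(N, g))

# Hypothetical 2-level macrostate: N0 = 2 molecules in a level with
# g0 = 3 states, and N1 = 1 molecule in a level with g1 = 2 states.
N, g = [2, 1], [3, 2]
print(W_bose_einstein(N, g))   # C(4,2)*C(2,1) = 6*2 = 12
print(W_fermi_dirac(N, g))     # C(3,2)*C(2,1) = 3*2 = 6
```

Note that the Fermion count is smaller, as expected: the exclusion of multiple occupancy removes microstates.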

At very low temperature, say less than 5 K, the molecules of the system are jammed together at or near the ground energy levels, and therefore the degenerate states of these low-lying levels are highly populated. As a result, the differences between Bose-Einstein and Fermi-Dirac statistics are important. In contrast, at higher temperatures, the molecules are distributed over many energy levels, and therefore the states are generally sparsely populated. Hence, the high-temperature case is a limiting case. This limiting case is called the Boltzmann limit or Boltzmann distribution. Since gas dynamic problems generally deal with temperatures far above 5 K, the Boltzmann distribution is appropriate for all our future considerations.

As we derived in class, the Boltzmann distribution is

$N_{j}^{*}=N\frac{g_{j}e^{\frac{-\epsilon_{j}}{kT}}}{Q}$

$N_{j}^{*}$ corresponds to the most probable distribution of particles over the energy levels $\epsilon_{j}^{'}$. $Q$ is called the partition function and is defined as:

$Q=\sum_{j}{g_{j}e^{\frac{-\epsilon_{j}}{kT}}}$

The partition function is a very useful quantity in statistical thermodynamics, as we will soon appreciate. Moreover, it is a function of the volume as well as the temperature of the system, as will be demonstrated later:

$Q=f(T,V)$

In summary, the Boltzmann distribution is extremely important. It should be interpreted as follows. For molecules or atoms of a given species, quantum mechanics says that a set of well-defined energy levels $\epsilon_{j}^{'}$ exists, over which the molecules or atoms can be distributed at any given instant, and that each energy level has a certain number of degenerate states, $g_j$. For a system of N molecules or atoms at a given T and V, the Boltzmann distribution tells us how many such molecules or atoms, $N_{j}^{*}$, are in each energy level $\epsilon_{j}^{'}$ when the system is in thermodynamic equilibrium.
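As an illustrative sketch, the Boltzmann distribution can be evaluated numerically. The two-level system below is invented for illustration (a ground level and one excited level lying $kT$ above it):

```python
import math

k = 1.380649e-23  # Boltzmann constant, J/K

def boltzmann_populations(N, levels, T):
    """Most probable populations N_j* = N g_j exp(-eps_j / kT) / Q,
    for levels given as (g_j, eps_j) pairs, eps measured above the zero point."""
    Q = sum(g * math.exp(-eps / (k * T)) for g, eps in levels)
    return [N * g * math.exp(-eps / (k * T)) / Q for g, eps in levels]

# Hypothetical two-level system at T = 300 K: ground level (g = 1, eps = 0)
# and an excited level (g = 2) exactly kT above it.
T = 300.0
levels = [(1, 0.0), (2, k * T)]
pops = boltzmann_populations(1000, levels, T)
print(pops)
```

By construction the populations sum to $N$; the exponential factor favors the lower level even though the upper level has twice the degeneracy.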

As we derived in class, the thermodynamic properties can be obtained from the partition function. Here are some of the quantities that follow from it:

$e=RT^{2}\left(\frac{\partial \ln Q}{\partial T}\right)_V$

$h=RT+RT^{2}\left(\frac{\partial \ln Q}{\partial T}\right)_V$

$S=Nk\left[\ln\left(\frac{Q}{N}\right)+1\right]+NkT\left(\frac{\partial \ln Q}{\partial T}\right)_V$

$p=NkT\left(\frac{\partial \ln Q}{\partial V}\right)_T$

In all of these equations, Q is the key factor. If Q can be evaluated as a function of V and T, the thermodynamic state variables can then be calculated.
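As a sketch of this procedure, we can recover the translational internal energy from $e=RT^{2}(\partial \ln Q/\partial T)_V$ by numerically differentiating $\ln Q_{trans}$ (the closed-form $Q_{trans}$ is derived later in this section; the molecular mass below is an approximate value for $N_2$, an assumption for illustration):

```python
import math

k = 1.380649e-23    # Boltzmann constant, J/K
h = 6.62607015e-34  # Planck constant, J s

def lnQ_trans(T, m, V):
    # ln of the translational partition function (2*pi*m*k*T/h^2)^(3/2) * V
    return 1.5 * math.log(2 * math.pi * m * k * T / h**2) + math.log(V)

def e_from_Q(lnQ, T, R, dT=1e-3, **kw):
    # e = R T^2 (d lnQ / dT)_V, via a central finite difference
    dlnQ_dT = (lnQ(T + dT, **kw) - lnQ(T - dT, **kw)) / (2 * dT)
    return R * T**2 * dlnQ_dT

m_N2 = 4.65e-26          # mass of one N2 molecule, kg (approximate)
R_N2 = k / m_N2          # specific gas constant, J/(kg K)
T = 500.0
e = e_from_Q(lnQ_trans, T, R_N2, m=m_N2, V=1.0)
# numerically recovers the closed-form result e_trans = (3/2) R T
```

Since $\ln Q_{trans}$ varies as $\frac{3}{2}\ln T$, the derivative is $\frac{3}{2T}$ and the computed $e$ matches $\frac{3}{2}RT$ to numerical precision.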

Since the partition function is defined as

$Q=\sum_{j}{g_{j}e^{\frac{-\epsilon_{j}}{kT}}}$

we need expressions for the energy levels $\epsilon_{j}$ in order to further evaluate Q. The quantized levels for translational, rotational, vibrational, and electronic energies are given by quantum mechanics. We state these results without proof here; see the classic books by Herzberg for details.

Recall that the total energy of a molecule is

$\epsilon^{'}=\epsilon_{trans}^{'}+\epsilon_{rot}^{'}+\epsilon_{vib}^{'}+\epsilon_{el}^{'}$

In this equation, from quantum mechanics,

$\epsilon_{trans}^{'}=\frac{h^2}{8m}(\frac{n_{1}^{2}}{a_{1}^{2}}+\frac{n_{2}^{2}}{a_{2}^{2}}+\frac{n_{3}^{2}}{a_{3}^{2}})$

where $n_1$, $n_2$, $n_3$ are quantum numbers that can take the integral values 1, 2, 3, etc., and $a_1$, $a_2$, $a_3$ are linear dimensions which describe the size of the system. The values of $a_1$, $a_2$, and $a_3$ can be thought of as the lengths of three sides of a rectangular box. (Also note here that h denotes Planck’s constant, not enthalpy as before. In order to preserve standard nomenclature in both gas dynamics and quantum mechanics, we will live with this duplication. It will be clear which quantity is being used in our future expressions.) Also,

$\epsilon_{rot}^{'}=\frac{h^2}{8\pi^{2}I}J(J+1)$

where $J$ is the rotational quantum number, J = 0, 1, 2, etc., and I is the moment of inertia of the molecule. For vibration,

$\epsilon_{vib}^{'}=hv(n+\frac{1}{2})$

where n is the vibrational quantum number, n = 0, 1, 2, etc., and v is the fundamental vibrational frequency of the molecule. For the electronic energy, no simple expression can be written, and hence it will continue to be expressed simply as $\epsilon_{el}^{'}$.

In these expressions, I and v for a given molecule are usually obtained from spectroscopic measurements. Also note that $\epsilon_{trans}^{'}$ depends on the size of the system through $a_1$, $a_2$, and $a_3$, whereas $\epsilon_{rot}^{'}$, $\epsilon_{vib}^{'}$, and $\epsilon_{el}^{'}$ do not. Because of this spatial dependence of $\epsilon_{trans}^{'}$, Q depends on V as well as T. Finally, note that the lowest quantum number defines the zero-point energy for each mode, and from the above expressions, the zero-point energy for rotation is precisely zero, whereas it is a finite value for the other modes. For example,

$\epsilon_{trans_O}^{'}=\frac{h^2}{8m}(\frac{1}{a_{1}^{2}}+\frac{1}{a_{2}^{2}}+\frac{1}{a_{3}^{2}})$

$\epsilon_{rot_O}^{'}=0$

$\epsilon_{vib_O}^{'}=\frac{1}{2}hv$

In these equations, $\epsilon_{trans_O}^{'}$ is very small, but it is finite. In contrast, $\epsilon_{vib_O}^{'}$ is a larger finite value, and $\epsilon_{el_O}^{'}$, although we do not have an expression for it, is larger yet.

Let us now consider the energy measured above the zero point:

$\epsilon_{trans}=\epsilon_{trans}^{'}-\epsilon_{trans_O}^{'}\approx\frac{h^2}{8m}(\frac{n_{1}^{2}}{a_{1}^{2}}+\frac{n_{2}^{2}}{a_{2}^{2}}+\frac{n_{3}^{2}}{a_{3}^{2}})$

(Here, we are neglecting the small but finite value of $\epsilon_{trans_O}^{'}$.)

$\epsilon_{rot}=\epsilon_{rot}^{'}-\epsilon_{rot_O}^{'}=\frac{h^2}{8\pi^{2}I}J(J+1)$

$\epsilon_{vib}=\epsilon_{vib}^{'}-\epsilon_{vib_O}^{'}=nhv$

$\epsilon_{el}=\epsilon_{el}^{'}-\epsilon_{el_O}^{'}$

Therefore, the total energy is

$\epsilon^{'}=\epsilon_{trans}+\epsilon_{rot}+\epsilon_{vib}+\epsilon_{el}+\epsilon_{O}^{'}$

Now, let us consider the total energy measured above the zero point, $\epsilon$, where

$\epsilon=\epsilon^{'}-\epsilon_{O}^{'}=\epsilon_{trans}+\epsilon_{rot}+\epsilon_{vib}+\epsilon_{el}$

Recall that Q is defined in terms of the sensible energy, i.e., the energy measured above zero point:

$Q=\sum_{j}{g_{j}e^{\frac{-\epsilon_j}{kT}}}$

where $\epsilon_j=\epsilon_{i_{trans}}+\epsilon_{J_{rot}}+\epsilon_{n_{vib}}+\epsilon_{l_{el}}$

Hence

$Q=[\sum_{i}{g_{i}exp(\frac{-\epsilon_{i_{trans}}}{kT})}][\sum_{J}{g_{J}exp(\frac{-\epsilon_{J_{rot}}}{kT})}][\sum_{n}{g_{n}exp(\frac{-\epsilon_{n_{vib}}}{kT})}][\sum_{l}{g_{l}exp(\frac{-\epsilon_{l_{el}}}{kT})}]$

Note that the sums in each of the parentheses are partition functions for each mode of energy. Thus, this equation can be written as

$Q=Q_{trans}Q_{rot}Q_{vib}Q_{el}$

As we derived in class, we have

$Q_{trans}=(\frac{2\pi mkT}{h^2})^{3/2}V$

$Q_{rot}=\frac{8\pi^{2}IkT}{h^2}$

$Q_{vib}=\frac{1}{1-e^{\frac{-hv}{kT}}}$
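A quick numerical sketch of $Q_{vib}$ in its two limits. The fundamental frequency used below is an approximate value for $N_2$, an assumption introduced here for illustration (it is not given in the text):

```python
import math

k = 1.380649e-23    # Boltzmann constant, J/K
h = 6.62607015e-34  # Planck constant, J s

def Q_vib(T, nu):
    # Vibrational partition function, energies measured above the zero point.
    return 1.0 / (1.0 - math.exp(-h * nu / (k * T)))

nu = 7.06e13  # fundamental vibrational frequency of N2, Hz (approximate)

# At low T the mode is "frozen": Q_vib is close to 1 (only the ground
# level is populated). At high T, Q_vib grows toward the classical
# limit kT/(h*nu).
lo, hi = Q_vib(300.0, nu), Q_vib(30000.0, nu)
print(lo, hi)
```

With this frequency, $hv/k$ is about 3400 K, so at room temperature $Q_{vib}\approx 1$, while at 30,000 K the sum over many populated levels gives $Q_{vib}$ of order 10.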

To evaluate the electronic partition function, no closed-form expression analogous to the above results is possible. Rather, the definition is used, namely

$Q_{el}=\sum_{l=0}^{\infty}g_{l}e^{\frac{-\epsilon_{l}}{kT}}=g_{O}+g_{1}e^{\frac{-\epsilon_{1}}{kT}}+g_{2}e^{\frac{-\epsilon_{2}}{kT}}+...$

where spectroscopic data for the electronic energy levels $\epsilon_1$, $\epsilon_2$, etc., are inserted directly in the above terms. Usually, $\epsilon_l$ for the higher electronic energy levels is so large that terms beyond the first three shown in the above equation can be neglected for T < 15,000 K.

We now arrive at the evaluation of the high-temperature thermodynamic properties of a single-species gas. We will emphasize the specific internal energy $e$; other properties are obtained in an analogous manner.

First, consider the translational energy. As we already mentioned:

$e=RT^{2}\left(\frac{\partial \ln Q}{\partial T}\right)_V$

therefore, we obtain:

$e_{trans}=\frac{3}{2}RT$

$e_{rot}=RT$

$e_{vib}=\frac{\frac{hv}{kT}}{e^{\frac{hv}{kT}}-1}RT$

Let us examine these results in light of a classical theorem from kinetic theory, the “theorem of equipartition of energy.” Established before the turn of the century, this theorem states that each thermal degree of freedom of the molecule contributes $\frac{1}{2}kT$ to the energy of each molecule, or $\frac{1}{2}RT$ to the energy per unit mass of gas. For example, we demonstrated that the translational motion of a molecule or atom contributes three thermal degrees of freedom; hence, due to equipartition of energy, the translational energy per unit mass should be $3\times\frac{1}{2}RT=\frac{3}{2}RT$. This is precisely the result we obtained from the modern principles of statistical thermodynamics. Similarly, for a diatomic molecule, the rotational motion contributes two thermal degrees of freedom; therefore, classically, $2\times\frac{1}{2}RT=RT$, which is in precise agreement with what we obtained.

At this stage, you might be wondering why we have gone to all the trouble of the preceding sections if the principle of equipartition of energy gives us the results so simply. Indeed, extending this idea to the vibrational motion of a diatomic molecule, we would expect the two vibrational thermal degrees of freedom to yield $e_{vib}=2(\frac{1}{2}RT)=RT$. However, this does not agree with what we obtained. Indeed, the factor $\frac{\frac{hv}{kT}}{e^{\frac{hv}{kT}}-1}$ is less than unity except as $T\to\infty$, when it approaches unity; thus, in general, $e_{vib}<RT$, in conflict with classical theory. This conflict was recognized by scientists at the turn of the century, but it required the development of quantum mechanics in the 1920s to resolve the problem. Classical results are based on our macroscopic observations of the physical world, and they do not necessarily describe phenomena in the microscopic world of molecules. This is a major distinction between classical and quantum mechanics. As a result, the equipartition-of-energy principle is misleading. Instead, the expression we obtained from quantum considerations is the proper one for vibrational energy.
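The quantum correction factor is easy to tabulate. A minimal Python sketch, using a characteristic vibrational temperature $\theta_v = hv/k \approx 3390$ K (a commonly quoted value for $N_2$, used here purely as an illustration), shows that the factor is tiny at room temperature and approaches unity only at very high T:

```python
import math

def evib_over_RT(theta_v, T):
    """The vibrational-energy factor (hv/kT)/(exp(hv/kT) - 1),
    written in terms of the characteristic temperature theta_v = hv/k."""
    x = theta_v / T
    return x / (math.exp(x) - 1.0)

# At 300 K vibration is essentially frozen; at 30,000 K the factor
# approaches the classical equipartition value of unity.
for T in (300.0, 1000.0, 3000.0, 30000.0):
    print(T, evib_over_RT(3390.0, T))
```

This is exactly the sense in which classical equipartition fails: $e_{vib}$ equals $RT$ only in the limit of infinite temperature.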

In summary, we have for atoms:

$e=\frac{3}{2}RT+e_{el}$

and for molecules:

$e=\frac{3}{2}RT+RT+\frac{\frac{hv}{kT}}{e^{\frac{hv}{kT}}-1}RT+e_{el}$

In addition, recalling the specific heat at constant volume $c_{v}=(\frac{\partial e}{\partial T})_V$, for atoms will be

$c_{v}=\frac{3}{2}R+\frac{\partial e_{el}}{\partial T}$

and for molecules will be

$c_{v}=\frac{3}{2}R+R+\frac{(\frac{hv}{kT})^{2}e^{\frac{hv}{kT}}}{(e^{\frac{hv}{kT}}-1)^2}R+\frac{\partial e_{el}}{\partial T}$
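The diatomic $c_v$ expression above can be evaluated numerically. The short sketch below (again taking $\theta_v = hv/k \approx 3390$ K as an illustrative value for $N_2$, and neglecting the electronic term) shows the transition from $\frac{5}{2}R$ at room temperature toward the limiting value $\frac{7}{2}R$:

```python
import math

def cv_over_R(theta_v, T):
    """cv/R for a diatomic molecule: translation (3/2) + rotation (1)
    + the vibrational term (x^2 e^x)/(e^x - 1)^2 with x = theta_v/T.
    The electronic contribution is neglected in this sketch."""
    x = theta_v / T
    vib = (x * x * math.exp(x)) / (math.exp(x) - 1.0) ** 2
    return 1.5 + 1.0 + vib

print(cv_over_R(3390.0, 300.0))    # near 2.5: vibration frozen
print(cv_over_R(3390.0, 30000.0))  # near 3.5: vibration fully excited
```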

In light of the above results, we are led to the following important conclusions:

1. We note that both $e$ and $c_v$ are functions of T only. This is the case for a thermally perfect, non-reacting gas, for which

$e=f_{1}(T)$ and $c_{v}=f_{2}(T)$

This result, obtained from statistical thermodynamics, is a consequence of our assumption that the molecules are independent (no intermolecular forces) during the counting of microstates, and that each microstate occurs with equal probability. If we included intermolecular forces, such would not be the case.

2. For a gas with only translational and rotational energy, we have

$c_{v}=\frac{3}{2}R$ (for atoms)

$c_{v}=\frac{5}{2}R$ (for diatomic molecules)

That is, $c_v$ is constant. This is the case of a calorically perfect gas. For air at or around room temperature, $c_{v}=\frac{5}{2}R$, $c_{p} = c_{v} + R = \frac{7}{2}R$, and hence $\gamma = c_{p}/c_{v} = 1.4 = const$. So we see that air under normal conditions has translational and rotational energy, but no significant vibrational energy, and that the results of statistical thermodynamics predict $\gamma=1.4 = const$. However, when the air temperature reaches 600 K or higher, vibrational energy is no longer negligible. Under these conditions, we say that “vibration is excited”; consequently $c_v= f(T)$, and $\gamma$ is no longer constant. For air at such temperatures, the “constant $\gamma$” results are no longer strictly valid. Instead, we have to redevelop our gas dynamics using results for a thermally perfect gas.

3. In the theoretical limit of $T \to \infty$, the equation we derived predicts $c_{v}=\frac{7}{2}R$, and again we would expect $c_v$ to be a constant. However, long before this would occur, the gas would dissociate and ionize due to the high temperature, and $c_v$ would vary due to chemical reactions.

4. Note that the equations we derived give the internal energy measured above the zero point. Indeed, statistical thermodynamics can only calculate the sensible energy or enthalpy; an absolute calculation of the total energy is not possible because we cannot in general calculate values for the zero-point energy. The zero-point energy remains a useful theoretical concept, especially for chemically reacting gases, but not one for which we can obtain an absolute numerical value.

5. The theoretical variation of $c_v$ for air as a function of temperature is sketched in the following figure. This sketch is qualitative only, and is intended to show that, at very low temperatures (below 1 K), only translation is fully excited, and hence $c_{v}=\frac{3}{2}R$. (We are assuming here that the gas does not liquefy at low temperatures.) Between 1 K and 3 K, rotation comes into play, and above 3 K both rotation and translation are fully excited, so $c_{v}=\frac{5}{2}R$. Then, above 600 K, vibration comes into play, and $c_v$ is a variable up to approximately 2000 K. Above that temperature, chemical reactions begin to occur, and $c_v$ experiences large variations, as will be discussed later. The shaded region in the figure illustrates the regime where all our previous gas dynamic results assuming a calorically perfect gas are valid. The purpose of the remaining discussion is to explore the high-temperature regime where $\gamma$ is no longer constant, and where vibrational and chemical reaction effects become important.

The theory and results obtained up to here apply to a single chemical species. However, most high-temperature gases of interest are mixtures of several species. Let us now consider the statistical thermodynamics of a mixture of gases; the results obtained in this section represent an important ingredient for our subsequent discussions on equilibrium chemically reacting gases.

First, consider a gas mixture composed of three arbitrary chemical species A, B, and AB. The chemical equation governing a reaction between these species is

$AB \rightleftharpoons A+B$

Assume that the mixture is confined in a given volume at a given constant pressure and temperature. We assume that the system has existed long enough for the composition to become fixed, i.e., the above reaction is taking place an equal number of times in both the forward and reverse directions (the forward and reverse reactions are balanced). This is the case of chemical equilibrium. Therefore, let $N^{AB}$, $N^A$, and $N^B$ be the number of AB, A, and B particles, respectively, in the mixture at chemical equilibrium. Moreover, the A, B, and AB particles each have their own set of energy levels, populations, and degeneracies.

A schematic of the energy levels is given in the following figure. Recall that, in most cases, we do not know the absolute values of the zero-point energies, but in general we know that $\epsilon_{0}^{'A}\not=\epsilon_{0}^{'B}\not=\epsilon_{0}^{'AB}$. Therefore, the three energy-level ladders shown in the following figure are at different heights.

However, it is possible to find the change in zero-point energy for the reaction

$\underbrace{AB}_{Reactant} \rightleftharpoons \underbrace{A+B}_{Products}$

$[\textrm{Zero-point energy change}]\equiv[\textrm{Products' zero-point energy}]-[\textrm{Reactants' zero-point energy}]$

$\Delta\epsilon_0=(\epsilon_{0}^{'A}+\epsilon_{0}^{'B})-\epsilon_{0}^{'AB}$

This relationship is illustrated in the following figure

The equilibrium mixture of A, B, and AB particles has two constraints:

1- The total energy E is constant:

$E^A=\sum_{j}N_{j}^{A}\epsilon_{j}^{'A}=\sum_{j}N_{j}^{A}(\epsilon_{j}^{A}+\epsilon_{0}^{'A})$

$E^B=\sum_{j}N_{j}^{B}\epsilon_{j}^{'B}=\sum_{j}N_{j}^{B}(\epsilon_{j}^{B}+\epsilon_{0}^{'B})$

$E^{AB}=\sum_{j}N_{j}^{AB}\epsilon_{j}^{'AB}=\sum_{j}N_{j}^{AB}(\epsilon_{j}^{AB}+\epsilon_{0}^{'AB})$

$E=E^{A}+E^{B}+E^{AB}=const$

2- The total number of A particles, $N_A$, both free and combined (as in AB), must be constant. This is essentially the same as saying that the total number of A nuclei stays the same, whether in the form of pure A or combined in AB. We are not considering nuclear reactions here, only chemical reactions, which rearrange the electron structure. Similarly, the total number of B particles, $N_B$, both free and combined, must also be constant:

$\sum_{j}N_{j}^{A}+\sum_{j}N_{j}^{AB}=N_{A}=const$

$\sum_{j}N_{j}^{B}+\sum_{j}N_{j}^{AB}=N_{B}=const$

To obtain the properties of the system in chemical equilibrium, we must find the most probable macrostate of the system, much the same way as we proceeded before for a single species. The theme is the same; only the details are different. From this statistical thermodynamic treatment of the mixture, we find

$N_{j}^{A}=N^A\frac{g_{j}^{A}e^\frac{-\epsilon_{j}^A}{kT}}{Q^A}$

$N_{j}^{B}=N^B\frac{g_{j}^{B}e^\frac{-\epsilon_{j}^{B}}{kT}}{Q^B}$

$N_{j}^{AB}=N^{AB}\frac{g_{j}^{AB}e^\frac{-\epsilon_{j}^{AB}}{kT}}{Q^{AB}}$

Recall that $N^A$, $N^B$, and $N^{AB}$ are the actual numbers of A, B, and AB particles present in the mixture; do not confuse these with $N_A$ and $N_B$, which were defined as the numbers of A and B nuclei. The last three equations demonstrate that a Boltzmann distribution exists independently for each of the three chemical species. Also, we can derive the following equation:

$\frac{N^{A}N^{B}}{N^{AB}}=e^\frac{-\Delta\epsilon_0}{kT}\frac{Q^{A}Q^{B}}{Q^{AB}}$

The last equation gives some information on the relative amounts of A, B, and AB in the mixture. It is called the law of mass action, and it relates the amounts of the different species to the change in zero-point energy, $\Delta\epsilon_0$, and to the ratio of the partition functions of the species.

For gas dynamic calculations, there is a more useful form of the last equation, obtained as follows. We can write the perfect gas equation of state for the mixture as

$pV=NkT$

For each species i, the partial pressure $p_i$, can be written as

$p_{i}V=N_{i}kT$

The partial pressure is defined by the last equation. It is the pressure that would exist if $N_i$ particles of species i were the only matter filling the volume V. Letting $N_i$ equal $N^A$, $N^B$, and $N^{AB}$, respectively, and defining the corresponding partial pressures $p_A$, $p_B$, and $p_{AB}$, after simplification we obtain

$\frac{p_{A}p_{B}}{p_{AB}}=f(T)$

This function of temperature is defined as the equilibrium constant for the reaction

$AB \rightleftharpoons A+B$, $K_p(T)$: $\frac{p_{A}p_{B}}{p_{AB}}=K_p(T)$

From the last equation, the equilibrium constant for the reaction $AB \rightleftharpoons A+B$ can be defined as the ratio of the partial pressures of the products of reaction to the partial pressures of the reactants.

Generalizing this idea, consider the general chemical equation

$v_{1}A_{1}+v_{2}A_{2}+v_{3}A_{3} \rightleftharpoons v_{4}A_{4}+v_{5}A_{5}$

The corresponding equilibrium constant will be

$K_p(T)=\frac{(p_{A4})^{v_4}(p_{A5})^{v_5}}{(p_{A1})^{v_1}(p_{A2})^{v_2}(p_{A3})^{v_3}}$

The last equation is another form of the law of mass action, and it is extremely useful in the calculation of the composition of an equilibrium chemically reacting mixture.
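As a minimal numerical sketch of the general law of mass action, the small helper below (a hypothetical function, not part of any standard library) forms $K_p$ from lists of (partial pressure, stoichiometric coefficient) pairs for the products and reactants:

```python
def law_of_mass_action(products, reactants):
    """Kp = product over products of (p_i)^(nu_i), divided by the
    corresponding product over reactants. Each argument is a list of
    (partial_pressure, stoichiometric_coefficient) pairs."""
    num = 1.0
    for p, v in products:
        num *= p ** v
    den = 1.0
    for p, v in reactants:
        den *= p ** v
    return num / den

# For AB <=> A + B with hypothetical equilibrium pressures (atm):
# Kp = (pA * pB) / pAB = (0.3 * 0.3) / 0.4 = 0.225 atm
print(law_of_mass_action([(0.3, 1), (0.3, 1)], [(0.4, 1)]))
```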

In summary, we have made three important accomplishments in this section:

1.We have defined the equilibrium constant.

2.We have shown it to be a function of temperature only.

3.We have derived a formula from which it may be calculated, based on a knowledge of the partition functions. Indeed, tables of equilibrium constants for many basic chemical reactions have been calculated.

Consider air at normal room temperature and pressure. The chemical composition under these conditions is approximately 79% $N_2$, 20% $O_2$, and 1% trace species such as Ar, He, $CO_2$, $H_{2}O$, etc., by volume. Ignoring these trace species, we can consider that normal air consists of two species, $N_2$ and $O_2$. However, if we heat this air to a high temperature, where 2500 K < T < 9000 K, chemical reactions will occur between the nitrogen and oxygen. Some of the important reactions in this temperature range are

$O_2 \rightleftharpoons 2O$

$N_2 \rightleftharpoons 2N$

$N+O \rightleftharpoons NO$

$N+O \rightleftharpoons NO^{+}+e^{-}$

That is, at high temperatures, we have present in the air mixture not only $O_2$ and $N_2$, but O, N, NO, $NO^+$, and $e^-$ as well. Moreover, if the air is brought to a given T and p, and then left for a period of time until the above reactions occur an equal amount in both the forward and reverse directions, we approach the condition of chemical equilibrium. For air in chemical equilibrium at a given p and T, the species $O_2$, O, $N_2$, N, NO, $NO^+$, and $e^-$ are present in specific, fixed amounts, which are unique functions of p and T. Indeed, for any equilibrium chemically reacting gas, the chemical composition (the types and amounts of each species) is determined uniquely by p and T.

The method discussed in this section is applicable to any equilibrium chemically reacting mixture. However, because a large number of high-speed, compressible flow problems deal with air, we will illustrate the method by treating the case of high-temperature air.

To begin with, there are several different ways of specifying the composition of a gas mixture. For example, the quantity of different gases in a mixture can be specified by means of

1.The partial pressures $p_i$. For air, we have $p_{O_{2}}$, $p_{O}$, $p_{N_2}$, $p_{N}$, $p_{NO}$, $p_{NO^+}$, and $p_{e^-}$.

2.The concentrations, i.e., the number of moles of species i per unit volume of the mixture, denoted by $[X_i]$. For air, we have $[O_2]$, $[O]$, $[N_2]$, etc.

3.The mole-mass ratios, i.e., the number of moles of i per unit mass of mixture, denoted by $\eta_i$. For air, we have $\eta_{O_2}$, $\eta_{N_2}$, etc.

4.The mole fractions, i.e., the number of moles of species i per unit mole of mixture, denoted by $X_i$. For air, we have $X_{O_2}$, $X_O$, $X_{N_2}$, etc.

5.The mass fraction, i.e., the mass of species i per unit mass of mixture, denoted by $c_i$. For air, we have $c_{O_2}$, $c_O$, $c_{N_2}$, etc.

Each of these is equally definitive for specifying the composition of a chemically reacting mixture—if we know the composition in terms of $p_i$, for example, then we can immediately convert to $X_i$, $c_i$, etc. However, for gas dynamic problems, the use of partial pressures is particularly convenient; therefore, the following development will deal with $p_i$.

Consider again a system of high-temperature air at a given T and p, and assume that the above seven species are present. We want to solve for $p_{O_2}$, $p_{O}$, $p_{N_2}$, $p_{N}$, $p_{NO}$, $p_{NO^+}$, and $p_{e^-}$ at the given mixture temperature and pressure. We have seven unknowns; hence we need seven independent equations. The first equation is Dalton’s law of partial pressures, which states that the total pressure of the mixture is the sum of the partial pressures (Dalton’s law holds only for perfect gases, i.e., gases wherein intermolecular forces are negligible):

$p=p_{O_2}+p_{O}+p_{N_2}+p_{N}+p_{NO}+p_{NO^+}+p_{e^-}$

In addition, we can define the equilibrium constant for each of the chemical reactions:

$O_2 \rightleftharpoons 2O$ , $\frac{(p_O)^2}{p_{O_2}}=K_{p,O_2}(T)$

$N_2 \rightleftharpoons 2N$ , $\frac{(p_N)^2}{p_{N_2}}=K_{p,N_2}(T)$

$N+O \rightleftharpoons NO$ , $\frac{p_{NO}}{p_{N}p_{O}}=K_{p,NO}(T)$

$N+O \rightleftharpoons NO^{+}+e^{-}$ , $\frac{p_{NO^+}p_{e^-}}{p_{N}p_{O}}=K_{p,NO^+}(T)$

In the last equations, the equilibrium constants $K_p$ are known values, calculated from statistical mechanics as previously described, or obtained from thermodynamic measurements. They can be found in established tables, such as the JANAF Tables. The remaining equations we need come from the indestructibility of matter, as follows.

Fact. The number of O nuclei, both in the free and combined state, must remain constant. Let $N_O$ denote the number of oxygen nuclei per unit mass of mixture.

Fact. The number of N nuclei, both in the free and combined state, must remain constant. Let $N_N$ denote the number of nitrogen nuclei per unit mass of mixture. Then, from the definition of Avogadro’s number $N_A$, and the mole-mass ratios $\eta_i$,

$N_A(2\eta_{O_2}+\eta_{O}+\eta_{NO}+\eta_{NO^+})=N_O$

$N_A(2\eta_{N_2}+\eta_{N}+\eta_{NO}+\eta_{NO^+})=N_N$

However, we have $p_{i}v=\eta_{i}RT$; therefore $\eta_{i}=\frac{p_{i}v}{RT}$.

Dividing the last equations, we will have

$\frac{2p_{O_2}+p_{O}+p_{NO}+p_{NO^+}}{2p_{N_2}+p_{N}+p_{NO}+p_{NO^+}}=\frac{N_O}{N_N}$

The last equation is called the mass-balance equation. Here, the ratio $N_O/N_N$ is known from the original mixture at low temperature. For example, assuming that at normal conditions air consists of 80% $N_2$ and 20% $O_2$,

$\frac{N_O}{N_N}=\frac{0.2}{0.8}=0.25$

Finally, to obtain our last remaining equation, we state the fact that electric charge must be conserved, and hence

$\eta_{NO^+}=\eta_{e^-}$

Therefore

$p_{NO^+}=p_{e^-}$

In summary, we have the following seven nonlinear, simultaneous algebraic equations that can be solved for the seven unknown partial pressures:

$p=p_{O_2}+p_{O}+p_{N_2}+p_{N}+p_{NO}+p_{NO^+}+p_{e^-}$

$\frac{(p_O)^2}{p_{O_2}}=K_{p,O_2}(T)$

$\frac{(p_N)^2}{p_{N_2}}=K_{p,N_2}(T)$

$\frac{p_{NO}}{p_{N}p_{O}}=K_{p,NO}(T)$

$\frac{p_{NO^+}p_{e^-}}{p_{N}p_{O}}=K_{p,NO^+}(T)$

$\frac{2p_{O_2}+p_{O}+p_{NO}+p_{NO^+}}{2p_{N_2}+p_{N}+p_{NO}+p_{NO^+}}=0.25$

$p_{NO^+}=p_{e^-}$

Furthermore, these equations require the pressure p and temperature T as input, in order to evaluate the equilibrium constants. Hence, these equations clearly demonstrate that, for a given chemically reacting mixture, the equilibrium composition is a function of T and p.

Also, it is important to note that the specific chemical species to be solved are chosen at the beginning of the problem. This choice is important; if a major species is not considered (for example, if N had been left out of our above calculations), the final results for chemical equilibrium will not be accurate. The proper choice of the type of species in the mixture is a matter of experience and common sense. If there is any doubt, it is always safe to assume all possible combinations of the atoms and molecules as potential species; then, if many of the choices turn out to be trace species, the results of the calculation will state so. At least in this manner, the possibility of overlooking a major species is minimized.
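To make the procedure concrete without writing a full seven-equation solver, the sketch below works out the reduced two-species analogue for pure oxygen: Dalton's law $p = p_{O_2} + p_O$ together with $p_O^2/p_{O_2} = K_{p,O_2}$ reduces to a quadratic that can be solved in closed form. The value of $K_p$ used here is illustrative only, not a tabulated one:

```python
import math

def oxygen_equilibrium(p, Kp):
    """Equilibrium partial pressures for O2 <=> 2O at total pressure p,
    given an equilibrium constant Kp (same pressure units as p).
    Substituting pO2 = p - pO into pO^2/pO2 = Kp gives
    pO^2 + Kp*pO - Kp*p = 0, solved by the quadratic formula
    (taking the positive root)."""
    pO = (-Kp + math.sqrt(Kp * Kp + 4.0 * Kp * p)) / 2.0
    return p - pO, pO  # (pO2, pO)

# Illustrative value Kp = 0.1 atm at p = 1 atm (not from a real table):
pO2, pO = oxygen_equilibrium(1.0, 0.1)
print(pO2, pO)
```

The full seven-species air system is solved the same way in principle, except that the coupled nonlinear equations must be handled iteratively, e.g., by a Newton-type method.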

The following figure presents the equilibrium composition of air at different temperatures.

Consider a system of oxygen in chemical equilibrium at p = 1 atm and T = 3000 K. Although the following figure is for air, it clearly demonstrates that the oxygen under these conditions should be partially dissociated. Thus, in our system, both $O_2$ and $O$ will be present in their proper equilibrium amounts. Now, assume that somehow T is instantaneously increased to, say, 4000 K. Equilibrium conditions at this higher temperature demand that the amount of $O_2$ decrease and the amount of $O$ increase. However, as explained, this change in composition takes place via molecular collisions, and hence it takes time to adjust to the new equilibrium conditions. During this nonequilibrium adjustment period, chemical reactions are taking place at a definite net rate. The purpose of this section is to establish relations for the finite time rate of change of each chemical species present in the mixture—the chemical rate equations.

Continuing with our example of a system of oxygen, the only chemical reaction taking place is

$O_{2}+M \longrightarrow 2O+M$

where M is a collision partner; it can be either $O_2$ or $O$. Using the bracket notation for concentration, we denote the number of moles of $O_2$ and $O$ per unit volume of the mixture by $[O_2]$ and $[O]$, respectively. Empirical results have shown that the time rate of formation of $O$ atoms via the last equation is given by

$\frac{d[O]}{dt}=2k[O_2][M]$

where $\frac{d[O]}{dt}$ is the reaction rate, k is the reaction rate constant, and the last equation is called a reaction rate equation. The reaction rate constant k is a function of T only. The last equation gives the rate at which the reaction goes from left to right; this is called the forward rate, and k is really the forward rate constant $k_f$:

$O_{2}+M \xrightarrow[]{k_f} 2O+M$

hence it could be written as

$\frac{d[O]}{dt}=2k_f[O_2][M]$
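The temperature dependence of $k_f$ is commonly fitted to an Arrhenius-type expression, $k_f = A\,e^{-E_a/kT}$. The sketch below is a hypothetical illustration: the pre-exponential factor A is not a measured value, and the activation temperature is taken to be of the order of the $O_2$ dissociation energy, roughly 59,500 K:

```python
import math

def k_arrhenius(T, A, Ea_over_k):
    """Arrhenius-form rate constant k(T) = A * exp(-Ea/(k*T)),
    with the activation energy expressed as a temperature Ea/k.
    Both A and Ea_over_k are illustrative, not measured, values."""
    return A * math.exp(-Ea_over_k / T)

# Dissociation rates rise very steeply with T, since only the most
# energetic collisions can break the O2 bond.
print(k_arrhenius(3000.0, 1.0e18, 59500.0))
print(k_arrhenius(4000.0, 1.0e18, 59500.0))
```

The steep exponential is why dissociation, negligible at room temperature, dominates the chemistry behind a strong shock.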

The reaction that would proceed from right to left is called the reverse reaction, or backward reaction,

$O_{2}+M \xleftarrow[k_b]{} 2O+M$

with an associated reverse or backward rate constant $k_b$, and a reverse or backward rate given by

$\frac{d[O]}{dt}=-2k_b[O]^{2}[M]$

Note that in both the forward and backward rate equations, the right-hand side is the product of the concentrations of those particular colliding molecules that produce the chemical change, each raised to a power equal to its stoichiometric mole number in the chemical equation. The forward rate equation gives the time rate of increase of O atoms due to the forward reaction, and the backward equation gives the time rate of decrease of O atoms due to the reverse reaction. However, what we would actually observe in the laboratory is the net time rate of change of O atoms due to the combined forward and reverse reactions,

$O_{2}+M \rightleftharpoons 2O+M$

and this net reaction rate is given by

Net rate: $\frac{d[O]}{dt}=2k_f[O_2][M]-2k_b[O]^{2}[M]$

Now consider our system to again be in chemical equilibrium; hence the composition is fixed with time. Then $\frac{d[O]}{dt}\equiv0$, $[O_2]\equiv[O_2]^*$, and $[O]\equiv[O]^*$, where the asterisk denotes equilibrium conditions. In this case, the net rate equation becomes

$0=2k_f[O_2]^{*}[M]^{*}-2k_b([O]^{*})^{2}[M]^{*}$

or $k_{f}=k_{b}\frac{([O]^{*})^{2}}{[O_2]^{*}}$

Examining the chemical equation given above, and recalling our earlier discussion, we can identify the ratio $\frac{([O]^{*})^{2}}{[O_2]^{*}}$ in the last equation as an equilibrium constant based on concentrations, $K_c$. This is related to the equilibrium constant based on partial pressures, $K_p$, that we derived before. It directly follows for this oxygen reaction that

$K_{c}=\frac{1}{RT}K_{p}$

Hence, it can be written as

$\frac{k_f}{k_b}=K_c$

The last equation, although derived by assuming equilibrium, is simply a relation between the forward and reverse rate constants, and therefore it holds in general for nonequilibrium conditions. Therefore, the net rate can be expressed as

$\frac{d[O]}{dt}=2k_f[M]\left\{[O_2]-\frac{1}{K_c}[O]^2\right\}$

In practice, values for $k_f$ are found from experiment, and then $k_b$ can be directly obtained from $\frac{k_f}{k_b}=K_c$. Keep in mind that $k_f$, $k_b$, $K_c$, and $K_p$ for a given reaction are all functions of temperature only. Also, $k_f$ in

$\frac{d[O]}{dt}=2k_f[M]\left\{[O_2]-\frac{1}{K_c}[O]^2\right\}$

is generally different depending on whether the collision partner M is chosen to be $O_2$ or $O$.
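The net rate equation can be integrated in time to watch a mixture relax toward chemical equilibrium. The sketch below uses a simple explicit Euler step with illustrative, dimensionless rate data (not measured values):

```python
def relax_oxygen(O2_0, O_0, M, kf, Kc, dt, steps):
    """Explicit-Euler integration of the net rate
    d[O]/dt = 2*kf*[M]*([O2] - [O]^2/Kc),
    depleting [O2] by half of each increment in [O]
    (one O2 molecule is consumed per two O atoms produced).
    All numbers are illustrative and in arbitrary consistent units."""
    O2, O = O2_0, O_0
    for _ in range(steps):
        dO = 2.0 * kf * M * (O2 - O * O / Kc) * dt
        O += dO
        O2 -= 0.5 * dO
    return O2, O

# Start from pure O2 and integrate long enough to reach equilibrium,
# where the composition satisfies [O]^2/[O2] = Kc.
O2, O = relax_oxygen(1.0, 0.0, 1.0, 1.0, 0.5, 1e-3, 20000)
print(O2, O)
```

Starting from pure $O_2$, the net rate is initially all forward; as $[O]$ builds up, the reverse term grows until the two balance at the equilibrium composition, illustrating how equilibrium emerges as the steady state of the finite-rate equations.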

It is important to note that all of the above formalism applies only to elementary reactions. An elementary chemical reaction is one that takes place in a single step. For example, a dissociation reaction such as

$O_2+M \leftrightarrow 2O+M$

is an elementary reaction because it literally takes place by a collision of an $O_2$ molecule with another collision partner, yielding directly two oxygen atoms. On the other hand, the reaction

$2H_{2} +O_2 \leftrightarrow 2H_{2}O$

is not an elementary reaction. Two hydrogen molecules do not come together with one oxygen molecule to directly yield two water molecules, even though if we mixed hydrogen and oxygen together in the laboratory, our naked eye would observe what would appear to be the direct formation of water. This reaction does not take place in a single step. Instead,

$2H_{2} +O_2 \leftrightarrow 2H_{2}O$

is a statement of an overall reaction that actually takes place through a series of elementary steps:

$H_{2} \leftrightarrow 2H$

$O_{2} \leftrightarrow 2O$

$H + O_{2} \leftrightarrow OH + O$

$O + H_{2} \leftrightarrow OH + H$

$OH + H_{2} \leftrightarrow H_{2}O + H$

The last five equations constitute the reaction mechanism for the overall reaction of

$2H_{2} +O_2 \leftrightarrow 2H_{2}O$

Each of these five equations is an elementary reaction.

We again emphasize that the law of mass action is valid for elementary reactions only.

1- John D. Anderson, "Modern Compressible Flow: With Historical Perspective", McGraw-Hill Science/Engineering/Math, 3rd edition (July 19, 2002)

2- Ken A. Dill, Sarina Bromberg, "Molecular Driving Forces: Statistical Thermodynamics in Chemistry & Biology", Garland Science, 1st edition (September 13, 2002)

Note: I primarily used reference number one for this talk. However, the second reference is quite a nice book on statistical thermodynamics and is self-explanatory on this subject. I highly recommend it for further reading on statistical thermodynamics.

Please refer to details of the presentation at