Talk:Sackur–Tetrode equation

Various points

Hi PAR. (1) Why use E for internal energy when the article on ideal gas uses U? (2) Why not define u = U/kN and v = V/kN for specific energy and volume? (3) The argument of the logarithm seems not to be dimensionless! Bo Jacoby 13:06, 9 November 2005 (UTC)

Good questions -
  • U is the correct letter to use according to an IUPAC article, and I will change it - Alberty, R. A. (2001). "Use of Legendre transforms in chemical thermodynamics". Pure Appl. Chem. 73 (8): 1349–1380.
  • The reason I use U and V is probably the same reason you use u and v. It's the way I learned the subject, and I don't see an inherent advantage to either notation.
  • The requirement that the argument of a logarithm be dimensionless can be relaxed somewhat since ln(x)+ln(1/x) is dimensionless, even when x is not. If you combine all the logarithms, it should be dimensionless.

Which brings up a point that worries me, and it is not a point for someone who is book-bound, so I know I'm talking to the right person. Even though it's not generally done, you should be able to assign the additional dimension of "particle" to certain quantities. N has dimensions of "particles", so that N/V has dimensions of "particles per unit volume". Boltzmann's constant k has units of "entropy per particle", so that kT is "energy per particle". Planck's constant has units of "action per particle". I cannot get the argument of the logarithm in the Sackur-Tetrode equation to be dimensionless doing this. The idea that particle = dimensionless is just a mindless hand-me-down rule that I can neither disprove nor justify. I have a strong hunch that it's OK to assign the dimension "particle", and that my inability to render the argument dimensionless points out that I am missing some subtle point in the physics. PAR 16:55, 9 November 2005 (UTC)
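For what it's worth, here is a quick machine check of the standard bookkeeping, in which N is treated as a pure number. It is only a sketch in Python using the pint units library, and the numerical values are illustrative placeholders; if N carried a "particle" dimension, the cancellation below would indeed fail, which is exactly the puzzle raised above.

 import pint  # third-party units library
 
 ureg = pint.UnitRegistry()
 V = 1.0 * ureg.meter ** 3                 # volume of the gas
 N = 6.022e23                              # particle count, dimensionless here
 m = 6.6e-27 * ureg.kilogram               # mass of one atom (roughly helium)
 U = 3400.0 * ureg.joule                   # internal energy
 h = 6.626e-34 * ureg.joule * ureg.second  # Planck's constant
 pi = 3.141592653589793
 
 # Argument of the Sackur-Tetrode logarithm: (V/N)*(4*pi*m*U/(3*N*h^2))^(3/2)
 arg = (V / N) * (4 * pi * m * U / (3 * N * h ** 2)) ** 1.5
 print(arg.to_base_units())  # a pure number: every unit cancels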

I am not an expert on the Sackur-Tetrode formula, and it confuses me. I would expect it to asymptotically approach the ideal gas formula, but that is not evident to me. Boltzmann's constant k is joules per kelvin per molecule, because PV/T = kN. I am not sure whether pressure should be called P or p. Personally I prefer the big letter, because V and T are big letters, but the article on pressure uses the small letter. Planck's constant h is joules per hertz per particle, because E = Nhν. Not only is the unit 'particle' usually omitted, but the unit of angle, the turn, is also omitted, leaving confusion over whether the unit is joule-seconds per turn per particle (h) or joule-seconds per radian per particle (h-bar). I have had big difficulties in understanding thermodynamics, mostly because the units were messy. (The natural gas industry expresses the gas content of crude oil in normal cubic feet per barrel.) There is no name for the SI unit of entropy, the joule per kelvin; I suggest calling it the clausius. It is also the unit of amount of matter, and of heat capacity. Nor is there a single letter signifying amount of matter: nR = kN = ?. I would prefer to get rid of the mole. I prefer to express matter density in clausius per cubic meter, which is the same as pascals per kelvin. In short: I agree that good use of units clarifies science. Bo Jacoby 14:12, 11 November 2005 (UTC)

The ideal monatomic gas entropy is

 S = kN ln[ (V/N) (U/N)^(3/2) ] + kN Φ

where Φ is some undetermined constant. The Sackur-Tetrode equation specifies that constant, so the two are completely compatible.

That's nice. But changing Φ merely changes the zero point of entropy: S → S + kN ΔΦ. So what is the physical significance of the formula? Bo Jacoby 13:32, 14 November 2005 (UTC)

I've never needed to use it practically, so I'm sort of winging it here. If you are dealing with entropy differences, you don't need to know the constant. If you are dealing with enormous entropies (S/Nk huge), then again, no need. If you are dealing with absolute entropy at or near the critical point (S/Nk of order unity), then still no need, because the equation breaks down there. But for S/Nk in an intermediate range, the question of what the constant is becomes important. Check this out.
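To make that concrete, here is a rough numerical check (a sketch in Python, using the equivalent thermal-wavelength form of the equation with U = (3/2)NkT): with the Sackur-Tetrode constant included, the absolute entropy of helium at 298.15 K and 1 bar lands right on the tabulated value of about 126.15 J/(mol·K), which is exactly the physical significance of fixing Φ.

 import math
 
 k = 1.380649e-23           # Boltzmann constant, J/K
 h = 6.62607015e-34         # Planck constant, J*s
 NA = 6.02214076e23         # Avogadro constant, 1/mol
 
 T = 298.15                 # temperature, K
 P = 1.0e5                  # pressure, Pa (1 bar)
 m = 4.0026 * 1.66054e-27   # mass of one helium atom, kg
 
 v = k * T / P                                 # volume per particle, V/N
 lam = h / math.sqrt(2 * math.pi * m * k * T)  # thermal de Broglie wavelength
 S_molar = NA * k * (math.log(v / lam ** 3) + 2.5)
 
 print(S_molar)  # ~126 J/(mol*K), matching the measured entropy of He gas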

I think there should be a strong distinction between the ideas of dimensions and units. The speed of light has the dimensions of velocity, or distance/time, and has units of m/sec in the SI system, cm/sec in the cgs system, and feet/sec in Imperial units. It can also be measured in furlongs per fortnight. From a theoretical point of view, who cares about the units? The dimensions are of fundamental theoretical importance; the units are not (except that they tell you the dimensions). Worrying about units is like worrying about the language a scientific paper is written in. Who cares, as long as it's translated into a language you understand? Worrying about dimensions is like worrying about what the paper is saying, and theoretically, this is worth worrying about. Worrying about units is of little theoretical significance (but of huge practical significance, of course).

The bottom line is that units are vitally important to communication, just as language is. I don't have a favorite language, but I do have one that I am confined to speak in because of my upbringing and mental limitations. I don't wish to be similarly confined by having a "favorite" set of units. Units are just some language I have to learn in order to communicate with others. Dimensions are much more interesting. PAR 16:30, 11 November 2005 (UTC)

If an author sticks to one system of units, then he need not distinguish between dimension and unit, because for each dimension there is but one unit in a well-designed system of units. (If an author sticks to one language, then he need not distinguish between concept and word either.) Bo Jacoby 13:32, 14 November 2005 (UTC)

Yes, the distinction will never be a problem if you live on a desert island. In reality, you have to negotiate the difference between concept and word with other people, and to do so effectively you need to understand the difference between the two. In my mind, I try to deal with concepts. The process of translating those concepts into words is extremely negotiable, whatever it takes to communicate. Which means I have little respect for "proper English", while at the same time I strive to be adept at it. I have little respect for units either, yet I always try to get them right.

That's why I like the topic of dimensional analysis, especially the Buckingham Pi theorem - one of its basic tenets is that all physical theories must be expressible in dimensionless terms in order to have any validity. That says it all! PAR 17:04, 14 November 2005 (UTC)
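(A standard illustration of that tenet, for anyone following along: for a pendulum with period t, length L, mass m, and gravitational acceleration g, the only dimensionless group that can be formed is Π = t·sqrt(g/L) - the mass cannot appear in any such group - so dimensional analysis alone forces t = C·sqrt(L/g) for some dimensionless constant C, before any dynamics is done.)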

I too like dimensional analysis. The logarithm of the speed of light is log(3×10^8 m s^-1) = 8 + log(3) + log(m) - log(s). The logs of units define a basis of a vector space. Changing basis is changing units. The dimension length is the line {log(x) + log(m) | x in R}. The dimension area is the line {log(x) + 2 log(m) | x in R}. The basis of a vector space is more basic than these lines. Thank you for pointing my attention to the Buckingham Pi theorem. Note that it can be expressed in units as well as in dimensions. Bo Jacoby 15:22, 15 November 2005 (UTC)

If you go back to the pre-Sackur-Tetrode equation (leave out the N in V/N), then you get a dimensionless argument for the logarithm. Reason being: the argument is the ratio of two phase-space hypervolumes, which yields the number of microstates consistent with the macrostate description. Dividing by N! spoils this. I don't believe the problem arises in quantum statistics, because there you deal with the number of states from the outset. --GRaetz 17:25, 21 January 2006 (UTC)

There is a problem with the Gibbs paradox derivation as well. You have two conditions holding in the 6N-dimensional phase space - a fixed energy:

 U = (1/2m) Σ_{i=1..N} Σ_{j=1..3} p_ij^2

and the gas contained in a box:

 0 ≤ x_ij ≤ L

where i labels the i-th particle and j = 1, 2, 3 corresponds to x, y, z. U is energy, m is particle mass, p is momentum, x is position, and L is the length of the edge of the box containing the gas. The first condition specifies the (3N-1)-dimensional surface of a 3N-dimensional sphere in the momentum part of the space, while the second specifies a 3N-dimensional volume in the position part. A 6N-dimensional volume in this phase space should have dimensions of (momentum × length)^(3N), or action^(3N). The product of the above surface and volume does not give such a volume in phase space, and you can't divide by h^(3N) and get a dimensionless number. I've looked at Huang, and he (mistakenly, I believe) uses the volume of the hypersphere rather than its surface area. Maybe I have something wrong, but I would like to know what it is. PAR 01:14, 22 January 2006 (UTC)
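(A sketch of one standard resolution, for later readers: the state count uses not the bare surface area but a thin energy shell of thickness ΔU. The hypersphere radius is R = sqrt(2mU), which has dimensions of momentum, so the surface area scales as R^(3N-1) and has dimensions of momentum^(3N-1). The shell thickness in momentum space, Δp = (∂R/∂U)ΔU = (m/R)ΔU, supplies the one missing factor of momentum, so that

 area × Δp × L^(3N) ~ momentum^(3N) × length^(3N) = action^(3N)

is an honest phase-space volume, and dividing by h^(3N) then gives a dimensionless count.)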

Derivation

Do other users think that we should include the derivation from the Schrödinger equation? It is relatively simple. Tiberius Curtainsmith —Preceding undated comment added 18:55, 30 July 2009 (UTC).

Information Theory

The phrase, "Making sense of something using X" might imply to a neophyte that X is needed to make sense of something. Many can "make sense" of the equation through the thermodynamic or the information theory perspectives. The article does not signal that entropy has these different perspectives (even though it is obvious to experts that the equation was originally derived in the ... perspective). I will change the title and first sentence of the section on information theory to signal the two interpretations. Each perspective has its champions, but the enthusiasm of their POV should not be reflected in titles. Laburke (talk) 13:33, 18 September 2011 (UTC)[reply]

I nominate that self/book-promotional section, "Interpretation of the equation through information theory", for deletion. Kotika98 (talk) 03:24, 18 May 2015 (UTC)
I wrote the section, and have restored it with modifications. I've changed the narrative to be factual, rather than a "he did this and that" kind of story. Point taken. This is not a "POV" kind of thing; it's a clear mathematical derivation of the equation using information-theoretic concepts. I have no desire to promote any book, and I am certainly not the author of the book - I have never met or communicated with him. PAR (talk) 06:24, 29 December 2015 (UTC)
It is not clear how an "information theoretic concept" put together with a "mathematical derivation" could possibly produce the "quantum mechanical uncertainty principle" term, with entropy equal to -3N ln(h). I could be convinced if such a term had an independent meaning in some context outside of the S-T formula. AFAIK it does not, and in fact it is nonsense altogether, since the S-T formula is incapable of producing S(T=0) = 0, which is what would actually fix additive constants such as the 5/2 and/or the ln(h), as is claimed here. Kotika98 (talk) 15:51, 21 April 2016 (UTC)
Once again, that section is total nonsense. I have had a lengthy exchange with the author of that section, "PAR", without a satisfactory resolution. He/she has since added "explanations" which amount to original research trying to reconcile the gibberish found in the crackpot book with common sense. Since I am a grownup and will not get dragged into a revert war, someone else should delete this section, preferably someone with a real name and identity. Kotika98 (talk) 06:35, 8 January 2018 (UTC)
That section, as currently written, appears to be coherent and factually correct. All portions of the equation "make sense" and have the correct values. The algebraic manipulations appear to be correct. If there is something specifically incorrect or wrong, you have to point it out. If something feels incoherent, or sounds like gibberish, you have to point that out, too. As written, I see nothing wrong, and so levying accusations of "total nonsense" is uncalled for. 67.198.37.16 (talk) 20:59, 19 May 2024 (UTC)

Derivation from uncertainty principle and relation to information entropy

Volume and energy are not the direct source of the states that generate entropy, so I wanted to express the equation in terms of x*p/h' states for each of the N particles. Someone above asked for a derivation from the uncertainty principle ("the U.P.") and said it is relatively simple. The S-T equation pre-dates the U.P., so it may be only for historical reasons that a more efficient derivation is not often seen.

The U.P. says x'p' > h/4π, where x' and p' are the standard deviations of x and p. The x and p below are the full range, not the 34.1% covered by one standard deviation, so I multiplied x and p each by 0.341. It is 2p because p could be + or - in each of x, y, and z. Plugging the variables in and solving comes very close to the S-T equation; for Ω/N = 1000 it was even more accurate than S-T, 0.4% lower.

Sackur-Tetrode equation:

 S = kN ( ln[ (V/N) (4πmU/(3Nh^2))^(3/2) ] + 5/2 )

where U = (3/2)NkT is the total kinetic energy of the N atoms.

Stirling's approximation, N! ≈ (N/e)^N, is used in two places, which results in a 1/N^(5/2) and an e^(5/2); that is where the 5/2 factor comes from. The molecules' internal energy U is kinetic energy in the monatomic case, which is the case the S-T equation applies to. b = 1 for monatomic gases, and it may simply be changed for other, non-monatomic gases that have a different KE/U ratio. The equation for p is the only difficult part of getting from the U.P. to the S-T equation, and it is difficult only because the thermodynamic measurements T (kinetic energy per atom) and V are an energy and a distance, where the U.P. needs x*p or t*E. This mismatch is where the 3/2 and 5/2 factors come from. The 2m is to get 2m * (1/2)mv^2 = p^2. Boltzmann's entropy assumes entropy is a maximum for the given T, V, and P, which I believe means the N atoms are evenly distributed in x^3 and all carry the same magnitude p of momentum.
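As a quick check of that Stirling step (just a sketch in Python comparing ln(N!) with the N ln(N) - N approximation used above):

 import math
 
 # N! ~ (N/e)^N, i.e. ln(N!) ~ N*ln(N) - N
 for N in (10, 100, 1000, 10000):
     exact = math.lgamma(N + 1)      # ln(N!)
     approx = N * math.log(N) - N    # ln((N/e)^N)
     print(N, exact - approx)        # difference ~ 0.5*ln(2*pi*N)
 
 # The dropped term grows only logarithmically, so per particle it
 # vanishes as N grows, which is why the approximation is safe here.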

[edit: see information theory entropy talk page for a better way]

To show how this can come directly from information theory, first remember that Shannon's H function is valid only for a random variable. In this physical case, each phase-space cell can take only two values - an atom carrying energy is in it, or not - so it is like a binary file. But unlike normal information entropy, some or many of the atoms may have zero momentum. The only requirement is that the total energy be the same, so the physical system has more freedom of choice than you would expect: physical entropy can use anywhere from N down to 1 symbols (atoms) to carry the same message (the energy), whereas information entropy is typically stuck with N. The modified Shannon H shown below is the sum of the surprisals and is equal to the information (entropy) content in bits. The left sum is the contribution from the empty phase-space cells and the right sum from those where an atom occurs. The left sum is about 1.4 bits per atom regardless of Ω (exactly 1 per atom if ln() had been used), and the right sum is about 17*N for a gas at room temperature (for Ω/N ~ 100,000 states/atom):

 H = Σ_empty log2( Ω/(Ω-N) ) + Σ_occupied log2( Ω/N )
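Here is a small numeric sanity check of those two sums as written above (taking p = N/Ω, with Ω/N = 100,000 states/atom; the atom count is an arbitrary illustrative number):

 import math
 
 N = 1.0e6                # number of atoms (illustrative)
 Omega = 1.0e5 * N        # number of phase-space cells
 p = N / Omega            # probability that a given cell is occupied
 
 left = (Omega - N) * math.log2(1.0 / (1.0 - p))   # empty cells
 right = N * math.log2(Omega / N)                  # occupied cells
 
 print(left / N)    # ~1.44 bits per atom (~1 nat per atom)
 print(right / N)   # ~16.6 bits per atom, i.e. about 17*N in total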

Physical entropy S then comes directly from this Shannon entropy, converting bits to natural units:

 S = k ln(2) H

A similar procedure can be applied to phonons in solids to go from information theory to physical entropy. For reference, here is the entropy of a 1D oscillator in a solid:

 S/k = (ħω/kT) / (e^(ħω/kT) - 1) - ln(1 - e^(-ħω/kT))

Ywaz (talk) 02:37, 15 January 2016 (UTC)

Have you read the Ben-Naim book? It is excellent. I looked at the above, and the approximations in σ worry me. Ben-Naim did it quite straightforwardly; the article only states the results. PAR (talk) 06:58, 25 January 2016 (UTC)
I looked at the U.P. and saw it covers only 34.1% of the possible positions for each of x and p, so I naively applied σ to each, and surprisingly it allowed the (4π/3)^(3/2) to be removed from the equation. Apparently the π and sqrt(2) in the normal distribution that determine σ already have the necessary "spherical" math in them. In any event, I used a shorter reasoning process, had fewer variables in the result (smaller Kolmogorov complexity), and it gave the right answer to a high degree of precision. If I had used more constants instead of fewer, my reasoning would be much more suspect. In whatever way it deviates from the exact proper physics, it appears to be an acceptable simplified story, with only minor loss from the simplification of the ideas. I am applying the de Broglie wavelength to get the number of possible states; I found a reference to Ben-Naim doing the same. Ywaz (talk) 13:06, 25 January 2016 (UTC)
In the references, the Ben-Naim reference links to a preview of his book, which might be useful. PAR (talk) 15:38, 27 January 2016 (UTC)

In arbitrary dimensions?

It would be nice to see the equivalent equation in arbitrary dimension d, instead of just d = 3. 67.198.37.16 (talk) 21:06, 19 May 2024 (UTC)
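(For reference, a sketch of the generalization: following the standard partition-function route Z = V^N / (N! λ^(dN)) with thermal wavelength λ = h/sqrt(2πmkT), the Sackur-Tetrode equation in d spatial dimensions works out to

 S = kN [ ln( V/(N λ^d) ) + (d + 2)/2 ]

which reduces to the usual form, with its 5/2, at d = 3.)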