User:Chjoaygame/sandbox/archive 1

Tsirel

I presuppose that I am slow- and dim-witted, and ignorant to boot. This is partly just a fact of life (I know myself), and partly because I am very concerned that, in this area, the obvious may be deceptive or misleading or already misled or erroneous. The stakes are high, and the risk of mistake is correspondingly high. What is obvious to me today might be mysterious, exotic, or nonsensical to me tomorrow. What is obvious to you or to Erdélyi will often seem mysterious or nonsensical to me if I have not patiently and carefully considered it, or if there is some slip-room of unshared presupposition or language. Therefore I am inclined to state the obvious, and even to state it in several different ways, at every step. I think it likely to be the quickest and safest way. I ask your patience in this.

1.

No one says that Bell inequalities follow from probability theory. Surely not. (Otherwise they would be satisfied by the quantum theory as well.)

Agreed. In more detail, I think as follows.

You are pointing out that the conclusions should rely not solely on pure mathematics, but also on physical assumptions. In your sentence, "follow from probability theory" means 'can be derived, without further premises, from mathematical probability theory alone'. Of course you are right about that.

For physically useful conclusions, one needs both (1) the mathematical theory that was the main topic of Kolmogorov's book, and (2) the physical model, with its presupposed metaphysics, that presents one's view of the real world and its relation to experimental data. To get a sound conclusion, one also needs (3) to fit and match (1) and (2).

Kolmogorov, page 3:

§ 2. The Relation to Experimental Data⁴
     We apply the theory of probability to the actual world of experiments in the following manner:
     1) There is assumed a complex of conditions, ℭ, which allows of any number of repetitions.
________________________
⁴ The reader who is interested in the purely mathematical development of the theory only, need not read this section, since the work following it is based only upon the axioms in § 1 and makes no use of the present discussion. Here we limit ourselves to a simple explanation of how the axioms of the theory of probability arose and disregard the deep philosophical dissertations on the concept of probability in the experimental world. In establishing the premises necessary for the applicability of the theory of probability to the world of actual events, the author has used, in large measure, the work of R. v. Mises, [1] pp. 21-27.

Kolmogorov, page 9:

In consequence, one of the most important problems in the philosophy of the natural sciences is—in addition to the well-known one regarding the essence of the concept of probability itself—to make precise the premises which would make it possible to regard any given real events as independent. This question, however, is beyond the scope of this book.[1]

Gnedenko, page 15:

C H A P T E R    I

THE CONCEPT OF PROBABILITY

§ 1. Certain, Impossible, and Random Events
On the basis of observation and experiment science arrives at the formulation of the natural laws that govern the phenomena it studies. The simplest and most widely used scheme of such laws is the following:
      1. Whenever a certain set of conditions ℭ is realized, the event A occurs.[2]
  1. ^ Kolmogorov, A.N. (1933/1956). Foundations of the Theory of Probability, translated from the Russian second edition to English by N. Morrison, second English edition, Chelsea, New York.
  2. ^ Gnedenko, B.V. (1962). The Theory of Probability, translated from Russian by B.D. Seckler, Chelsea, New York.

We need the "complex of conditions, ℭ", the physical account.

My present focus on "the logic" means that I think the fault in Bell doctrine is that it does not supply (3) a proper fit of the physical account (2) to the rules (1) that are set out as the main topic of Kolmogorov's book. If one misapplies those rules in getting one's conclusions, then one's complex of conditions, ℭ, one's physical account, becomes irrelevant to one's conclusions, because then one's conclusions, read with understanding, are nonsense and refer to nothing real.

For definiteness, I can add that, loosely speaking, I have no urgent or burning quarrel with Bell's physical assumptions, his physical account, which he will eventually discredit by deriving a contradiction. My concern is that he doesn't properly fit the physical account to the rules of probability theory; that's what I mean by 'faulty logic'. Because of the faulty logic, the physical assumptions are never actually tested, and are consequently irrelevant to the conclusions.

My commentary on this point 1 proves nothing. It states, however, an agenda of what I need to do here.

2.

No one says that independent existence of spatially separated subsystems implies their statistical independence. (Otherwise all correlations would be zero in the Bell framework; but the Bell inequality is CHSH ≤ 2, not CHSH = 0.) It is good to have this stated explicitly by you.

I have to admit that I was suspicious, likely mistakenly, that Bell slipped in something close to the confusion that your statement here rules out. This was perhaps careless or mistaken of me, but at this moment I am not sure of the extent of such carelessness or mistake.

This also clarifies for me, I think, the meaning of your phrase "shared randomness". It means what Bell symbolizes by λ, in effect the "hidden variables", appearing in his expectation formula as second argument of both his A(⋅,⋅) and his B(⋅,⋅), and as the argument of his ρ(⋅), and as the integration variable.
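For definiteness, the formula in question can be set down; this is the standard textbook form of Bell's local-hidden-variable expectation value and of the CHSH combination quoted above, offered only as a reference point, not as a quotation from Bell's paper:

    % Bell's expectation value: the settings a and b enter only through
    % A and B; the shared randomness \lambda is integrated out against a
    % single distribution \rho that depends on neither a nor b.
    E(a,b) = \int A(a,\lambda)\, B(b,\lambda)\, \rho(\lambda)\, d\lambda ,
    \qquad |A| \le 1, \quad |B| \le 1, \quad \int \rho(\lambda)\, d\lambda = 1 .

    % The CHSH combination, for setting pairs (a, a') and (b, b'):
    \mathrm{CHSH} = E(a,b) + E(a,b') + E(a',b) - E(a',b') ,
    \qquad |\mathrm{CHSH}| \le 2 .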

I think my concern might be expressed by the proposition that Bell's ρ(⋅) ought properly to be

second law statement

The second law of thermodynamics is about transfers of matter and energy that can actually occur between bodies of matter and radiation. Each participating body is initially in its own state of internal thermodynamic equilibrium. The bodies are initially separated from one another by walls that obstruct the passage of matter and energy between them. The transfers are initiated when some external agency makes one or more of the walls less obstructive.[1] The transfers establish new equilibrium states in the bodies.

The law states that the sum of the entropies of the participating bodies always increases.

In an idealized limiting case, that of a reversible process, this sum remains unchanged.
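In symbols, a minimal restatement of the two preceding sentences, with S_i the initial entropy of participating body i and S'_i its final entropy (the primed notation is mine, not from any source quoted here):

    % The sum of the entropies of the participating bodies does not
    % decrease under the transfers described:
    \sum_i S'_i \ge \sum_i S_i ,
    % with equality only in the idealized reversible limit.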

The transfers invariably bring about spread, dispersal, or dissipation[2] of matter or energy, or both, amongst the bodies. They occur because more kinds of transfer through the walls have become possible.[3]

If, instead of making the walls less obstructive, the thermodynamic operation makes them more obstructive, there is no effect on an established thermodynamic equilibrium.

The second law describes irreversibility in thermodynamic processes that actually occur. The irreversibility arises from the asymmetric character of thermodynamic operations, not from internally irreversible microscopic properties of the bodies. Thermodynamic operations are macroscopic external interventions imposed on the participating bodies, not derived from their internal properties.

Of course, thermodynamics relies on presuppositions, for example the existence of perfect thermodynamic equilibrium, that are not exactly fulfilled by nature.

  1. ^ Pippard, A.B. (1957/1966), p. 96: "In the first two examples the changes treated involved the transition from one equilibrium state to another, and were effected by altering the constraints imposed upon the systems, in the first by removal of an adiabatic wall and in the second by altering the volume to which the gas was confined."
  2. ^ W. Thomson (1852).
  3. ^ Pippard, A.B. (1957/1966), p. 97: "it is the act of removing the wall and not the subsequent flow of heat which increases the entropy."

Introduction to entropy

While it is not necessary, I think of the (first) entropy of mixing problem as a container with two different (let's say ideal) gases separated by a partition which is then removed, the temperature and pressure being held constant throughout the process. Am I correct in saying that your view of entropy as dispersal refers not just to dispersal of energy, but of mass as well?

If this is the case, then we have to consider a case where an experimenter is unable to distinguish the difference between the two gases. It is an interesting fact that, in that case, there will be no experimental contradiction of classical thermodynamics or of its statistical mechanical explanation, and the change in entropy will be zero. How does your interpretation of entropy deal with this? My interpretation of entropy as "a measure of what we don't know" suffers here as well, and has to be amended to say that entropy is "a measure of what we know we don't know". PAR (talk) 13:37, 5 November 2020 (UTC)

I didn't invent the idea of spreading, and I think you didn't invent the idea of the ignorance interpretation.
Let me think about the problem in which there is "a container with two different (let's say ideal) gases separated by a partition which is then removed".
The system is compound, consisting of two subsystems, jointly isolated from the outside world by impermeable walls, and separated from one another by an immovable and impermeable wall. The thermodynamic operation is the making of the separating wall perfectly permeable to all matter and energy, practically the same as removing the separating wall. For the present, I would like to keep the isolating wall. I will let the matter in the compound system make up its own mind about its pressure and temperature. Initially, the subsystems have respective entropies S₁ and S₂. The final system, without an effective separating wall, will have an entropy S. The second law says that S ≥ S₁ + S₂. For definiteness, initially, the subsystems have respective volumes V₁ and V₂, and respective internal energies U₁ and U₂. The final system, without an effective separating wall, will have a volume V and an internal energy U. Combined with the isolating wall being kept, the notions of volume and internal energy entail that V = V₁ + V₂ and U = U₁ + U₂.
Let us also assume that the temperatures and pressures of the two initial subsystems were the same, T₁ = T₂ and P₁ = P₂.
If the chemical compositions of the two initial subsystems were the same, it seems reasonable enough to expect that the final temperature and pressure will also be the same, T = T₁ = T₂ and P = P₁ = P₂, and that S = S₁ + S₂.
We might then say that there has been no transfer, and that the process has been null or trivial?
Has there been any spreading?
It seems reasonable to impose a convention saying that there has been no spreading?
If the chemical compositions of the two initial subsystems were different, we could reasonably expect that some spreading would occur to produce the final state, and that S > S₁ + S₂?
At this moment I am not clear whether this account addresses the relevant questions, or about the question of spread of matter or energy. Let me think about it. Chjoaygame (talk) 17:21, 5 November 2020 (UTC)
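As a check on the inequalities in the comment above, the textbook ideal-gas values can be set beside this account. The equal amounts n of each gas and the gas constant R are illustrative assumptions, not taken from the discussion:

    % Two ideal gases, n moles each, at the same T and P, each initially
    % occupying volume V. The thermodynamic operation lets each gas
    % spread into the combined volume 2V.
    % If the gases are chemically different, each contributes an
    % isothermal-expansion term n R \ln 2, so
    \Delta S = S - (S_1 + S_2) = 2\, n R \ln 2 > 0 .
    % If the gases are identical, no thermodynamic state variable
    % changes, and
    \Delta S = 0 .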
Thermodynamics is about experiments that can be done, not about word games. In thermodynamics, I think this entails 'If and only if the operator doesn't know how to distinguish the gases, he can't do experiments that depend on the distinction'. If the difference between the gases is very small, then it may take a long time to do an experiment that depends on the difference. Thermodynamics is very patient, expecting times on the order of the Poincaré recurrence time. The statistical assumption that the gases are distinct, translated into a thermodynamic context, means that the experimenter can tell the difference and use his knowledge to do experiments. In a defined context, or universe of discourse, it is self-contradictory to propose that the gases are different but the experimenter can't tell the difference. To say that the gases are different but the experimenter can't tell the difference, is to move the goalposts when the ball is in flight. Not a useful move in a scientific discussion. Does this address the problem? Chjoaygame (talk) 13:22, 6 November 2020 (UTC)
It does address the problem, but I don't think it is self-contradictory to propose that the gases are different but the experimenter can't tell the difference. There is no logical problem in assuming hypothetically that an experimenter does not have access to certain aspects of reality. It's what statistical mechanics is all about: coming up with a theory of a system without knowledge of its microstate. Also note that classical thermodynamics has no problem with the quantum effects on the specific heat. A thermodynamics experimenter 100 years ago did not know about these effects, yet their thermodynamic theory was entirely consistent: they made measurements, used the 4 laws, and were able to predict the outcome of future experiments. With the mixing example, the same is true of an experimenter who cannot distinguish between the two gases. The thermodynamic theory and the statistical mechanical explanation are entirely consistent, with an entropy change of zero. Once the gases become distinguishable, the entropy change is no longer zero, and, again, the thermodynamic theory and the statistical mechanical explanation are entirely consistent.
P.S. Of course you are right, I never meant to imply that I invented the idea :) PAR (talk) 13:30, 7 November 2020 (UTC)

discussion 1

Why do I object to saying that the gases are different but the experimenter can't tell the difference? It's a matter of universes of discourse, in other words, a matter of context. For the present, let us admit three contexts, or universes of discourse: (a) the thermodynamic, (b) the statistical mechanical, and (c) the universe in which the gases are different but the experimenter can't tell the difference.
In universe (a), there is a tightly defined ontology. There are systems, surroundings, transfers, processes, state variables, and so on. This universe goes along nicely without mention of observers, knowledge, or suchlike, and without mention of molecules, mean free paths, or suchlike, and without mention of probabilities and suchlike.
In universe (b), there are molecules, mean free paths, gross molecular energies, and suchlike, and probabilities and suchlike, and one may use ideas about observers' ability to distinguish molecules; optionally this universe might admit mathematical entities such as Shannon quantity of information, at the discretion of the ruler of the universe.
Universe (c) is a richer place, with a more generous ontology that includes the entities of universes (a) and (b), and that might perhaps include plenty of other entities nominated ad lib by the ruler of the universe.
It is for pedagogical reasons that I am proposing to restrict us to discourses on just universes (a) or (b). If one wants to discuss universe (c), one is entitled and free to do so, but I think it would not be good pedagogy in the present context. Chjoaygame (talk) 14:27, 7 November 2020 (UTC)
You are using terms that I am not comfortably familiar with, so forgive me if I miss the point. It seems to me it boils down to "Let's not think about it" for "teaching purposes" (pedagogy).
Teaching it is secondary to understanding. If your understanding of a subject is limited, your ability to teach it is ultimately limited as well. I am ultimately interested in understanding the situation and do not wish to limit that understanding for "pedagogical reasons". You speak of "ontologies", which I think means a universe of things or possibilities that are posited to exist. I am comfortable with an ontology that includes the scientist who cannot distinguish between what we posit to be two different gases. I am comfortable with the idea of "Wigner's friend", in which one scientist is inside Schroedinger's cat's box, and knows more about the state of the cat than a scientist outside the box, who does not. I am comfortable with the idea of a microstate, which a researcher cannot access, other than through its manifestation as a macrostate, but of which we posit knowledge in order to show that his probabilistic calculations are correct. I am somewhat comfortable with the idea that quantum mechanics cannot predict a particular outcome of an experiment for which we have posited the initial conditions as well as possible.
I don't see the discontinuous jump in entropy that occurs once a researcher discovers a way to distinguish between the two gases as a problem. It is simply another example of entropy being a measure of "what we know we don't know". The thing that puzzles and fascinates me is the connection between this almost subjective quantity and the objective thermodynamic entropy. PAR (talk) 06:23, 8 November 2020 (UTC)
"I am ultimately interested in understanding the situation and do not wish to limit that understanding for "pedagogical reasons"."
My reason for going for smaller or more tightly restricted universes, for pedagogy, is that it helps the beginner to see and value such distinctions as those between thermodynamics and statistical mechanics and informatics. The different universes have distinct axiomatics and methods of reasoning. I think keeping them distinct makes the logic clearer. 'X' is a thermodynamic statement, 'Y' is a statistical mechanical statement, 'Z' is an informatic statement. 'X is explained by Y' is from a more abstract universe. 'X, Y, Z are strongly analogous' is from a yet more abstract universe. You are capable of a strictly thermodynamic argument because you have long known and are familiar with the boundaries of the universes, but the beginner is subject to efforts to hide them, and his understanding can be helped by flagging them. For me, observing the boundaries enhances understanding. Learn inch by inch.
"The thing that puzzles and fascinates me is the connection between this almost subjective quantity and the objective thermodynamic entropy." I think it helps to view almost subjective quantities and objective thermodynamic entities as assignable to distinguishable universes of discourse. To assign them to a common universe is a considerable and notable logical move. It calls for careful consideration of the structure and axiomatics of the common universe, which we may flag for the beginner. Chjoaygame (talk) 06:37, 8 November 2020 (UTC)
I agree, the boundaries are vital to really understanding things, and I see no problem introducing those boundaries to a beginner. They are simple and easily understood. There is a difference between learning to drive a car (phenomenological, like thermodynamics), and poring over the blueprints and exploded parts diagrams that explain why the car behaves as it does (like statistical mechanics, except the car diagrams predate the car, and were not obtained by analyzing the car). Here is a quote from Einstein that sticks in my head (all emphasis mine):

When we say that we UNDERSTAND a group of natural phenomena, we mean that we have found a CONSTRUCTIVE theory which embraces them. But in addition to this most weighty group of theories, there is another group consisting of what I call theories of PRINCIPLE. These employ the analytic, not the synthetic method. Their starting point and foundation are not HYPOTHETICAL constituents, but EMPIRICALLY OBSERVED general properties of phenomena, principles from which mathematical formulae are deduced of such a kind that they apply to every case which presents itself. Thermodynamics, for instance, starting from the fact that perpetual motion never occurs in ordinary experience, attempts to deduce from this, by analytic processes, a theory which will apply in every case. The merit of CONSTRUCTIVE theories is their comprehensiveness, adaptability, and clarity; that of the theories of PRINCIPLE, their logical perfection, and the security of their foundation.

I'm quite sure the "constructive theory" associated with thermodynamics is statistical mechanics.
I disagree with the idea that informatics and statistical mechanics are separate universes. I see statistical mechanics as a sub-universe of informatics. Statistical mechanics makes certain mathematical assumptions about a particular system under consideration, and then applies mathematical information theory to obtain results.
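The sub-universe claim just above can be written in one line; this is the standard identification, not something spelled out by either editor here: the Gibbs entropy over microstate probabilities p_i is the Shannon entropy rescaled by Boltzmann's constant:

    % Gibbs entropy of a probability distribution p over microstates:
    S = -k_B \sum_i p_i \ln p_i ,
    % which, since \ln p = \ln 2 \cdot \log_2 p, equals the Shannon
    % entropy H(p) = -\sum_i p_i \log_2 p_i in different units:
    S = (k_B \ln 2)\, H(p) .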
On another topic, I realized that my "ignorant scientist" objection to the concept of energy/mass spreading is not valid. If the ignorant scientist cannot distinguish between the two gases, then there is no detectable spreading of mass, and the entropy change is zero. So if I am going to come up with a valid objection to the mass/energy spreading concept, I will have to look elsewhere. I have to think about it. PAR (talk) 19:31, 8 November 2020 (UTC)
principle    | analytic  | empirical    | thermodynamics
constructive | synthetic | hypothetical | statistical mechanics Chjoaygame (talk) 00:03, 9 November 2020 (UTC)
As for "I disagree with the idea that informatics and statistical mechanics are separate universes. I see statistical mechanics as a sub-universe of informatics": I think that the choice of which universe is to define the discourse is at the arbitrary aesthetic discretion of the relevant interlocutors, who are jointly the rulers of that universe. Chjoaygame (talk) 04:57, 11 November 2020 (UTC)

For lack of any definitely better venue to say the following to you, I will say it here. I reserve for myself a privilege of arbitrarily changing my mind about the following.

Your comments have helped me to recognise the following. You have led me to see that the article Entropy (energy dispersal) has a seriously faulty title. Till you raised questions about it for me, my thinking was loose and slipshod. I was just guided by the authority of Guggenheim, one of the few reliable sources on this subject. The trouble is brought into focus by the following sentences in that article.

In this alternative approach, entropy is a measure of energy dispersal or spread at a specific temperature. Changes in entropy can be quantitatively related to the distribution or the spreading out of the energy of a thermodynamic system, divided by its temperature.

Till our conversation, I had not thought that I understood those sentences. I was puzzled by them. Now I am inclined to guess that they are nonsense. I guess that the article was started by a fellow called Lambert. I am inclined to guess that, till I took an interest in it, it didn't mention Guggenheim. I will need to look a little at the history of the article, and at some writings of Lambert on the topic.

Led to the light by your thinking, I now believe that energy dispersal can sometimes be a fair interpretation, and that matter dispersal can sometimes be a fair interpretation. But neither is generally adequate. A generally and rigorously fair interpretation is that of Guggenheim. It is dispersal in an abstract 'space' of microscopically specified states. The physical problem is to define that 'space' and those states in such a way that a system in its own state of internal thermodynamic equilibrium accesses the relevant states with uniformly distributed probability. The usual specifications solve the problem. That is why we regard them as sound, valid, and correct.
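For an isolated system in its own state of internal thermodynamic equilibrium, that uniform-probability specification reduces to the Boltzmann expression; a standard identity, set down here as an illustration rather than as Guggenheim's own notation:

    % Uniform probability over the \Omega accessible microstates,
    p_i = 1/\Omega \quad (i = 1, \dots, \Omega),
    % makes the Gibbs entropy collapse to the Boltzmann form:
    S = -k_B \sum_{i=1}^{\Omega} (1/\Omega) \ln(1/\Omega) = k_B \ln \Omega .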

If you like, perhaps we might join forces to try to change the title? And accordingly change the contents of the article. For a start, I propose that a better title would be Entropy (micro-state dispersal).

Your thoughts? Chjoaygame (talk) 04:49, 11 November 2020 (UTC)

Looking at a few papers that advocate the 'spread' interpretation, I find it a little odd that they cite Denbigh, but not Guggenheim. Chjoaygame (talk) 18:19, 11 November 2020 (UTC)

Well, that is one of the reasons I respect your arguments and edits, even though I may disagree with them, and even though I have to go running to my dictionary to interpret some of them. Also I have found some of my disagreements with you to be baseless, so it has worked both ways. When you have a superior argument, I try to immediately adopt it as my own and continue forward, and I think you do the same. Regarding Lambert and his argument-less acolytes, I had a run-in with him a few years ago on the Entropy article. We might be in for a battle on that one. PAR (talk) 21:40, 11 November 2020 (UTC)
Looking a little more at some Lambert stuff, and noting that you think we might be in for a battle, I am now inclined to leave it. Chjoaygame (talk) 21:47, 11 November 2020 (UTC)
Follow up. I mentioned clout. Also relevant are this and this. Chjoaygame (talk) 23:08, 30 November 2020 (UTC)
It has boiled down to clout! Chjoaygame (talk) 20:21, 8 December 2020 (UTC)
I can't resist quoting Guggenheim in his 1949 paper on this topic. "It has been said that the entropy of the universe must always increase. The author, being neither philosopher nor cosmologist, is not competent to express any opinion as to whether the universe is an isolated system or even whether this question has any meaning. It is safer to confine one's remarks to systems known to be bounded." Chjoaygame (talk) 00:31, 1 December 2020 (UTC)