Talk:Gibbs measure


Untitled

Still many things to add: discussion of existence/uniqueness; phase transitions; an alternative definition for translation-invariant states using the thermodynamic formalism; etc. One should also make it more readable ;). --YVelenik 16:14, 5 November 2005 (UTC)

Actually, there are many articles that cover this topic. This article needs to wikilink to the other articles. It needs to be tremendously simplified. In particular, it should possibly be merged or unified with partition function (mathematics). linas (talk) 18:49, 28 August 2008 (UTC)
I agree that it should be simplified, and possibly merged with other topics. Notice, though, that there does not seem to be any discussion of infinite-volume Gibbs measures anywhere (well, I couldn't find one, at least). Concerning the rating: the topic discussed here is certainly minor from the point of view of Physics, but it is of great importance in Probability theory (and in mathematically rigorous Statistical Physics)! Maybe it should be moved from Physics to Mathematics... Notice also that the framework of Gibbs random fields considerably extends other well-known notions such as Markov random fields (the importance of which you emphasize in other talk pages ;) ). —Preceding unsigned comment added by 84.73.61.129 (talk) 12:39, 30 August 2008 (UTC)

The way the article is written at the moment (especially the more introductory parts), it looks as though Gibbs measures and Markov random fields are the same thing. But that is not the case. The concept of a Gibbs random field is more general, as it does not require the Markov property (in its usual meaning), thus allowing for (suitable) infinite-range interactions in the potential. What replaces the Markov property are the DLR equations (which are stated in the lattice section). This should be stressed, maybe after the Hammersley–Clifford theorem (which is restricted to finite collections of random variables). Also notice that saying that the probability of the state being x is given by the Boltzmann factor only makes sense for finite collections of random variables. Infinite collections are very important in applications to Statistical Physics and Probability Theory, as it is only for those that a given potential can give rise to several states (several solutions to the DLR equations), i.e. that first-order phase transitions can occur.--129.194.8.73 (talk) 08:52, 15 March 2009 (UTC)
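To make the distinction in the previous comment concrete, here is a minimal sketch (the notation is my own, not the article's). For a finite collection of random variables the Boltzmann factor does define the measure directly,

<math display="block">\mu(x) = \frac{e^{-\beta H(x)}}{Z}, \qquad Z = \sum_{x} e^{-\beta H(x)},</math>

whereas for an infinite collection one can only prescribe conditional probabilities: a Gibbs measure is any probability measure μ satisfying the DLR equations, i.e. for every finite region Λ and μ-almost every boundary condition ω,

<math display="block">\mu\bigl(\sigma_\Lambda = x_\Lambda \,\big|\, \sigma_{\Lambda^c} = \omega_{\Lambda^c}\bigr) = \frac{e^{-\beta H_\Lambda(x_\Lambda \mid \omega_{\Lambda^c})}}{Z_\Lambda(\omega)}.</math>

Several distinct measures can satisfy these equations for the same potential, which is exactly the first-order phase transition mentioned above.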

You are absolutely right. Feel free to edit the article: be bold. (Someone else will clean up later in case there's a mistake.) Shreevatsa (talk) 16:52, 15 March 2009 (UTC)

Rewrote the top matter completely. I think it is important to distinguish Gibbs measures, which are a tool for studying infinite systems, from Gibbs distributions, which apply only to finite systems. The previous version of the article conflated the two. Only the latter topic should be merged with canonical ensemble or partition function; the former deserves its own article. I plan to add more on phase transitions, symmetries, and pure states (extreme points in the space of Gibbs measures) when I have the time, but maybe others will also contribute. Eigenbra (talk) 20:18, 14 August 2014 (UTC)

Puzzling sentence

In the second paragraph:

any probability measure that satisfies a Markov property

What does this mean? I thought a Markov property applied to stochastic processes, not to measures. 178.38.78.134 (talk) 01:34, 22 January 2015 (UTC)

A stochastic process is a probability measure describing the possible outcomes, no? Eigenbra (talk) 02:17, 22 January 2015 (UTC)
I usually think of it as a family of random variables indexed by time. But I understand now. The process is actually a spatial process on an infinite lattice (or other infinite graph), or possibly on R^n (??); in other words, an ensemble of fields, and "Markov" refers to the locality of the statistical dependencies.
A few more general points:
(1) Unless I've missed something, it's nearly impossible to figure out that the thrust of "Markov property" here is the finiteness of the interacting neighbor set. You also can't get it by reading Markov property, Markov process, Markov chain, or Markov random field. In the first three, the emphasis is on time series, and in the last one there is no special treatment of spatially infinite random fields and no requirement that the "neighborhood" in the definition of the local Markov property be finite (locally finite degree of the graph) -- though this is surely intended in treatments such as this one.
(2) The current article does not give a usable definition of the Gibbs measure. The second paragraph comes close, but it is rather dense for a non-expert (and too dense for the introduction), yet at the same time not formal enough to be a real definition.
(3) It should be stated in the introduction that the "systems" under consideration live on infinite graphs (or on R^n, but maybe this is too hard so far?), and the "states" are (random) fields or functions on these graphs, with values in some fixed space. This way the reader learns what entities the Gibbs measure is a probability measure ON, or an ensemble OF. The "systems" and "states" are otherwise quite mysterious. As it stands, the article characterizes a Gibbs measure by how it differs from the canonical ensemble, but doesn't establish the underlying setup. The article should stand more on its own. (A possible concrete setup is sketched at the end of this thread.)
178.38.78.134 (talk) 02:27, 22 January 2015 (UTC)
Sure, you can take the value at each lattice site as a random variable, and then you'll have a (spatial) "process". I concur with your diagnoses, and encourage you to be bold. Eigenbra (talk) 02:40, 22 January 2015 (UTC)
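Regarding point (3) above, a possible concrete setup, sketched with notation of my own choosing rather than the article's: the "system" lives on the vertex set V of an infinite graph (for example the lattice Z^d), each "state" is a configuration assigning to every vertex a value in a fixed single-spin space S (for the Ising model, S = {-1, +1}), and a Gibbs measure is a probability measure on the configuration space

<math display="block">\Omega = S^{V}, \qquad \sigma = (\sigma_v)_{v \in V} \in \Omega.</math>

The local Markov property discussed earlier is then the requirement that, for any finite region Λ ⊂ V, the conditional distribution of the configuration inside Λ given the configuration outside depends only on the finitely many vertices neighbouring Λ.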