Talk:Scale invariance

Untitled

I've removed the expert tag after some alterations to clean up, correct and clarify. Still more to do, and I will return at a later point--Jpod2 15:28, 9 August 2006 (UTC)

I need to think more about scale-invariance in fluid mechanics--any fluid dynamicists would be welcome--Jpod2 17:19, 10 August 2006 (UTC)

(Just a note: I think I saw a reference to the rock-paper-scissors game on this page. I'm not able to evaluate whether it's unintentional, somebody trying to be funny, or causes the data to be wrong. But please check). -- esap — Preceding unsigned comment added by 91.153.107.133 (talk) 17:56, 17 December 2016 (UTC)

Order of development

I think the results on scale invariance in fluid dynamics are pretty recent; I saw this discussed in an article in "Physics Today" only half a year ago.

As to the structure of this article: it should probably discuss scale invariance in random networks first, as I think the theory is "easier", and has relevance to practical things (like oil exploration (percolation), the tearing/shearing of materials, which includes dielectric breakdown, and the filtering not only of oil through rock but also of industrial aggregates through filters in general -- and generally self-organized criticality, which wikilinks back to this article a lot. I'm not sure, but I got the impression that diffusion-limited aggregation falls into this class as well.). The random-network theory makes abundant use of the renormalization group, and my impression was the random-network theory also has lots of cross-over to CFT. Next, I would argue that the classical ideas of phase transitions in stat mech should be discussed. There are results on scaling for the so-called Category:Exactly solvable models, e.g. the XY-model, the Ising model, etc. Last would come CFT. I know little about CFT, but have gotten the impression that it unifies some of the results on random networks, as well as some of the results on the exactly solvable models. Is this correct? linas 14:40, 21 August 2006 (UTC)

Hi. I'm going to respond in subsections, hopefully that will make things clear!

Fluid dynamics

When I started editing this page a few weeks ago I decided that, as well as restructuring and adding material, I would retain (and clarify) most of the existing topics that editors had decided should be in this article. One of these topics was fluid dynamics, though what was then on this page was somewhat vague. Anyway, I doubt what I have said about fluid dynamics is novel. I am merely recasting the Navier-Stokes equation in the language of classical field theory, and it is then quite straightforward to deduce whether the equation is invariant under a rescaling.
(Though, there is likely much more to say about scale-invariance in Navier-Stokes, depending on the precise form of the equations---the choice of an isothermal ideal gas equation of state is but one example). I think from discussions with fluid dynamicists that what I have said is uncontroversial, though it would be useful to say more.
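For concreteness, the classical rescaling argument can be written out in the simplest setting (a sketch only, assuming incompressible flow with constant kinematic viscosity; the compressible case with an equation of state mentioned above is more involved). The equations

    \partial_t \vec{u} + (\vec{u}\cdot\nabla)\vec{u} = -\frac{1}{\rho}\nabla p + \nu\,\nabla^2 \vec{u}, \qquad \nabla\cdot\vec{u} = 0

are left invariant by the rescaling

    \vec{u}_\lambda(\vec{x},t) = \lambda\,\vec{u}(\lambda\vec{x},\lambda^2 t), \qquad p_\lambda(\vec{x},t) = \lambda^2\,p(\lambda\vec{x},\lambda^2 t),

for any \lambda > 0, since every term then picks up the same overall factor of \lambda^3.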
There are *two* senses to the idea of scale invariance in fluid dynamics. One is the simple scaling of laminar flows and viscosity with regard to object size. This has been known for a century, and is why scale models in wind-tunnels work to model full-size aircraft. The other (and more important) meaning refers to the size scales at which vortices appear, in relation to the number of vortices at that size scale. More generally, it's any feature of turbulence with regard to the size scale. This second meaning is where some of the ideas from stat mech and QFT cross over. linas 15:38, 21 August 2006 (UTC)
Oh, sure. I understand--I thought you meant that what I had written there was somehow modern, which it is not. The connections with stat mech and CFT are surely interesting---but in the section on classical scale invariance, I think I just wanted to discuss the classical scaling relations.

Structure

I think I disagree with you that it should begin with scale invariance in random networks, but perhaps this reflects our respective backgrounds. For me, the most basic aspect of scale invariance is as a symmetry in classical physics. Although the topics you discuss all exhibit scale invariance, I think we would risk the article becoming a jumble of links to different topics, with no overriding principles made clear.
My vision for the page was to discuss classical physics (scaling of coordinates, scaling dimensions of fields) first, with examples. Much of what you refer to, in particular phase transitions and related concepts, really consists of examples of scale-invariant statistical field theories. These are rather similar in form to quantum field theories, and there is certainly a lot more to say about them. But surely we should discuss scale-invariance in classical field theories first before going on to discuss scale invariance in quantum and statistical field theories? Do you agree?
As I've said (and I intend to make the statement more rigorous either here or on the CFT page), almost all scale-invariant QFTs are also conformal field theories. So really CFTs are the proper language to describe the second-order phase transitions you refer to. One of the examples you mention is the Ising model. Its properties are described by the Wilson-Fisher fixed point in a scalar field theory with a Z_2 symmetry. The scaling dimensions in the corresponding CFT are of course just the Ising model critical exponents. So yes, CFT is often the right language to understand the models you refer to.
(As for what CFT really *is*, well essentially it is the above---a scale invariant quantum field theory. But of course there is much more to say about them, as I'm sure you know. Which is why the CFT page needs work!)--Jpod2 15:33, 21 August 2006 (UTC)
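To spell out the link between scaling dimensions and critical exponents mentioned above (a standard statement, given here only as a sketch): in a scale-invariant theory a primary operator \mathcal{O} of scaling dimension \Delta has a two-point function

    \langle \mathcal{O}(x)\,\mathcal{O}(0)\rangle \sim \frac{1}{|x|^{2\Delta}} ,

so for the Ising spin field \sigma one has \Delta_\sigma = (d-2+\eta)/2; in d = 2 this is \Delta_\sigma = 1/8, i.e. \eta = 1/4.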
Per comments about fluid dynamics, I think it's clear that there really are two different, distinct notions of scaling that need to be addressed. The first is the classical notion of posing equations in dimensionless, scale covariant form. The other idea is that of scaling in the sense of the renormalization group (and random networks belong to this second form). I have never thought about what the "simplest possible example" of the second type of scaling is. By "simplest possible", I really mean "one which requires the least amount of specialist knowledge", "one which can be explained to a grad student who had never heard of it before". This rules out most quantum and stat mech examples. I was suggesting that maybe random nets are such a simple example, but really, I don't know enough to figure out how to create a credible, accurate example that is small enough to sketch out and still be comprehensible. linas 15:56, 21 August 2006 (UTC)
OK sure. I think we can leave the fluid mechs as it is---I just mean it to be an example of classical scale invariance. Of course not all classical field equations have a scale invariance in this sense, so giving examples is not trivial, I don't think. Agreed?
However, I agree that it would be nice to see some discussion of the renormalization group. Let us think about how best to phrase and organise it. If I understand what you mean correctly, essentially the kind of scale-invariance you are talking about is intimately related to statistical field theory and CFT, and my feeling is that the Ising model phase transition might be a nice example. Perhaps calculating some anomalous dimensions and hence critical exponents, since this has a very close relation to practical experiments.
(This goes against your criterion of avoiding quantum and stat mech...OTOH, I am not an expert on random nets so would not be able to do any better than you would with that....)
I hope my discussion above made clear why I think the structure of the page should be close to what it is now. CFT and phase transitions certainly have to come after classical field theory, I believe. Do you agree?--Jpod2 17:26, 21 August 2006 (UTC)
Yes, and I never suggested otherwise :-) linas 00:26, 22 August 2006 (UTC)
Excellent. --Jpod2 08:48, 22 August 2006 (UTC)

Population genetics too

I remember reading some papers in the mid-1980s coming from Russia which applied renormalization group techniques to population genetics. More precisely, they formulated the problem on a population of strings in N letters, and then allowed for the cross-breeding of pairs of strings. The cross-breeding is a "two-point interaction" in what would otherwise be a "free field" theory. I remember that they recast the interaction in terms of matrices, but I hadn't yet heard of random matrix theory, so I don't know if it was related to that. They used the RG to look for phase transitions. linas 15:32, 21 August 2006 (UTC)

Well, as a quantum field theorist I guess I would say that all interacting systems can be described by QFT, at least at low energies ;)
But seriously, yes, that would be another worthwhile example, if you can dig out the reference. The RG has applications all over physics (and of course the fixed points of a given RG flow display scale invariance).
Perhaps our conclusion is that there needs to be a subsection on the renormalization group? Which is after all the language of scaling in physics. I will have to think about how best to organize such a section. --Jpod2 15:38, 21 August 2006 (UTC)

Fractals etc

Thanks for editing the section on self-similarity btw. However, I don't think you have made clear precisely why scaling exponents should be identified with fractal dimension. It wasn't my understanding that they should be, so perhaps you could explain this in more detail?

For example, if my statement of the self-similarity of the Koch curve is correct, this gives a scaling dimension (in the sense I have defined it, which is I think also the sense you mean it) of 1. I'm not sure why this should be related to the fractal dimension, log(4)/log(3).

Either my self-similarity relation is incorrect, or else your definition of scaling dimension is different from mine. As I said earlier, fractals are not my area of expertise, so perhaps I am missing what you mean. --Jpod2 15:44, 21 August 2006 (UTC)

I'm not sure what to do about it. The "classical" scaling you have is just the scaling of a curve drawn on a flat Euclidean plane, and there's no particular magic there. The fractal scaling is the scaling of length to area, for appropriately and somewhat subtly defined notions of "length" and "area", see box-counting dimension for example. I suppose I'm stretching the truth by calling the fractal dimension of the Koch curve the "critical exponent" of the curve. So maybe fractals are an example of the two types of scaling -- a classical form, where the scaling exponents are necessarily integer, and are fixed to the dimensions of Euclidean space, and a more subtle form of scaling, where scaling exponents are typically fractional. But this isn't really enough to serve as the hoped-for "simple example" of the renorm group in action. Hmmm. :-( linas 16:11, 21 August 2006 (UTC)
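For reference, the two notions being contrasted here can be put side by side (standard definitions, added for orientation). The "classical" scaling of a curve drawn in the plane is

    f(\lambda x) = \lambda^{\Delta} f(x),

with an integer (or simple) exponent \Delta, while the self-similarity (box-counting type) dimension of the Koch curve, which is built from N = 4 copies of itself each scaled by r = 1/3, is

    D = \frac{\log N}{\log(1/r)} = \frac{\log 4}{\log 3} \approx 1.26 .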
Dear Linas, perhaps we should think about this a bit more. However, this is what I think at the moment...
Basically, that we should *not* identify the scaling exponent with the fractal dimension. I agree there may be no magic in the scaling exponent of the Koch curve, but it does explain in a precise way exactly what we mean by its self-similarity. I.e. if you scale the coordinates and the `size' of the curve by certain discrete amounts, the curve looks the same. That is a good and clear starting point.
Of course, we could go on to explain more about fractals and fractal dimension, though one could argue this belongs in the fractals page. However, I think the main thing is not to confuse people by identifying two different things.
So what to do about it? I suggest that one of us edits the self-similarity section so that those two concepts are not identified. If you think it is best to retain the links to fractal dimension, I would suggest adding some more material so that it is seen in context. I.e. state precisely the scale invariance you refer to when you invoke the different definitions of fractal dimension. However, since fractals are related to, but not fundamental to, scale-invariance, it's possible you could just let people explore the link to fractals and the Koch curve if they want to. I leave it up to you, but I think it should be edited.--Jpod2 17:25, 21 August 2006 (UTC)
Perhaps you should introduce something like an enclosed area A(L) as a function of the length L of a fractal curve (I'm being vague but I'm sure the details can be made clear). Then something like A(λL) = λ^d A(L) will be true, right? That is the simplest way I think to cast the fractal dimension in the form of a scaling dimension. I expect you know more than me.
It's dangerous to try to define area naively. You might get away with it for the Koch curve, but you'll run into trouble elsewhere. The whole point of the box-counting dimension is that it defines the notion of area rigorously. linas 00:54, 22 August 2006 (UTC)
But this is certainly quite different from the scaling dimension of the function in the usual sense, and shouldn't be identified with it. Both are important in different ways. And surely Wikipedia is *not* the place to `stretch' definitions. Best wishes--Jpod2 20:49, 21 August 2006 (UTC)
OK, but this misses the point. There is nothing at all interesting about the Koch curve or any fractal from the classical scaling point of view. Classical scaling "doesn't see" fractals. Here's an example. Instead of the Koch curve, consider the following curve, a "spikey bar chart" of sorts: at x=1, place a spikey triangle of height 1. At x=1/3, place a spike of height 1/3. At x=1/9 place a spike of height 1/9, etc. Now, if one were to scale the x axis by exactly a factor of 3, then one gets back exactly the same curve, with the heights scaled by a factor of three. This is essentially how the article currently describes the scaling of the Koch curve example.
Hi, whether it misses the point depends on the point one is trying to make. In this part of the article we are describing *self-similar* functions. Unless I am missing something, these functions need not be fractal! For example the logarithmic spiral, or your example above. The idea here surely is to give the definition of what self-similarity means, and give some examples of self-similar curves.
The point of retaining the statement of classical scaling is that this is what distinguishes self-similar functions from any other arbitrary function. That is why it is `interesting'. So I think it makes sense to define it, and give examples.
AFAIAC, the above is a good starting point for the section on self-similarity. Agreed? Perhaps after the log spiral example we should have a ===Fractals=== subsection, with one of the examples as the Koch curve. That might be the place to introduce links to the various different fractal dimensions. The A(L) suggestion above is vague, but it (or a refinement of it) captures the essence of the way you want to think of fractal dimension as a scaling dimension. Perhaps after the Koch curve example you could rephrase something like this, and then point people to box-counting dimension for more general cases.
As it stands, you are confusing two separate issues in the way you introduce fractal dimension. As I said, surely this isn't the place to stretch definitions. Therefore, I think the page should be edited to remove this confusion, perhaps with the reordering I've suggested just above. What do you think? --Jpod2 08:36, 22 August 2006 (UTC)
Consider now the following: At x=1, place a spikey triangle of height 1. At x=1/3, place a spike of height 1/9. At x=1/9 place a spike of height 1/81, etc. Now, when I scale the x-axis by a factor of three, the spikes scale by a factor of nine. Gollee.
Consider now the following: At x=1, place a spikey triangle of height 1. At x=1/3, place a spike of height 1/3^d. At x=1/9 place a spike of height 1/9^d, etc. Now, when I scale the x-axis by a factor of three, the spikes scale by a factor of 3^d, for any real value of d. Gollee. I hope by now that you see how this is silly: All I'm really doing is describing the scaling of the curve y=x^d. There's no rocket science in that. Both the Koch curve example, and the logarithmic spiral example, as currently written, are essentially doing nothing other than this sort of pretty-darn-trivial scaling. A curve does not need to be fractal to scale like this, and scaling like this misses the features of what makes fractals special.
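The spike construction above can be condensed into one line (just restating the example, to show it is the scaling of y = x^d): with spike heights h(3^{-n}) = 3^{-nd}, the envelope is

    h(x) = x^{d}, \qquad h(x/3) = 3^{-d}\,h(x),

so stretching the x-axis by a factor of 3 rescales the heights by 3^{d}, for any real d.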
What I'm trying to say is that there's this other kind of scaling for fractals, and that this second kind of scaling is, in a certain sense, a very super-simplified variant of the scaling seen in stat mech/QFT. linas 00:50, 22 August 2006 (UTC)
I agree that once we mention fractals, it also makes sense to think about this other kind of scaling. And I'm not trying to say that fractals are not very closely associated with self-similarity. I just don't want the concepts of scaling dimension and fractal dimension to be introduced in a confusing way.
Also, I think you'd need to make that final statement more rigorous before we put something like it on the page...are you identifying anomalous dimensions in CFT with fractal dimensions? I don't think this connection holds, or if it does it is far from the standard case in CFT. But maybe you mean a different kind of connection? All the best--Jpod2 08:36, 22 August 2006 (UTC)

Scaling in networks

Ahh... but I do, and here is how to make it rigorous. Consider a random network. One classic example is percolation of oil through fractured rock (the original Phys Rev Lett was water through toilet paper that had holes punched in it, I believe). At the smallest scale, a rock either has a crack in it, and oil flows at a certain rate, or it does not, and there's no flow. If I consider a slightly larger scale, there will be a network of cracks that may or may not hook up. The scaling question now becomes: As I increase the size of the volume considered, how does flow scale with the size? For solid rock, there is no flow. For lightly fractured rock, flow increases less than linearly with size. For highly fractured rock, flow scales quadratically (as the surface area of the cross section). I hope that this example makes clear that the scaling of the flow is related to one of the fractal dimensions. I don't know which off the top of my head, but I know this stuff has been theoretically and experimentally analyzed a zillion times.
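For orientation, the way this is usually quantified in the percolation literature (exponent values quoted from memory, approximate): above the threshold p_c the conductivity or flow rate vanishes as a power law, and the incipient cluster is fractal,

    \sigma \sim (p - p_c)^{t}, \qquad M(L) \sim L^{D},

with t \approx 1.3 in two dimensions, t \approx 2.0 in three, and D = 91/48 \approx 1.9 for the two-dimensional incipient cluster.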

There is a similar scaling analysis used for stress fractures. Consider a random network of electrical fuses. We try to put an electric current through the net, from one side to the other. As we dial up the current, some fuses may pop, at which point current redistributes around them. Things may be just fine with a few blown fuses. But keep increasing the current, and at some point, one gets a cascade failure as each fuse gets overloaded in turn. The result is that the left and right hand sides become electrically disconnected. This is in fact a model of mechanical failure of a rod under tension. The scaling question is how to predict the critical point, the phase transition, as a function of the local connectivity of the fuses. Just as in the fractured rock, the fuses may have a very low or a very high connectivity, and this affects current flow.

A third network example is the ability of a pile of sand to support the weight of an object placed on the pile. Here, the network is the net of grains that touch one-another. See self-organized criticality for more about sand.

These are three network examples for which the ideas of fractal dimensions clearly apply, and have direct physical relevance. Furthermore, the second example has a very clear first-order phase transition. In fact, many of the ideas of traditional statistical mechanics can be directly applied to each of these cases -- and the literature applies them in great abundance: this was one of the big breakthroughs in materials science in the 1970's-1980's.

To connect these network problems to quantum requires a bit more work. Lattices such as the Ising model are regular, not random; the randomness does not arise from the lattice structure. Instead, imagine the shape of a ferromagnetic domain, and the collection of such domains. The energy content of an Ising-type model is primarily bound up in the domain walls, rather than the bulk. These domains are "randomly" distributed, and are of "random" size. The scaling question becomes this: "what is the total area of the domain walls (or the total energy bound to the domain walls), as a function of the temperature or applied magnetic field?" Equivalently, what is the size distribution of the ferromagnetic domains as a function of the temp/B-field? The point here is that the domains are "randomly distributed", have random sizes, and have random connectivity, and applying things like the box-counting dimension, one gets critical exponents. This time, the exponents vary not only with size, but also as the temperature or B-field is varied. I hope you now see the analogy to the network scaling problems, and also why the network-scaling problems are simpler. For example, in the sand pile, the sizes of the sand grains stay the same, and only the connectivity varies. In the ferromagnetic models, the sizes of the ferromagnetic domains vary. In the percolation model, the connectivity is fixed. In this sense, the percolation or random resistor networks are the easiest to describe and analyze.
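One compact way to phrase the domain-size question (a sketch, using the textbook critical-scaling relations rather than anything specific to this discussion): at the critical point the correlation length diverges and the cluster-size distribution is a power law,

    \xi \sim |T - T_c|^{-\nu}, \qquad n_s \sim s^{-\tau}, \qquad \tau = \frac{d}{D} + 1,

where D is the fractal dimension of the critical clusters, which is how box-counting-type exponents and thermodynamic exponents end up tied together.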

The connection between stat mech and quantum, I believe you know this. They both have analogous partition functions. The algebraic manipulations in each have a lot of similarity, and there's a lot of shared vocabulary. But now comes the hard part, the hand-waving part, the part that will irritate you and cause you to deny the whole thing as being wrong and crazy. So, take a deep breath, here goes: When students are taught the QFT partition function, they are instructed to visualize an infinite collection of little harmonic oscillators, one for each point in space. They are told to envision this vast infinite space, over which integration will be done. The space feels Euclidean-like: it seems homogeneous and flat. There is little or no talk of "measure" in the sense of measure theory. In measure theory, a measurable space is given a topology: namely, it is covered with a collection of open sets, and each set is given a size, its "measure" (see sigma algebra). All integrals ... the very notion of integration and integrability ... depend on having a topology of measurable sets, and integration is performed with regard to these sets. For smooth, finite-dimensional spaces, manifolds, etc., measure theory gives the usual, expected results.

For infinite dimensional spaces things change. I started but never finished a measure-theoretic description of the partition function in the article Potts model. Not my invention; this is cribbed from a textbook. For the one dimensional Potts model (or any one-dimensional stat mech model), one has the unique ability to visualize the partition function. The sigma-algebra is a collection of cylinder sets. The magic is that one can enumerate all cylinder sets using the rationals (p-adics to be precise). One may then graph the partition function, or anything derived from the partition function, on the real number line! So one has a very direct yet perfectly mathematically accurate visualization of what the partition function looks like. I hope you need only one guess for what the shape might be.

For the case of the one-D Potts model, the graph of the partition function looks a lot like the derivative of the Minkowski question mark. For a while I thought it was exactly that, but it's not; it's slightly distorted. But it has very clear dyadic fractal symmetry. Visually, it's just one of the De Rham curves. A more precise description in terms of dyadic fractals is pending. Ba-ding. That concludes my lecture. I have to go to work now. linas 14:25, 22 August 2006 (UTC)

Dear Linas
You've put forward a lot of interesting material there, and a fair amount to digest. But let me say a couple of things for the moment:
(1) Do you agree that we need a simple explanation of self-similar functions and their scaling dimensions, before we discuss fractals?
Yes. I was told in high-school that the parabola is a scale-free function.
Excellent.
(2) Do you agree that we should not identify fractal dimension with scaling dimension in the way it is currently stated?
Hmm. Yes. No. I don't know. Mathematicians named it "fractal dimension" because it really does behave like a spatial dimension. The numerical value of this dimension is computed by analyzing the scaling behaviour. You can't really write about the concept without using the words "scaling" and "dimension".
Doesn't mean we can't distinguish their use in different contexts :)
I understand that you are arguing that sometimes physical quantities will scale with an exponent which is the fractal dimension of *some* fractal---that is totally uncontroversial, and I wasn't questioning it. But if you want to make this point, it needs to be explained in context, not just thrown in there and left for people to decipher.
Sorry. WP has 13 thousand math and physics articles, and this one was not on my to-do list.
no problem.
I believe the size of the jump you are making is illustrated by the length of the (partial) explanation you have just written above. Can we agree that that identification, as it is currently stated, has to be removed or rephrased? Perhaps, as I suggested, by having a fractals subsection or section following the basic description of self-similarity. We can then think about how to include the physical applications you are talking about here on the talk page, which are definitely interesting.--Jpod2 15:16, 22 August 2006 (UTC)
The hard fact is that there are two types of scaling that need to be discussed. The article mostly talks about the first type, which I mentally lump together with dimensional analysis. One might make the argument that the first part of this article should be merged into the article on dimensional analysis. The other type of scaling physics is the scaling of stat mech and renorm group and related disciplines. It is a very different type of scaling. I am not trying to make a jump, I am trying to explain, in the simplest terms possible, what this "other" scaling is, and why it's called "scaling". linas 16:34, 22 August 2006 (UTC)


Quick reply---I'm not sure which bits you mean by `the first part', but scale invariance in classical field theory is not simply a question of dimensional analysis. Some theories are scale-invariant, some are not. The same is true in quantum theory.
I think we should keep the fractal dimension refs, but it should be in a different section. If it's not on your to-do list I will change it when I get the chance... --Jpod2 16:45, 22 August 2006 (UTC)

Ising and Potts

The identification of fractal dimensions and anomalous dimensions in CFT is actually what I was asking you about; your Potts model example is interesting and I would like to know more about it. Are you saying that the partition functions for certain (1D) models when solved exactly, are fractal? Or are you still talking about the distribution of domains in these models, which is a different question altogether?

The partition function for the simplest case, the 1D Ising model, can be solved exactly, and has a fairly straightforward dependence on T, the spin-spin coupling and the applied field H. Nothing fractal there. Probably that is a special case, so I will have to remind myself about the general Potts model.
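For reference, the exact transfer-matrix result being alluded to (standard textbook formula, periodic boundary conditions):

    Z_N = \lambda_+^{N} + \lambda_-^{N}, \qquad \lambda_\pm = e^{\beta J}\cosh(\beta H) \pm \sqrt{e^{2\beta J}\sinh^2(\beta H) + e^{-2\beta J}} ,

which is a smooth function of T, J and H for any N and in the thermodynamic limit.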

In any case, I certainly don't think that the canonical way to understand anomalous dimensions in a CFT is in terms of fractal dimensions of some fractal. Perhaps it is in some special cases.

Anyway, I think what I've said above in (1) and (2) needs to be addressed before we add more material.--Jpod2 15:30, 22 August 2006 (UTC)

I think we need to come to agreement that there are two different concepts of scaling, and that these should perhaps be discussed in two different articles.
Re: the Potts model: don't say "nothing fractal there" just because your stat mech textbook failed to say so. When you work with the scaling behaviour of fractals or random networks, you work with functions that are smooth, infinitely differentiable, and can often be expressed in terms of the classic special functions, polynomials, Bessel functions, and whatnot. So sure, virtually all textbooks lull you into thinking that stat mech is smooth and differentiable and there's "nothing fractal there". And that's wrong. If you study just the renormalization group for the fractured rock of an oilfield, all you get is a very smooth differential equation. Nothing fractal there. Don't be fooled by this smoothness into visualizing the rock as a smooth manifold.
Hi, no I said there was `nothing fractal there' because you specifically claimed the partition function was a fractal curve. This is not the case for the Ising model for Z(T,H,J). I think we can agree on that.
That's not to say there are not fractal patterns in the distribution of domains. I've been reading up on what I think you are referring to, this SLE stuff, and I agree there are connections between CFT and fractals. However, there are three important points
(1) These connections I believe only apply in certain cases, particularly 2d CFTs. It is not generic, though I'm willing and interested to be proved wrong on that.
(2) The connection is quite intricate, and so probably belongs in a section deep into the CFT page (A page perhaps we should work on at a later point)
(3) It is not the canonical way to understand a conformal field theory, certainly not for someone who is only just reading what scale invariance means. The basic meaning of a conformal field theory is a quantum field theory invariant under the conformal group. Whether there are other ways to understand specific CFTs is to me not something we should introduce early on in the scale invariance article.
The 1-D Potts model is utterly fractal at its core, and this can be explicitly visualized as follows. Consider the n-state Potts model. At each lattice location, the system can be in one of the states 0, 1, 2, ... ,n-1. The total state of the system can be completely described by an infinite string of digits. Take this string to be a base-n number: take it to be a real number, written in base-n. For every real number, there is a corresponding configuration of the 1-D Potts model, and vice versa. So, given some real number, convert it to the state configuration, and compute the value of the partition function for that configuration.
I understand better what you mean, but I think you are misusing some language. The partition function in physics is typically meant to mean the sum over all states. As such it will be a function of the couplings in the theory only. You are talking about the contribution of each state to the partition function. A different beast altogether.
This is a simple, straightforward calculation for a computer. Now graph the value of the partition function along this real number line. It is a fractal. Period. It's fractal, just plain, undeniably fractal. End of Story. (Almost). So, where does the smoothness come from? Well, when one calculates the expectation value of some operator, e.g. the mean-square magnetization, or the heat capacity, or whatever, one must integrate over *all* of the configuration space. One computes the heat capacity for each and every possible configuration, and totals them up. The result, when graphed as a function of the magnetic field or the temperature, is completely, totally and utterly smooth. The averaging over state space completely obscures the fact that the partition function on the state space is insanely fractal.
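A minimal numerical sketch of the construction being described (an illustration only, not taken from the article: it assumes the standard 1D n-state Potts energy H = -J \sum_i \delta_{s_i, s_{i+1}}, reads the configuration off the base-n digits of x in [0,1), and plots the per-configuration Boltzmann weight, i.e. the contribution of that single configuration to the partition sum, in line with the clarification above):

    import numpy as np
    import matplotlib.pyplot as plt

    # Illustration only: read the first `sites` base-n digits of x as a Potts
    # configuration and return its Boltzmann weight exp(-beta*H), with
    # H = -J * sum_i delta(s_i, s_{i+1}).
    def boltzmann_weight(x, n=3, sites=12, J=1.0, beta=1.0):
        digits = []
        for _ in range(sites):
            x *= n
            d = int(x)
            digits.append(d)
            x -= d
        energy = -J * sum(1.0 for a, b in zip(digits, digits[1:]) if a == b)
        return np.exp(-beta * energy)

    xs = np.linspace(0.0, 1.0, 4000, endpoint=False)
    ws = [boltzmann_weight(x) for x in xs]
    plt.plot(xs, ws, linewidth=0.5)
    plt.xlabel("x encoding the configuration in base n")
    plt.ylabel("exp(-beta * H)")
    plt.show()

The resulting graph is piecewise constant on base-n intervals and (approximately) repeats the same structure on finer and finer scales, which is the self-similarity being claimed; averaging over x, i.e. summing over configurations, washes this out, as noted above.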
Sure, and this is what I took you to mean when you said `the partition function': the sum over all states. I misunderstood what you meant, which made it sound a little crazy to me.
It is considerably harder to visualize the state space for the 2D and higher dimensional models. At best, one can take slices through this state space. When one takes slices through the state space, one sees a collection of "random" ferromagnetic domains. Slice and dice any way you want, you will 99.999999% of the time see a "random" set of domains, and never see some nice geometric pattern. The partition function for the 2D case is also highly fractal, but it's also higher-dimensional, and is not easily visualized in the 3D world we live in.
BTW, just to be clear, there is even a kind-of box-counting that appears naturally. Consider, for example, a ten-state Potts model whose configuration is ...xxxx9345xxxxx.... Here, xxx is a "don't care" value, and when talking about the expectation value of some operator, the ...xxx... means "integrate over these states". So I'm interested in the expectation value of some operator where four locations in the middle have the values "9345", and I'm going to average over all other possible values for the x's. This configuration is known as a cylinder set, and the set of all cylinder sets provides a topology, or more formally, a sigma algebra for the model. There is a natural measure associated with the cylinder sets, and specifically, the open set ..xxx9xxx is much larger than the set xxx93xxx which is much larger than xxx934xxx. As I insist on a certain configuration for certain nodes, I am picking out a smaller and smaller subset out of the total set of all possible configurations. The cylinder sets give me the sense of the measure or size of the sets needed to apply the concepts of measure theory. But having access to a collection of sets of different sizes also allows one to perform box-counting type fractal-dimension calculations: one can explore progressively smaller parts of the total phase space, and watch what happens to averages as one does so. The point here is that by applying a bit of mathematical rigor to the problem, one can in fact attach notions that come from concepts like fractal dimension to the statistical mechanics of lattice models. I believe such notions can be made rigorous for the quantum cases as well, and I furthermore believe that most of the scaling phenomena, including the renormalization group, can be entirely described in terms of the scaling of the fractal dimension of the underlying partition function on the phase space. I also believe that this idea is not new, and has been explored in the mathematical physics literature, but I know not where. linas 17:11, 22 August 2006 (UTC)
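One compact way to write the measure being described (a sketch; it is just the Gibbs measure of a cylinder set):

    \mu\big([\,s_{i+1}=a_1,\dots,s_{i+k}=a_k\,]\big) = \frac{1}{Z}\sum_{\{s\}:\;s_{i+1}=a_1,\dots,s_{i+k}=a_k} e^{-\beta H(s)} ,

which reduces to n^{-k} in the n-state model when there is no interaction, and in general shrinks as more sites are pinned, exactly as described above.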
All very interesting, though is it in part speculative? Do you think that all CFTs, in arbitrary dimension can be understood in terms of fractals? Perhaps you are right. I think I'll read some more about it now, but it is not something to insert early on in the scale invariance article, I don't think. Agreed?
As to asking me whether there are `two kinds' of scaling dimension in scale invariant theories. Well I would say there are classical scaling dimensions in a scale-invariant classical theory, and the (possibly) anomalous scaling dimensions of operators in a CFT. The latter may be related in certain cases to fractal dimensions, and the appearance of these fractal dimensions can probably be understood physically by thinking about other physical problems in the same universality class. I agree that CFT should be in a separate article (it is, but is currently weak).
You don't seem to have responded to my comment above on dimensional analysis. If you think that all dimensionally-correct classical field theories are also scale-invariant, that is a serious misunderstanding. Explaining why not is the point of the first part of this article. So please do not merge with dimensional analysis. All the best--131.111.8.103 08:31, 23 August 2006 (UTC)
Ehh. Please note: I have never studied CFT. Nor have I ever studied random networks. I last studied QFT 20 years ago, in grad school. I now work as a computer programmer. What I write above is a recap of what I remember from grad-school colloquiums and lectures back then. A few years ago, I rediscovered an interest in mathematics, and have been writing in WP primarily about basic math topics. I have no particular interest in editing this article, although you've re-awakened my interest in the structure of phase space. I'll just say it again: I think there are two distinct types of scaling. In one, one has a very classical scaling of fields and dimensions, with scaling exponents that are integers or the occasional square-root that might sneak in. Then there's the other kind of scaling, originally discovered and discussed in the context of phase transitions in stat mech, but now seen to have a much broader applicability, and known, by Fields medalists among others, as having something to do with fractals and fractal dimensions. I find the connection interesting, but it's highly unlikely that I'll ever find myself in an employment situation where I'll be allowed to read up on such topics. C'est la vie. linas 14:05, 23 August 2006 (UTC)

Reordering

OK, I have done what I suggested above. I think in a way you gave your tacit approval :) See what you think now. I have tried to explain that fractal dimensions *can* sometimes be critical exponents. This needs expansion, perhaps an explicit example.

Anyway, I think the ordering of the page is better now, though perhaps there will be more iterations. You could try to convince me that self-similarity/fractals can appear first, but we'd need to expand the stuff in fractals referring to physical processes (e.g. phase transitions and critical exponents) since at the moment I think that is better understood in the CFT section.

I am reading more about the SLE approach to various 2d CFTs. I agree with your assertion that fractal dimensions in these cases can be related to the highest weight states in the CFT. I would not be shocked if similar things work for the 1d Potts model, though I would be kind of surprised if the same relations hold for CFTs in d>2. I see there is some literature in specific cases, particularly by Fields medallists....

This is fascinating stuff, but I think we need to explain what CFTs are, and what fractals are (or at least introduce them both and point people to the right pages) first, and then perhaps introduce some of the connections between them. It would also be at home on the CFT and fractals pages, but in particular the CFT page needs a lot of work. If you want to we could make progress on that over the next few weeks---I will certainly start editing more on it when I get the chance, though the thesis is pressing. All the best--Jpod2 09:27, 23 August 2006 (UTC)

Critique (moved from my talk page)

I think the scale invariance article is looking quite decent, now. See what you think if you have the chance?

Looks good. Minor criticism: It is usually preferred that WP articles avoid expository style (first person, active voice), e.g. "Our first example is...", "we have considered...", and instead use an encyclopaedic style (third person, passive voice), e.g. "An example is...", "Another possibility is...".
Another minor point is that dimensional regularization could be mentioned. To make divergent integrals in QFT work, one pretends that one is working in 4+epsilon dimensions. How does the epsilon affect the scaling?
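As a sketch of how the epsilon enters the scaling (the standard one-loop statement, not anything specific to this article): in \phi^4 theory in d = 4 - \epsilon dimensions the coupling acquires a classical dimension and the beta function becomes

    \beta(\lambda) = -\epsilon\,\lambda + \frac{3\lambda^2}{16\pi^2} + \dots ,

so besides the Gaussian fixed point there is the Wilson-Fisher fixed point at \lambda_* \simeq 16\pi^2\epsilon/3, at which the theory is scale invariant with epsilon-dependent anomalous dimensions.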
In the section on Ising models, there's a hint that eta might be something other than an integer, but no explicit statement one way or the other. There's a hint that it can be calculated, but just to make things concrete it would be nice to have an explicit value: for example, for the correlation function, eta = 0.2452345 (if you can pull some example value from somewhere).
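For the record, the standard definition and the known values that could be quoted (the d = 2 value is exact, the d = 3 value approximate):

    \langle\sigma(r)\,\sigma(0)\rangle \sim r^{-(d-2+\eta)} \quad \text{at } T = T_c, \qquad \eta = \tfrac{1}{4}\ (d=2), \qquad \eta \approx 0.036\ (d=3).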
The cosmology scaling example is actually an example of scaling of the power spectrum of noise. Here, white noise is scale invariant in that it has equal amounts of power at all frequencies. Brown noise is the noise of Brownian motion, and has a 1/f^2 scaling with frequency. Then there's "fractal noise" (Mandelbrot noise??) with 1/f scaling.
These "noise" examples remind me that one should distinguish random process (stochastic process) from fractals. Stochastic processes are "samples" or "snapshots" or "slices" taken of the total space of configurations: a given random net is just a snapshot of all possible nets. Given a set of such samples, one can define scaling by talking about the likelyhood of choosing a particular sample. This should be distinguished from fractals, which are specific samples that are also self-similar. That is, "noise" has scaling properties, but is technically not a fractal; it is only approximately self-similar. The "approximate" comes from the idea that "all random samples look alike", but that some random samples are more likely to occur than others.
The classical scaling results from statistical mechanics are about the likelihood of sampling some particular random configuration out of the "canonical ensemble" of all configurations. My (so far unsupported) claim about the 1-D Potts model (and more general models) is that while the "canonical ensemble" is the collection of all "random" field configurations, the expectation values of operators are self-similar as a function of these configurations ... they are precisely fractal, and not just "random".
I've scratched the surface of the SLE lit; superficially, SLE seems to be talking about scaling in the sense of noise, rather than in the sense of exact self-similarity of fractals. But I don't know, I got distracted in my reading. linas 16:37, 27 August 2006 (UTC)
Thanks for that. I like most of your changes. I think you've persuaded me that the scale invariant functions go at the beginning.
I'm somewhat ambivalent about the stochastic and universality sections appearing *before* scale-invariant QFT, partly because I've tried to link the classical FT and QFT sections quite closely, and those new sections split them up. Perhaps those sections could appear after QFT, with some rewording? Or after phase transitions? I'm not sure right now.--Jpod2 19:48, 27 August 2006 (UTC)
Also, the computer vision material seems out of place in the stochastic section, but perhaps you know more about that than me. Thanks for the pointers on style as well, and I will insert more about eta, perhaps its approximate value in D=3 and the exact result in D=2.
I think at the moment that the universality section quite easily could be moved to after QFT (where the RG is introduced) and perhaps even after phase transitions (as in a way universality extends and unifies the ideas introduced in that section). For example, we can give examples of other theories in the same universality class as the Ising model, to illustrate precisely the kind of thing you are talking about. This to me makes more sense after we've introduced Ising as a canonical example of a PT--Jpod2 19:59, 27 August 2006 (UTC)
I see that you wanted to couple the field theory sections together. But I want to argue against this, for several reasons: 1) it is less technical 2) it is "universal", in that it is known to a lot of people who have only the dimmest idea (or maybe no idea) of what QFT or Ising models are.
I wrote the section on universality as an attempt to motivate why scale-invariant field theories are interesting topics of study. I wrote the section on stochastic processes to bridge across the classical and the quantum field theories: the crucial idea that one goes from particular field configurations that are solutions to the equations of motion, to averages over all field configurations. I suppose the section on universality could be shortened: I just copied all of it into the article on universality. linas 20:57, 27 August 2006 (UTC)
Hi. OK, those are fair points. I will have to think about it further. I think the stochastic processes section as it stands doesn't make a huge connection with the QFT, though. Partly because I have not emphasised the partition function/path integral approach to quantum field theory in the QFT section, but also because I don't think the noise stuff is *that* closely related. The universality section I think is more of a bridge, because there, as we know, you can actually describe these processes with particular CFTs (e.g. the Ising model). I don't think there is such a direct connection between noise and CFT.
Hence perhaps these things don't need to split up the field theory sections? Plus I don't know if the computer vision belongs in there, that seems out of place.
On the other hand I agree that my preferred approach might scare people off too early. I wonder if we should discuss something about phase transitions and universality even earlier on (perhaps in a longer introductory section), and then at a more technical level after the Ising model example? I.e. something like `fluctuations at a phase transition are scale-invariant, many physical processes display the same universal behaviour, described at a technical level by CFT etc' at the beginning. Then more later, after CFT and the Ising model? I don't know.--Jpod2 21:54, 27 August 2006 (UTC)

Boltzmann distribution

I'm not sure this particularly belongs there in that context. Why does it? --Jpod2 21:58, 27 August 2006 (UTC)

New order of development

So what about a longer discussion at the beginning, introducing roughly four different areas of scale invariance:

(1) scale-invariant functions in mathematics
(2) scale-invariant probability distributions for random processes
(3) scale-invariant classical field theories describing classical physics with no inherent length scale
(4) scale invariance of the fluctuations in phase transitions and QFT; introduce the idea of universality, copy some of the text from your universality section, and introduce the quantum field theory/stat mech connection.

Then more on each of these in the sections we already have, in the order above, but (a) with computer vision and fractals removed from the stochastic section and (b) the remainder of universality discussed after phase transitions.

I guess I am essentially moving stochastic before classical field theory, because maybe it follows on from the discussion of scale-invariant functions, and there is not so much of a direct link with CFT. I am also splitting universality into the intro and outro; the rest you have copied into the universality article.

How does that sound?

Btw, I notice the Ising model and Potts model articles don't seem to have much on the actual exponents; I might look into copying some of this stuff over--Jpod2 23:54, 27 August 2006 (UTC)

Domain of λ

Could we qualify the domain of λ? I'm concerned about the dilation, x → λx.

Conr2286 03:19, 28 September 2007 (UTC)

Brief Appreciation Statement

Apologies for interrupting this thread, but... Thank you for the very well written article. As a lay person without appreciable math I can read this article with some understanding. That is because the non-math narrative statements are succinct, very lucid, and relatively jargon-free or well linked. Compliments to Jpod2 and linas on their exemplary communications, not only within the resultant article, but within their collaborations on this talk page as well. Again, thanks

I think the first sentence is wrong.

"In physics, mathematics, statistics, and economics, scale invariance is a feature of objects or laws that does not change if scales of length, energy, or other variables, are multiplied by a common factor." is, as of Jan 7, 2015 the first sentence in this article. I have two issues with this sentence. First: describing scale invariance as what 'does not change' seems circular to me. Isn't it more correct to say scale invariance describes (or defines) a characteristic of (or feature of) objects or laws that doesn't change when the the scales of the independent variables change? That is, rather than describing IT as the characteristic that is invariant, isn't it be more correct that it describes that characteristic? Second, the definitions in the references require both the factor and its degree to be Real numbers. This should be made explicit, calling it a factor (coefficient) is a pretty weak constraint. Two other comments: the example of the invariance of a logarithmic spiral to scaling is both too complicated and too simple. It is too complicated in that I see no reason to use (1/b)ln(r/a) when ln(r) will do just as well. Why include two irrelevant parameters? Second, θ=ln(r) and θ=ln(λr) do not result in the same θ. Its obvious to anyone familiar with log functions that ln(λr)=ln(r)+ ln(λ)= ln(r)+ c. What needs more explanation, imho, is why the resulting curves are considered identical. I believe the statement "Allowing for rotations ..." doesn't sufficiently explain the point. How is it that if you "allow" rotations, then the fact that ln(r)=ln(λr)-c means θ is scale invariant?? (I also note that ln(r) and ln(λr) when λ is negative do not have any overlap in domains of r.) One other comment, it would be useful for two other simple examples to be included: one where the degree is fractional (irrational? transcendental?) and one with a positive integer coefficient, > 1.Abitslow (talk) 22:14, 7 January 2015 (UTC)[reply]