Talk:Ornstein–Uhlenbeck process


COMMENTS 1

I don't think the AR(1) process is the discrete analog of the Ornstein-Uhlenbeck process, as it has no mean-reverting characteristics. 212.39.192.3 (talk) 10:14, 30 December 2009 (UTC)[reply]
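
For reference, here is the standard link, written in the article's notation dx_t = θ(μ − x_t) dt + σ dW_t (that notation is an assumption on my part; the comment above does not fix one): sampling the OU process on a grid with spacing Δ gives exactly the AR(1) recursion

x_{t+Δ} = e^{-θΔ} x_t + μ (1 − e^{-θΔ}) + ε_t,   with ε_t i.i.d. N(0, (σ²/(2θ)) (1 − e^{-2θΔ})),

whose autoregressive coefficient e^{-θΔ} lies in (0, 1), so each step pulls the value back toward μ; in that sense a stationary AR(1) is mean-reverting.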


Can someone clarify how, in the equation, r_t goes from being inside the integrand of a random variable to having a set value? I found this a bit confusing.
—Preceding unsigned comment added by 209.6.49.37 (talk) 17:53, 23 November 2009 (UTC)[reply]

The definition of the Ornstein-Uhlenbeck process is not correct. The Ornstein-Uhlenbeck process does not have to be mean-reverting; it can have a general drift process, i.e.

However, it is possible to define mean-reverting Ornstein-Uhlenbeck processes like

Usually an Ornstein-Uhlenbeck process refers to processes where the process itself does not appear in the stochastic part of the SDE that describes its dynamics, i.e. σ does not depend on r.
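
A plausible reading of the contrast being drawn above, offered only as a guess from context and not as the original poster's own equations: a general-drift form such as dr_t = a_t dt + σ dW_t versus the mean-reverting form dr_t = θ(μ − r_t) dt + σ dW_t.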

I disagree. The simplest definition of an O-U process is given by dr_t = −θ r_t dt + σ dW_t, which is mean-reverting to zero (did you have a typo in your definition of the OU process?). The definition in the article is just a more general case of the OU process than we are normally used to seeing. If you want, just set μ = 0.
See also http://pauillac.inria.fr/algo/csolve/ou.pdf. If you think this is incorrect, could you please provide a better reference?

How about Stochastic Differential Equations by Bernt Oksendal? There, O-U and mean-reverting O-U processes are distinguished. Your point above about setting μ = 0 is valid, but it makes pedagogic sense to introduce the μ = 0 case first, I think. —Preceding unsigned comment added by 131.111.16.20 (talk) 08:57, 13 April 2009 (UTC)[reply]


The covariance function for this process looks strange and quite different from earlier versions of this article, in particular the version released last summer.

Indeed, when |s − t| → ∞, cov(r_s, r_t) should tend to 0, as the autocorrelation effect decreases with the time interval.
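
For what it is worth, the stationary form of the process does have this property: its covariance is cov(r_s, r_t) = (σ²/(2θ)) e^{-θ|t-s|}, which tends to 0 as |t − s| → ∞ (notation θ, σ as in the article). The discrepancy with what the article shows is discussed further in the "Stationary" threads below.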


The article fails to mention what W_t is!

Indeed. I added that W_t refers to the Wiener process. -- Jitse Niesen (talk) 05:49, 26 June 2006 (UTC)[reply]

The text below the graph

The text below the graph should read to be consistent with the legend on the actual graph itself. I made this correction on Aug. 30, 2006 (ko4seki)

The text below the graph gives values for the parameter but fails to mention what it is. Tim (talk) 07:43, 7 January 2008 (UTC)[reply]
Does it really make sense to write a=0 (a.s.) for a simulated trajectory? Since it's a simulation you know the exact value of a? —Preceding unsigned comment added by 85.24.189.238 (talk) 16:06, 29 December 2010 (UTC)[reply]
Also it is nonsense to label a single trajectory as "normally distributed". A single trajectory has a single starting point. An ensemble of trajectories has a distribution. — Preceding unsigned comment added by 199.89.103.11 (talk) 12:06, 17 August 2012 (UTC)[reply]

The whole point is that the "disturbance" that drives the trajectory has a distribution. That distribution could be uniform, two-sided exponential, two-sided triangular, Gaussian, or whatever. Please do not give us anything about "an ensemble of trajectories has a distribution." That is beside the point.
Since the process is Markov, you can pick out any point along the curve, anywhere, and then you have no way to use the past to predict the future of the curve. That is the whole idea of a Markov process.
The only things that you know anything about are 1) the location of the point that you chose, and 2) the distribution of the disturbance that drives the curve along.
98.67.108.12 (talk) 01:47, 26 August 2012 (UTC)[reply]
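
Concretely, the Markov property described above can be written out (using the article's θ, μ, σ): given the current value X_s = x, the future value satisfies

X_t | X_s = x  ~  N( μ + (x − μ) e^{-θ(t-s)},  (σ²/(2θ)) (1 − e^{-2θ(t-s)}) )   for t > s,

so only the chosen point's location x and the law of the driving noise enter the prediction, exactly as stated.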

The O-U process is also the same as first-order, low-pass filtered white noise.

Just as the Wiener process is integrated white noise (non-leaky integrator), the Ornstein-Uhlenbeck process is RC low-pass filtered (a.k.a. "leaky integrator") white noise. This is electrical engineering language, but should somehow be included, right?
71.254.8.148 (talk) 04:29, 31 March 2009 (UTC)[reply]
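
A sketch of the correspondence, with notation invented here purely for illustration: if a white-noise voltage η(t) drives an RC low-pass filter, the output V obeys RC dV/dt + V = η(t), i.e. dV/dt = −(1/RC) V + η(t)/RC, which is the OU/Langevin equation with relaxation rate θ = 1/(RC). The non-leaky integrator (Wiener process) is the limit in which the restoring term vanishes, θ → 0.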

It probably should be included.
Michael Hardy (talk) 17:42, 13 April 2009 (UTC)[reply]

Make article accessible to physicists, engineers, etc.

Dear mathematicians who wrote this article. As a mathematician, I appreciate your work. But I am also a physicist and have to say that this article is hard to understand for specialists from other sciences, such as physics and engineering, who also use OUPs. (We do not have to try to make this article accessible to just everybody, but we really should include the other specialists' viewpoints.)

I have added a section on application of OUP in physics. This should give a start. Improvements are most welcome. If someone could also write a section on OUPs in engineering and signal processing (see above) that would be great.

So that's the new section:

Application in physical sciences: The OUP is a prototype of a noisy relaxation process. Consider for example a Hookean spring with spring constant k whose dynamics is highly overdamped with friction coefficient γ. In the presence of thermal fluctuations with temperature T, the length of the spring will fluctuate stochastically around the spring rest length x_0; its stochastic dynamics is described by an OUP with θ = k/γ, μ = x_0, σ = √(2 k_B T / γ). In physical sciences, the stochastic differential equation of an OUP is rewritten as a Langevin equation

dx_t/dt = −θ (x_t − μ) + σ ξ(t)

where ξ(t) is Gaussian white noise with ⟨ξ(t_1) ξ(t_2)⟩ = δ(t_1 − t_2).

Benjamin.friedrich (talk) 20:14, 25 April 2010 (UTC)[reply]
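
A minimal numerical sketch of this Langevin dynamics, using an Euler-Maruyama step; the function name and parameter values below are illustrative assumptions, not something taken from the article:

import numpy as np

def simulate_ou(theta=1.0, mu=0.0, sigma=0.5, x0=2.0, dt=1e-3, n_steps=10000, seed=0):
    """Euler-Maruyama discretization of dx = -theta*(x - mu)*dt + sigma*dW."""
    rng = np.random.default_rng(seed)
    x = np.empty(n_steps + 1)
    x[0] = x0
    for i in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt))  # Wiener increment, N(0, dt)
        x[i + 1] = x[i] - theta * (x[i] - mu) * dt + sigma * dw
    return x

path = simulate_ou()
# long-run sample mean and variance approach mu and sigma**2 / (2*theta)
print(path.mean(), path.var())

For the overdamped spring above one would set theta = k/gamma, mu to the rest length, and sigma = sqrt(2*k_B*T/gamma).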

STATIONARY 1

This is supposed to be a stationary process. At the very least, that should mean that the probability distributions of x_t and x_s should be the same. I had imagined that the covariance between values of the process at different times would depend on those times only through how far apart those times were, i.e. it would depend on s and t only through |s − t|. But the article says

for s < t, and I am suspicious. Should I be?

(OK, next I'll try working through the details myself and maybe figure out what I'm missing.) Michael Hardy (talk) 00:46, 26 April 2010 (UTC)[reply]

The confusion arises because, in this part of the article, the process is not being treated as stationary. The workings-out in this section assume that the process starts from a value x_0 at time zero. It would be good if something were done to be clearer about this. 193.62.153.194 (talk) 11:25, 28 April 2010 (UTC)[reply]
I see your point. The non-stationary part decays exponentially. Maybe there's something to be said for thinking about that, but the covariance formula for the stationary process has an appealing simplicity that helps one understand and remember what this process is about. Michael Hardy (talk) 01:48, 30 April 2010 (UTC)[reply]
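
Written out: for the process started at a deterministic value x_0 at time 0,

cov(x_s, x_t) = (σ²/(2θ)) ( e^{-θ|t-s|} − e^{-θ(t+s)} ),

so the second, transient term dies out as s and t grow, leaving the stationary covariance (σ²/(2θ)) e^{-θ|t-s|}, which depends only on |t − s| (notation θ, σ as in the article).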
This issue is very confusing. The second sentence says the process is stationary, but the definition given is of a non-stationary process. I suggest "stationary" -> "asymptotically stationary" to resolve this. Any thoughts? 128.243.253.117 (talk) 17:35, 26 July 2011 (UTC)[reply]
I suggest "Not stationary" as a reasonable approximation to the truth. — Preceding unsigned comment added by David in oregon (talkcontribs) 05:28, 25 April 2012 (UTC)[reply]
I wouldn't be so hasty; it appears from googling that it can be stationary or non-stationary. Probably better to expand on this. IRWolfie- (talk) 08:07, 25 April 2012 (UTC)[reply]

STATIONARY 2

The presentation in the Wikipedia article is very strange. A stationary process MUST have a mean value that is a constant, but the one given there is a function of time. There is no allowance for hand-waving that says, "Oh, the time-varying part decays away."

Furthermore, the autocorrelation function MUST depend only on the time difference (it is shift-invariant), and the one given here is a function of time. THIS IS ABSOLUTELY FORBIDDEN. Electrical engineers call this the "autocorrelation function", so ignore any business about "autocovariance".

For a stochastic process that is Markov, Gaussian, and stationary, the autocorrelation function MUST work out to be K exp[−p|t − s|], where K and p are given constants, and t and s are two instants in time. This is the only way that it can work out.

Furthermore, the power spectral density should exist in this case and it MUST not be a function of time. That power spectral density is found by the Wiener-Khinchine theorem by taking the Fourier transform of the autocorrelation function.

Some mathematicians, etc., might not give a hoot about the power spectral density, but many engineers, physicists, and chemists do. Besides electrical engineers, many chemical engineers, mechanical engineers, and nuclear engineers care very much about spectral densities. Note that I didn't write "most", but "many". Many physicists do, too.
98.67.108.12 (talk) 01:32, 26 August 2012 (UTC)[reply]
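
For completeness, and in the article's notation: the stationary OU process has autocovariance R(τ) = (σ²/(2θ)) e^{-θ|τ|}, so by the Wiener-Khinchin theorem its power spectral density is the Lorentzian

S(ω) = ∫ R(τ) e^{-iωτ} dτ = σ² / (θ² + ω²),

which is indeed independent of time, as required above.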

Other strange statements:

"Over time, the process tends to drift towards its long-term mean: such a process is called mean-reverting."
Stationary stochastic processes do not do this. They are just as likely to drift away from their mean values as they are to drift toward them. In fact, if a stationary stochastic process ever reaches its mean value, it is just as likely to "keep right on going" and drift away from its mean value in the opposite direction. There are no "magic pieces of string" that tell them which way to go. It is just that in the long run, they tend to spend about as much time above their mean values as below them.
On the other hand, nonstationary processes can be different. However, this article says that it is all about stationary processes.
98.67.108.12 (talk) 02:19, 26 August 2012 (UTC)[reply]
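
For what it is worth, the sense in which the article presumably means "mean-reverting" is about the conditional mean rather than about sample paths being pushed back: for the OU process, E[x_t | x_0] = μ + (x_0 − μ) e^{-θt}, which relaxes toward μ, whereas for a driftless random walk the conditional mean stays at x_0 forever. Individual realizations still wander to either side of μ, as the comment says.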

External Links

Hi there,

Both links from M.A. Van den Berg are dead; can someone please post a live link, as I'd really like to see them, especially "Calibrating the Ornstein-Uhlenbeck model".

thanks —Preceding unsigned comment added by 138.40.207.93 (talk) 15:06, 1 April 2011 (UTC)[reply]

Alternative representation

I have undone the previous edit to the "Alternative representation" section. I believe the modification was incorrect. At the least, the last modification was inconsistent with the previous content, which had been up there for some time.

On the other hand I do admit the expression in question could be improved. KeithWM (talk) 12:28, 11 June 2012 (UTC)[reply]

Some people have difficulty in subdividing

Some people have difficulty in subdividing:
Yes, they cannot divide things up like this:
1. The stationary Ornstein-Uhlenbeck process, and
2. Nonstationary Ornstein-Uhlenbeck processes.

These are what we need in this article.
What we do not need is a bunch of statements about stationarity, then a bunch of equations about something that is OBVIOUSLY nonstationary.
It is amazing that anyone would even attempt to do this.
98.67.108.12 (talk) 02:02, 26 August 2012 (UTC)[reply]

The process has stationary independent _increments_, just like Brownian motion and all other diffusions. That is to say, X(t+s) - X(s) has a distribution that is independent of s, and X(t_2) - X(t_1) is statistically independent of X(t_1) - X(t_0). Perhaps that is where the confusion originally arose. 108.28.189.95 (talk) 22:23, 6 February 2024 (UTC)[reply]

Alternative definitions

Gardiner seems to use a different diffusion constant (factor of 2) from the text (section on the Fokker-Planck equation). I don't have Risken here. Can someone check that everything is fine? If there really are different definitions in use, we should probably mention it. — Preceding unsigned comment added by 128.243.2.30 (talk) 13:10, 13 December 2022 (UTC)[reply]
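
One common source of a factor of 2 between texts, offered as a possibility to check rather than as a statement about what Gardiner or Risken actually print: with the SDE dx_t = −θ x_t dt + σ dW_t, the Fokker-Planck equation is

∂_t p = θ ∂_x (x p) + (σ²/2) ∂_x² p,

and some books call D = σ²/2 the diffusion constant (writing the last term as D ∂_x² p), while others call D = σ² the diffusion constant (writing it as (D/2) ∂_x² p). Whichever convention the article adopts should probably be stated explicitly.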

Physical units

I would appreciate it if someone helped specify the physical units of the parameters and variables of the SDE. If I ask ChatGPT or the like, I get two contradictory answers to the question. Bodait (talk) 02:25, 21 February 2024 (UTC)[reply]
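
A dimensional-analysis sketch, assuming the article's SDE dx_t = θ(μ − x_t) dt + σ dW_t and writing [x] for the unit of x and T for time: μ has unit [x]; θ has unit 1/T; since Var(W_t) = t, the increment dW_t has unit √T, so σ has unit [x]/√T. In the Langevin form dx/dt = −θ(x − μ) + σ ξ(t), the white noise ξ(t) has unit 1/√T (because ⟨ξ(t_1) ξ(t_2)⟩ = δ(t_1 − t_2) and the delta function has unit 1/T), which gives the same unit [x]/√T for σ.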