Talk:Fixed-point iteration

cos(x) and sin(x)[edit]

Why does cos satisfy the assumptions of the Banach fixed-point theorem, while sin does not? Both functions have no Lipschitz constant L < 1 on all of ℝ, unless one restricts the domain of cos, but then the statement "for any starting point x_0" in combination with satisfying the assumptions of the Banach fixed-point theorem cannot be made anymore. However, those assumptions are always satisfied after the first iteration step, because cos has a Lipschitz constant L < 1 on [−1, 1]. This phenomenon should be mentioned explicitly. — Preceding unsigned comment added by 213.22.81.218 (talk) 21:50, 14 September 2022 (UTC)[reply]
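
A quick numerical illustration of the point above (assuming, as the stripped formulas suggest, that the comparison is between cos and sin): after a single step every iterate of cos lies in [−1, 1], where |cos'(x)| = |sin(x)| ≤ sin(1) < 1, so the Banach assumptions hold from the second iterate on. This is only a sketch, not text proposed for the article.

    import math

    # After one step, x1 = cos(x0) lies in [-1, 1] for every real x0, and on
    # [-1, 1] the slope |cos'(x)| = |sin(x)| is at most sin(1) ~ 0.841 < 1,
    # so from the second step on the Banach assumptions are satisfied.
    print("Lipschitz constant of cos on [-1, 1]:", math.sin(1))

    for x0 in (-100.0, 0.5, 42.0):
        x = x0
        for _ in range(100):
            x = math.cos(x)
        print(f"start {x0:>6}: x_100 = {x:.12f}")  # all runs approach ~0.739085 (the Dottie number)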

What examples should be given in this article?[edit]

The Numerical analysis article gives one example of fixed point iteration: . This article can accommodate several examples and can illustrate them with graphics (maybe animations too?).

The example in the NA article can be complemented here by comparing it with the superficially similar iteration: .

The Fixed point (mathematics) article has a nice graphic of an iteration of sin(x) that could be used to show that FPI can be used on many types of functions.

Examples of "what can go wrong" would also be nice.

Of course, examples are only part of the article.

--Jtir 14:13, 8 October 2006 (UTC)[reply]

I redirected this for now. Feel free to undo that if more example are added. Oleg Alexandrov (talk) 01:31, 9 October 2006 (UTC)[reply]
An additional nice example would be the calculation of the fixed points of the function and the stability of the iteration. I've written this example in the Spanish version of the fixed-point article. I can provide the Python code too if you like. Mlliarm (talk) 12:43, 1 November 2022 (UTC)[reply]
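
The Python code offered above is not reproduced here; as a placeholder, here is a minimal sketch of such a fixed-point-plus-stability calculation for a hypothetical example map g (the actual function discussed in the Spanish article did not survive in this comment):

    import math

    def fixed_point(g, x0, tol=1e-12, max_iter=1000):
        # Iterate x <- g(x) until successive iterates agree to within tol.
        x = x0
        for _ in range(max_iter):
            x_new = g(x)
            if abs(x_new - x) < tol:
                return x_new
            x = x_new
        return x

    def is_attracting(g, x_star, h=1e-6):
        # Stability check: the fixed point attracts if |g'(x_star)| < 1 (central difference).
        return abs((g(x_star + h) - g(x_star - h)) / (2 * h)) < 1

    g = lambda x: math.exp(-x)               # hypothetical example map
    x_star = fixed_point(g, 1.0)
    print(x_star, is_attracting(g, x_star))  # ~0.5671432904, True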

Intuitively to me (and thus likely wrong?), are not most diffusion equation solvers, the relaxation method for differential equations, and quantum computing all examples of fixed-point iteration? 104.187.53.82 (talk) 15:36, 29 October 2023 (UTC)[reply]

"fixed point iteration" is a technique in theoretical computer science[edit]

(copied from User talk:Oleg Alexandrov) --Jtir 06:17, 9 October 2006 (UTC)[reply]
Actually "fixed point iteration" is a technique in theoretical computer science: definition by recursion is regarded as solution of a fixed point problem g = F(g) and iterates of F converge to the fixed point. This technique has various flavors: an order theoretic one and a metric space one. --CSTAR 06:06, 9 October 2006 (UTC)[reply]

How are the iteration and the function related?[edit]

The third example has me wondering about the relation between the iteration and the function — why this function and not some other? Put another way, if someone gives you an iteration, can you say what function corresponds(?) to it?
This article is the place to explain the relationship between the two.

"... - this function is not continuous at x=0, and in fact has no fixed points."
I'm not sure what the dash in the third example means. Can it be replaced with the word "because"? --Jtir 18:05, 10 October 2006 (UTC)[reply]

The function f and the iteration rule are related by x_{n+1} = f(x_n). The point of the third example is to show that continuity of f is a necessary condition; if f is not continuous then the iterations of f can converge to a value that is not a fixed point of f. Gandalf61 14:39, 11 October 2006 (UTC)[reply]
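
Since the article's third example itself is not quoted in this thread, here is a minimal hypothetical map with the properties Gandalf61 describes: it is discontinuous at x = 0, has no fixed points, and yet its iterates converge.

    def g(x):
        # Hypothetical illustration: g(x) = x/2 for x != 0 and g(0) = 1.
        # g has no fixed point, but the iterates x_{n+1} = g(x_n) converge to 0.
        return 1.0 if x == 0 else x / 2

    x = 1.0
    for _ in range(100):
        x = g(x)
    print(x)        # ~7.9e-31: the iterates tend to 0 ...
    print(g(0.0))   # ... but g(0) = 1, so the limit 0 is not a fixed point of g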

an example comparing FPI and Newton's method?[edit]

I tried computing the example x_{n+1} = sin(x_n) on a handheld calculator. I was down to about 0.2 when I got tired of pressing the sin button. The Further properties section could give an example comparing the number of iterations needed to reach some value using FPI with the number needed using Newton's method. --Jtir 18:25, 10 October 2006 (UTC)[reply]

The Newton iteration is a fixed point iteration. Loisel 22:37, 10 October 2006 (UTC)[reply]

I took that distinction from the article, which reads:
  • "Fixed point iteration is not always the best method of computing fixed points. For example, Newton's method ..."
--Jtir 22:47, 10 October 2006 (UTC)[reply]

Well obviously the article is missing a point. Loisel 00:24, 11 October 2006 (UTC)[reply]

You are right. Newton's iteration is a fixed point iteration. I fixed that in the article. Oleg Alexandrov (talk) 03:27, 11 October 2006 (UTC)[reply]
Sorry, I think I overwrote your text when I got an edit conflict. I had written up a bunch of stuff which seemed to moot your changes. Loisel 04:27, 11 October 2006 (UTC)[reply]
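
For what it is worth, a rough sketch of the comparison asked for above; the stopping value 0.2 mirrors the hand-calculator experiment, and the Newton step for solving sin(x) = 0 is written as the fixed-point map x - tan(x), which also illustrates Loisel's point that Newton's method is itself a fixed-point iteration.

    import math

    def count_steps(step, x0, target=0.2):
        # Count iterations of x <- step(x) until |x| drops below target.
        x, n = x0, 0
        while abs(x) >= target:
            x = step(x)
            n += 1
        return n

    x0 = 1.0
    plain = count_steps(math.sin, x0)                    # x_{n+1} = sin(x_n)
    newton = count_steps(lambda x: x - math.tan(x), x0)  # Newton for sin(x) = 0
    print(plain, newton)   # roughly 70-odd plain iterations vs. a couple of Newton steps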

example of the Babylonian method[edit]

Thanks for adding the Babylonian method to the examples. I noticed that it is the only example that does not give the iteration. --Jtir 17:03, 12 October 2006 (UTC)[reply]
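
For reference, the iteration the Babylonian (Heron's) method uses to approximate the square root of a is x_{n+1} = (x_n + a/x_n)/2; a minimal sketch:

    def babylonian_sqrt(a, x0=1.0, n_steps=8):
        # Babylonian method: the fixed-point iteration x <- (x + a/x) / 2.
        x = x0
        for _ in range(n_steps):
            x = (x + a / x) / 2
        return x

    print(babylonian_sqrt(2.0))   # ~1.4142135623730951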

complex fixed points?[edit]

not sure if this is the right place to ask this but here goes anyway: obviously functions that don't cross f(x)=x shouldn't have fixed points, like ln(x). But you can use fixed point iteration on ln(x) to obtain an attractive complex fixed point. Does anyone have any information on complex fixed points?--uhvpirate 04:55, 12 April 2007 (UTC)[reply]

@uhvpirate: I think these notes contain a great deal! Mlliarm (talk) 13:21, 1 November 2022 (UTC)[reply]
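
A quick numerical check of the observation above (the starting value and iteration count are arbitrary choices): iterating the principal complex logarithm does settle on an attracting complex fixed point.

    import cmath

    # Iterating z <- log(z): once an iterate goes negative, the principal complex
    # logarithm takes over and the sequence is drawn to the attracting fixed point
    # of log (the solution of z = log z), roughly 0.3181 + 1.3372i.
    z = 2.0 + 0.0j
    for _ in range(200):
        z = cmath.log(z)
    print(z)           # ~ (0.318131505+1.337235701j)
    print(abs(1 / z))  # |(log)'(z)| = |1/z| ~ 0.73 < 1 at the fixed point, so it attracts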

Toilet flushing[edit]

I had some diarrhea today and when I flushed I wondered: Can the question of whether the water spirals clockwise or counter-clockwise finally be answered by using a fixed-point problem? —Preceding unsigned comment added by 130.85.237.88 (talk) 11:55, 14 November 2007 (UTC)[reply]

> Can the question of whether the water spirals clockwise or counter-clockwise finally be answered by using a fixed-point problem?
This has to do with the Coriolis pseudo-force. Don't know if it's related to fixed points. Mlliarm (talk) 20:50, 31 October 2022 (UTC)[reply]

Methodological Error[edit]

I have updated the "Fixed Point Iteration" page with a "Methodological Error" section. I have added some points which help to determine whether the iteration will converge or diverge. When we calculate the exact root r of a given function, we can plug the root value into g'(r). After plugging in the value, we check whether the calculated value is < 1 or > 1. If |g'(r)| < 1 then the scheme converges, else it is diverging. If there is divergence then the iteration should be terminated. Pooja929 21:09, 07 October 2010

I have removed the new section because it was mostly cut-and-pasted from the source; it was not consistent with the notation or flow of the rest of the article; and it was also unencyclopedic in tone - Wikipedia strongly discourages use of the second person ("we", "us") in articles, see WP:TONE. Also, I think there is a logical error in that section - the method it proposes for determining the convergence of an iterative process depends on knowing a root of the iterated function, which can in turn only be found if the iterative process converges. So there is some circular reasoning here. Gandalf61 (talk) 08:02, 8 October 2010 (UTC)[reply]
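
For the record, the check Pooja929 describes can be written down in a few lines; this is only a sketch, and it indeed presupposes that the root/fixed point r is already known, which is exactly the circularity pointed out above.

    import math

    def converges_locally(g, r, h=1e-6):
        # The check described above: |g'(r)| < 1 at a known fixed point r,
        # with g' estimated by a central difference.
        deriv = (g(r + h) - g(r - h)) / (2 * h)
        return abs(deriv) < 1

    print(converges_locally(math.cos, 0.7390851332151607))   # True:  |-sin(r)| ~ 0.67 < 1
    print(converges_locally(lambda x: x * x, 1.0))           # False: |g'(1)| = 2 > 1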

Sample Code[edit]

Simple Matlab code has been added to the page. PoojaP 10:02 A.M, 07 November 2010

When will the fixed-point iteration converge?[edit]

This article gives some examples in which the fixed-point iteration converges and diverges, respectively. But it does not state specifically when the iteration converges (so that it is the one we should use) and when it diverges (so it won't give us the fixed point). Of course there are a lot of fixed-point theorems that we can use to tell whether a fixed point exists, but for the fixed-point iteration, since the function has to be continuous (mentioned in example 4), I think we can add a section to this article about how to tell whether the iteration converges. I noticed in the Properties section there is a theorem that we can use: If a function f defined on the real line with real values is Lipschitz continuous with Lipschitz constant L < 1, then this function has precisely one fixed point, and the fixed-point iteration converges towards that fixed point for any initial guess x_0.

I suggest to add this paragraph:

Not every iteration converges to a fixed point. Therefore, when constructing a fixed-point iteration, it is very important to make sure it converges. There are several fixed-point theorems to guarantee the existence of the fixed point, but since the iteration function is continuous, we can usually use the following theorem to test whether an iteration converges or not: If a function f defined on the real line with real values is Lipschitz continuous with Lipschitz constant L < 1, then this function has precisely one fixed point, and the fixed-point iteration x_{n+1} = f(x_n) converges towards that fixed point for any initial guess x_0. One can generalize this to any metric space.

Proof of this theorem:

Since f is Lipschitz continuous with Lipschitz constant L < 1, for the sequence (x_n) defined by x_{n+1} = f(x_n) we have:

|x_{n+1} - x_n| = |f(x_n) - f(x_{n-1})| ≤ L |x_n - x_{n-1}|,

|x_n - x_{n-1}| ≤ L |x_{n-1} - x_{n-2}|,

|x_{n-1} - x_{n-2}| ≤ L |x_{n-2} - x_{n-3}|,

and so on, down to

|x_2 - x_1| ≤ L |x_1 - x_0|.

Combining the above inequalities yields:

|x_{n+1} - x_n| ≤ L^n |x_1 - x_0|.

Since L < 1, L^n → 0 as n → ∞.

Therefore (using the triangle inequality and summing the geometric series), we can show that (x_n) is a Cauchy sequence, so it converges to some point x*. Since f is continuous (being Lipschitz), taking the limit in x_{n+1} = f(x_n) shows that x* is the fixed point of f. XueGong (talk) 01:26, 17 September 2012 (UTC)[reply]
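
A quick numerical illustration of the proposed theorem (the contraction f(x) = cos(x)/2 and the iteration count are my own illustrative choices, not from the article): every starting point leads to the same fixed point.

    import math

    f = lambda x: math.cos(x) / 2   # |f'(x)| = |sin(x)|/2 <= 1/2, so Lipschitz with L = 1/2 < 1

    for x0 in (-50.0, 0.0, 3.0, 1e6):
        x = x0
        for _ in range(80):
            x = f(x)
        print(f"x0 = {x0:>10}: limit ~ {x:.12f}")
    # All four runs print the same value (about 0.4502), the unique fixed point of f.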

Your suggestions seem good and it looks like they have been implemented. As someone currently learning this method, I am confused that the fixed point is labeled x_0 and the initial starting point is also labeled x_0, indicating that the fixed point and the starting point are the same number, but this is clearly not true. Is it possible, if the initial guess is labeled x_0, that the fixed point not be labeled x_0 as well? Harlananelson (talk) 01:00, 23 October 2022 (UTC)[reply]
You are right, initial value and fixpoint are usually distinct, and should have distinct names. I'm going to fix this. Thanks for noticing! - Jochen Burghardt (talk) 09:33, 23 October 2022 (UTC)[reply]
 Done I used "x_fix" to denote fixpoints throughout the article (I hope I got every occurrence?). - Jochen Burghardt (talk) 09:55, 23 October 2022 (UTC)[reply]

Cambridge University comments[edit]

Cambridge University's Computational Projects IB manual (2014/15) says at the bottom of page 31: "There are also [numerical analysis] articles in Wikipedia, but that on fixed-point iteration is in need of some improvement as of July 2013." It does not seem to have changed much since then. --Rumping (talk) 00:03, 13 January 2015 (UTC)[reply]

Sternberg's book[edit]

Added "Dynamical Systems" in the resources. The first chapter ("Iteration and fixed points") is a really nice intro in fixed point iteration and a great study of the stability of Newton-Raphson algorithm. Mlliarm (talk) 21:05, 31 October 2022 (UTC)[reply]

Shashkin's book[edit]

Added the excellent introductory book by Yu. A. Shashkin to the bibliography. Mlliarm (talk) 12:43, 1 November 2022 (UTC)[reply]

India Education Program course assignment[edit]

This article was the subject of an educational assignment supported by Wikipedia Ambassadors through the India Education Program.

The above message was substituted from {{IEP assignment}} by PrimeBOT (talk) on 20:03, 1 February 2023 (UTC)[reply]