Wikipedia:Reference desk/Archives/Mathematics/2007 May 20

Welcome to the Wikipedia Mathematics Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


May 20

Implicit equations and distance

Given a curve (or surface) defined implicitly by f(x,y)=0 (or f(x,y,z)=0) and a point P = (x_P, y_P) (or P = (x_P, y_P, z_P)), what does f(x_P, y_P) (or f(x_P, y_P, z_P)) correspond to?

If the curve is a line, the equation is ax+by+c=0, and ax_P+by_P+c corresponds to a certain (perpendicular) distance from the point to the line (as ax_P+by_P+cz_P+d does for the distance from a point to a plane).

What generalisation is possible, if any? --Xedi 19:45, 20 May 2007 (UTC)

f needn't mean anything very much, in general. For example, take your line ax+by+c=0. This can be rewritten as e^(ax+by+c)-1=0, but now f(x,y)=e^(ax+by+c)-1 means nothing in particular. The most useful thing I can think of is that (as long as f is well-behaved), the gradient of f is a vector orthogonal to the surface f=0. For example, taking f(x,y,z)=x^2+y^2+z^2-1, we get that f=0 is the unit sphere, and ∇f is (2x,2y,2z), which for (x,y,z) on the sphere is a vector pointing directly outwards. Algebraist 20:33, 20 May 2007 (UTC)
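A quick numerical sketch of this (hypothetical helper code, not from the thread) checks that the gradient of f(x,y,z)=x^2+y^2+z^2-1 is radial, hence orthogonal to the sphere, at a sample point:

    import numpy as np

    def f(p):
        return p @ p - 1.0        # x^2 + y^2 + z^2 - 1

    def grad_f(p):
        return 2.0 * p            # the gradient (2x, 2y, 2z)

    p = np.array([1.0, 2.0, 2.0]) / 3.0    # a point on the unit sphere
    t = np.cross(p, [0.0, 0.0, 1.0])       # a tangent vector at p
    print(f(p), t @ grad_f(p))             # both print (approximately) 0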
Well, yes, one just has to study the properties of f to grasp what it describes. Indeed taking the gradient does help, and I'm sure many other things are also possible (if possible, expressing one variable as a function of another, etc.). But what I precisely wanted to know was the significance of the number f(x_P,y_P). Does your answer actually mean there isn't any? I understand, as you said, that f can be about anything, and many functions will describe the same surface (f and e^f-1 and ln(f+1) (under certain assumptions for f)). Obviously, even in the worst cases, f(x_P,y_P) has to represent some sort of distance, even if not well defined, since if f(x_P,y_P) is relatively small, the point P must be relatively near the surface (of course, the nearness varies according to f: 2x+3y-4 will not give the same as 200x+300y-400, but the distance stays the same). Thanks --Xedi 20:56, 20 May 2007 (UTC)
Under certain assumptions, primarily f(P) being small enough, |f(P)|/|∇f(P)| is roughly equal to the distance of P from the hypersurface. I doubt much more than that can be said (of course, you could try to find better approximations for the distance using higher-order derivatives of f, but this quickly becomes unwieldy). -- Meni Rosenfeld (talk) 21:23, 20 May 2007 (UTC)
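A small sketch of this first-order estimate on the unit circle (the function and the point are illustrative choices, not from the thread):

    import numpy as np

    def f(x, y):
        return x**2 + y**2 - 1.0           # zero set: the unit circle

    def grad_f(x, y):
        return np.array([2.0*x, 2.0*y])

    P = (1.05, 0.0)                        # a point near the circle
    approx = abs(f(*P)) / np.linalg.norm(grad_f(*P))
    exact = abs(np.hypot(*P) - 1.0)
    print(approx, exact)                   # 0.0488... vs 0.05: close, since P is near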
1) Yes, I think I understand, that's under the assumption that the second derivatives are 0? So what would be the approximations using higher derivatives?
Thanks. --Xedi 21:45, 20 May 2007 (UTC)
In fact, it is not enough for f(P) to be small. We need P to actually be close to the hypersurface, which is not implied by the former. For example, take f(x,y) = (x^2+y^2-1)e^(-(x^2+y^2)). This gives a circle, but there are points arbitrarily distant from the circle with arbitrarily small f value. -- Meni Rosenfeld (talk) 21:30, 20 May 2007 (UTC)
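Evaluating a damped-circle function of this kind numerically (an illustrative check) shows how tiny f can be at a point far from its zero set:

    import numpy as np

    def f(x, y):
        r2 = x**2 + y**2
        return (r2 - 1.0) * np.exp(-r2)    # zero set: the unit circle

    print(f(10.0, 0.0))    # ~3.7e-42, yet the point is at distance 9 from the circle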
2) But that only results from the fact that it's an approximation, which may go wrong as P gets further away from the surface. No? --Xedi 21:45, 20 May 2007 (UTC)
You can draw graphs of the level sets f(x,y)=c. For instance take the circle f(x,y)=x^2+y^2-1. Then the set f(x,y)=0 is the circle you first thought of, f(x,y)=1 will be a bigger circle, f(x,y)=-0.5 a smaller circle, f(x,y)=-1 a single point and f(x,y)=-2 the empty set. A more interesting example is f(x,y)=x^2-y^2: f(x,y)=0 gives two crossing lines, while f(x,y)=1 and f(x,y)=-1 each give a hyperbola whose two branches do not intersect. Lots of fun can be had by examining other functions, and this sort of analysis is the basis of a lot of Singularity theory.
One way to get a feel for the function is to treat it as a graph: let z=f(x,y) and plot the points (x,y,z). The implicit curve will be the intersection of this surface with the x-y plane. It's worth doing in the 2D case, which might help you understand what's happening in general. The 3D case might prove tricky. --Salix alba (talk) 21:33, 20 May 2007 (UTC)
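A minimal matplotlib sketch of these level sets for the x^2-y^2 example (illustrative code, assuming numpy and matplotlib are available):

    import numpy as np
    import matplotlib.pyplot as plt

    x, y = np.meshgrid(np.linspace(-2, 2, 400), np.linspace(-2, 2, 400))
    f = x**2 - y**2                        # the crossing-lines example

    # f=0 gives the two crossing lines; f=1 and f=-1 give the hyperbolas
    plt.contour(x, y, f, levels=[-1.0, 0.0, 1.0])
    plt.gca().set_aspect("equal")
    plt.show()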
3) Yes, I'll try that, definitely. --Xedi 21:45, 20 May 2007 (UTC)
(after double edit conflict) Meni Rosenfeld: I was going to point out that I have a polynomial counterexampling your claim in 1D. Indeed, mine has f/f' tiny a long way from the hypersurface (= point), as well as f. Algebraist 21:38, 20 May 2007 (UTC)
To Algebraist: Sorry for the edit conflicts :) Anyway, note that I had initially required a small f. A polynomial would not be a counterexample, since even if it did have tiny f/f' at distant points (though I don't see how that is possible), f would still be large, a case I didn't aspire to deal with.
To Xedi: I have taken the liberty to enumerate your new questions.
1) Yes, sort of; if the second derivatives are 0 everywhere (not just at P) then this will be exact, as you have pointed out above. The way to use higher-order derivatives is to construct a power series expansion for f around P, treat it as if it is exactly f, and solve the equation you get by equating it to 0. There will in general be (infinitely) many solutions - the one with the smallest distance would be a decent approximation under the right assumptions.
2) Yes, the approximation gets worse as P gets further away from the hypersurface - I tried to emphasize that we need P to be close for the approximation to work, but we can't deduce that P is close from the mere fact that f(P) is small.
-- Meni Rosenfeld (talk) 22:00, 20 May 2007 (UTC)
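As a concrete illustration of the second-order version of this idea (a sketch under the assumption that one expands f along the downhill gradient direction and keeps the Hessian term; the code is not from the thread):

    import numpy as np

    def second_order_distance(fval, grad, hess):
        """Estimate the distance from P to {f=0}: expand
        f(P - t*u) ~ f - t*|g| + (t^2/2)*u.H.u  with u = g/|g|,
        then take the smallest root t of that quadratic."""
        gn = np.linalg.norm(grad)
        u = grad / gn
        a = 0.5 * u @ hess @ u
        if abs(a) < 1e-12:
            return abs(fval) / gn              # falls back to f/|grad f|
        roots = np.roots([a, -gn, fval])
        real = roots[np.isreal(roots)].real
        return np.abs(real).min() if real.size else abs(fval) / gn

    # circle f = x^2+y^2-1 at P = (1.5, 0): true distance 0.5
    print(second_order_distance(1.25, np.array([3.0, 0.0]), 2.0*np.eye(2)))  # 0.5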
2) Oh yes, all right. It seems quite obvious, then, because we have to divide by something that isn't the same everywhere. So some points where f(x,y) is arbitrarily small might not be near the curve, because at those points the denominator is very small too.
Could you just give an example of what it would be with second derivatives, if it's not too complicated and long to do?
Thanks again. --Xedi 22:11, 20 May 2007 (UTC)
Apologies for wasting the OP's time with this, but one can find a poly f with the following properties (say): the only zeros of f in [-10,10] are near 0, f(5) is very small and positive, and there are points near 5 where f is very small but f' is very large. Proof: take a suitable continuous function, apply the Weierstrass approximation theorem. Algebraist 22:18, 20 May 2007 (UTC)
You're not wasting any time here! Anyway, yes, as long as there is a suitable analytic function then there's a polynomial anyway.
But all this just depends on the fact that the approximation using only the first derivative isn't suitable for all functions. I also thought Meni Rosenfeld's conditions for when the approximation works well were a bit arbitrary, but I'm not really able to give any better ones anyway. --Xedi 22:24, 20 May 2007 (UTC)
Algebraist: Yes, you can find a polynomial with small f at distant points, but (for a fixed polynomial) not arbitrarily small f or arbitrarily distant points. For f a polynomial, it is still true that if f(P) is small enough, the point is close to the surface.
Xedi: Sorry, working out anything with second derivatives would be too much for me to write here. You're welcome to try, but I do believe it would be a waste of your time (much work for no obvious benefit). Don't get me wrong - this will also require the point to be close; it will just give a sharper estimate if the point is indeed close.
I'm not sure what you mean by "as long as there is a suitable analytic function then there's a polynomial" (is there a nonconstant periodic polynomial I'm not aware of, or a nonzero one which converges to 0 at infinity?), or by your last comment. -- Meni Rosenfeld (talk) 23:27, 20 May 2007 (UTC)
Ok, I see, so it's not really possible.
About the polynomial, I just really meant to say that a function such as Algebraist described could well be a polynomial, without even having to resort to Weierstrass's approximation theorem. I wasn't really talking about behaviour at infinity. It wasn't really meant to be anything of importance, just saying that it appears quite logical that such a thing is possible. And yes, Algebraist and I weren't mentioning arbitrarily small values for a fixed polynomial. --Xedi 17:06, 21 May 2007 (UTC)
A surprising amount can be done with interval analysis, especially if you know not only f at a point but some or all of its derivatives. This information can give bounds on the range of values the function can take. Techniques like Lipschitz continuity can be used to give bounds on the possible values within a certain distance.
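For instance, a Lipschitz bound yields a simple exclusion test (a sketch; the function and constant below are illustrative choices):

    def no_zero_within(f_at_p, L, r):
        """If |f(P)| > L*r, where L is a Lipschitz constant for f,
        then f cannot vanish anywhere within distance r of P."""
        return abs(f_at_p) > L * r

    # f = 2x + 3y - 4 has Lipschitz constant sqrt(13) ~ 3.606
    print(no_zero_within(f_at_p=5.0, L=3.606, r=1.0))   # True: no zero that close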
If you're working with polynomials, you can convert these to Bernstein polynomials, which have a nice convexity property: if the coefficients are all positive, then the function will be strictly positive in the given range. I've exploited this to a large extent in a program I wrote for polygonizing implicit surfaces.
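A minimal sketch of that positivity test on [0,1] (using the standard power-to-Bernstein change of basis; the sample polynomial is an arbitrary choice):

    from math import comb

    def bernstein_coeffs(a):
        """Bernstein-basis coefficients on [0,1] of p(x) = sum a[i]*x**i.
        If they are all positive, p is strictly positive on [0,1]."""
        n = len(a) - 1
        return [sum(comb(k, i) / comb(n, i) * a[i] for i in range(k + 1))
                for k in range(n + 1)]

    print(bernstein_coeffs([0.3, 1.0, 1.0]))   # [0.3, 0.8, 2.3] -> p > 0 on [0,1]
    # The test is one-sided: mixed signs are inconclusive, so implementations
    # typically subdivide the interval and recurse.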
These techniques are often better at negative questions - showing where the function does not have zeros - than at saying where it does have zeros.
In 1D, Sturm's theorem can be used to tell you how many distinct real roots (zeros) a polynomial has in an interval. This has been exploited by ray-tracing algorithms. --Salix alba (talk) 23:31, 20 May 2007 (UTC)
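Here is a short root-counting sketch using SymPy's Sturm sequence (the cubic is an arbitrary example):

    from sympy import sturm, symbols

    x = symbols('x')
    seq = sturm(x**3 - 3*x + 1)                # the Sturm sequence (a list of Polys)

    def sign_changes(vals):
        vals = [v for v in vals if v != 0]
        return sum(1 for a, b in zip(vals, vals[1:]) if a * b < 0)

    # roots in (0, 2] = sign changes at 0 minus sign changes at 2
    count = sign_changes([q.eval(0) for q in seq]) - sign_changes([q.eval(2) for q in seq])
    print(count)                               # 2 real roots of x^3 - 3x + 1 in (0, 2]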
Meni Rosenfeld: apologies, I had misunderstood the order of your quantifiers (this is what comes of not communicating in first order predicate calculus at all times). Algebraist 23:54, 20 May 2007 (UTC)
Shall we try a visual metaphor? The real-valued function f(x,y) has a natural interpretation as a height at a point of the xy plane. The curve it implicitly defines is the shoreline at sea level. Physical terrain is, we expect, continuous; mathematical terrain need not be. Unless we place special constraints on the shape of the land, the height at a given point tells us nothing; it certainly need not predict the distance to the shoreline, nor the direction. Even if we are lucky enough to be able to measure the slope of the land (the gradient of f), we may still have difficulty finding the shoreline. This is a very practical challenge for algorithms that are meant to find zeros of functions, whether arbitrary functions or simply polynomials. We have much the same difficulty if we wish to find a mountaintop rather than a shoreline (that is, an optimization problem). --KSmrqT 05:43, 21 May 2007 (UTC)
Nice. This can be considered the other way around: if you are on a mountaintop and know a limit on the maximum slope, you can say that the shoreline cannot be within a certain distance. This can be useful for algorithms, as eliminating part of the set of possible values reduces the search space. --Salix alba (talk) 11:35, 21 May 2007 (UTC)
Yes, well, it now appears obvious that the function can really be anything - even if we assume continuity it still stays quite unmanageable. Obviously, yes, f(x,y) won't really tell us much about the distance to the nearest zero, at best an indication. The same goes for the derivatives.
So this also means there can't be any reasonably simple way to know how far a point is from a zero knowing only f(x,y) and some derivatives. Then, yes, the only way to go would be the other way - for example, getting a formula giving the distance to the curve f(x,y)=0 depending on x and y (minimizing the distance between the point and the curve).
So just for this last thing, how would this optimization problem be solved (analytically), other than doing it point by point? Are there conditions that make this much simpler (without distorting the problem too much), like continuity or differentiability?
Thanks so much for all this insight. --Xedi 17:06, 21 May 2007 (UTC)
Differentiability is a big help. Working with polynomials, that is, an algebraic variety, does make things very much simpler, as d^nf/dx^n will be constant when n equals the degree. In 1D, if you know the first derivative is strictly positive, then a simple sign check on the end points can tell you whether a zero exists or not (see the sketch below). Similarly, if you know the second derivative is strictly positive, then the first derivative can have only one zero, and the function can have at most two zeros. This leads to something along the lines of Descartes' rule of signs.
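The 1D sign check, as a sketch (the cubic is an illustrative choice with strictly positive derivative):

    def has_zero_monotone(f, a, b):
        """For f strictly increasing on [a, b] (first derivative > 0),
        a zero exists in [a, b] iff the endpoint values differ in sign
        (or one of them is zero)."""
        return f(a) * f(b) <= 0

    # f(x) = x^3 + x - 2 has f'(x) = 3x^2 + 1 > 0; root at x = 1
    print(has_zero_monotone(lambda x: x**3 + x - 2, 0.0, 2.0))   # True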
In 2 or more dimensions you can play with critical sets and singular points, that is, the curves where df/dx or df/dy is 0 and the points where both vanish. These divide the domain up into sections where things behave nicely.
Again, if you have polynomials, there is some hefty computer algebra you can rely on. I've seen a technique which uses computer algebra to factor polynomials; this of course cannot always be done over the reals, but you can use other rings where it is possible.
There is a big literature on this problem; particle systems are a nice method. This basically means taking a set of points in your domain and letting them move along the gradient vectors: eventually they will either leave the domain or arrive on the surface. This is a sort of Newton's method in higher dimensions. --Salix alba (talk) 18:14, 21 May 2007 (UTC)
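A toy version of such a particle update (a sketch; the Newton-like step p -= f(p)*∇f/|∇f|^2 is one common choice, and the circle is an illustrative surface):

    import numpy as np

    def project_to_surface(p, f, grad_f, steps=20):
        """Slide a particle along the gradient until f(p) ~ 0."""
        for _ in range(steps):
            g = grad_f(p)
            p = p - f(p) * g / (g @ g)         # Newton-like step
        return p

    f = lambda p: p @ p - 1.0                  # unit circle
    grad_f = lambda p: 2.0 * p
    print(project_to_surface(np.array([3.0, 4.0]), f, grad_f))   # ~(0.6, 0.8)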
Sorry, but I don't see how this is really relevant here. I'm not actually looking for the zeroes but rather for a certain distance to the curve. In effect, the methods you give are an aid to finding the zeroes of the function (following the gradient, like, as you said, in Newton's method). Still, thanks for the explanation; it's definitely very important to be able to find the zeroes of functions. Maybe I'm just too used to functions of the type y=f(x) or z=f(x,y).
What I was more curious about was what it is possible to do with f(x,y) (for example, finding the distance to a zero). So, yes, I realize not much can be done, because the function can behave in too many different ways for us to draw conclusions from the value of f(x,y). And now I'm just asking: as f(x,y) won't, in general, give any kind of distance, even though Meni Rosenfeld's approximation works well in many cases (and does give an exact solution when the curve is a line or plane), is it possible to assign a "distance" value to each point easily? For example (extremely simple case), with xy=0, the distance would be min(|x|,|y|). Because all I'm able to do is calculate the distance for each point (one at a time) as an optimization problem (finding the minimum length between the point and the curve f(x,y)=0) (and only in simple cases, too).
I suppose the question is: as f(x,y) can't really define the distance from the curve, what does, if anything? Thank you for your time. --Xedi 19:47, 21 May 2007 (UTC)
Given a curve defined implicitly by f(x,y) = 0, the distance of a point P = (x_P,y_P) to the curve is the infimum of ||(x,y) − (x_P,y_P)|| for (x,y) ranging over all points satisfying f(x,y) = 0, and similarly for surfaces. Determining this in general is just as hard as general optimization; for specific functions f you might be lucky and find a simple answer, but not in general.  --LambiamTalk 21:34, 21 May 2007 (UTC)
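Numerically, that infimum can be attacked as a constrained minimization; a sketch with SciPy (the circle and the point P are illustrative choices):

    import numpy as np
    from scipy.optimize import minimize

    P = np.array([2.0, 1.0])
    f = lambda v: v[0]**2 + v[1]**2 - 1.0      # constraint: stay on the unit circle

    res = minimize(lambda v: (v - P) @ (v - P), x0=[1.0, 0.0],
                   constraints=[{"type": "eq", "fun": f}])
    print(res.x, np.linalg.norm(res.x - P))    # closest point; distance ~ sqrt(5)-1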
Yes, that's what I meant: there's no proper way of finding a general expression for the distance in most cases, as we would have to find the minimum over an infinity of points. Thanks for the precise answer. --Xedi 21:51, 21 May 2007 (UTC)
For a point on the curve to be a minimum of the distance to P, the normal at that point must be in the direction of P. You can produce a function g(x,y) which will be zero when the gradient is in the direction of P; it then becomes a problem of solving the pair of equations f(x,y)=0, g(x,y)=0, which may be possible to do algebraically. Think of a circle centred at P which is tangent to the curve. --Salix alba (talk) 22:52, 21 May 2007 (UTC)
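A sketch of that tangency condition with SymPy (here g is taken as df/dx*(y-y_P) - df/dy*(x-x_P), which vanishes exactly when ∇f is parallel to the direction towards P; the circle and the point are illustrative):

    from sympy import symbols, diff, solve, sqrt

    x, y = symbols('x y', real=True)
    xP, yP = 2, 1                              # the external point P
    f = x**2 + y**2 - 1                        # unit circle

    # g = 0 <=> grad f is parallel to (x - xP, y - yP)
    g = diff(f, x)*(y - yP) - diff(f, y)*(x - xP)

    sols = solve([f, g], [x, y])               # points whose normal passes through P
    dists = [sqrt((sx - xP)**2 + (sy - yP)**2) for sx, sy in sols]
    print([d.evalf() for d in dists])          # ~1.236 and ~3.236; the min is the distance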