Wikipedia:Reference desk/Archives/Mathematics/2009 October 5

Mathematics desk
Welcome to the Wikipedia Mathematics Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


October 5

Odds ratio vs. relative risk

To begin, all who have contributed have been a great help.

I asked a few days ago about the difference between odds and probability -- this is sort of the same idea, but with an example. One could either look here or use another example, but my question is: what is the difference in reporting relative risk (probability) vs. odds? In the example I linked to above, the relative risk is 4.5 times as likely while the odds ratio is 36 times the odds. What is that supposed to mean to someone? I mean, to most people (I assume, because it seems this way to me), if you asked me to choose between the two, I'd say the latter is WAY more likely, just because I think most people intuitively disregard the words and focus on the number -- 36 being much bigger than 4.5 -- and people wouldn't even realize that they are equivalent. Is it like metric vs. non-metric... because that's just units (I don't think they are similar). I'm just confused over this entire thing. DRosenbach (Talk | Contribs) 02:12, 5 October 2009 (UTC)[reply]

Can you copy here your original question?--Gilisa (talk) 07:14, 5 October 2009 (UTC)[reply]
The statement "4.5 times as likely" is largely meaningless on its own. Compare 1% to 4.5% and 22% to 99%. Taemyr (talk) 07:18, 5 October 2009 (UTC)[reply]

The odds ratio takes the sample size into account; the relative risk doesn't. In your example, if the sample size were 200 (instead of 100) then the odds ratio would have been 7.364 (3 d.p.), so the odds ratio changes as the sample size changes. But the relative risk would still be 90/20 = 4.5. Say your sample size is s, the number of men who drank wine is m_y and the number who didn't is m_n, where m_y + m_n = s, and similarly w_y and w_n for the women. We always have the relative risk given by m_y / w_y, but the odds ratio would be m_y·w_n / (m_n·w_y). The relative risk is not too useful then. This is highlighted by Taemyr's post. ~~ Dr Dec (Talk) ~~ 11:20, 5 October 2009 (UTC)[reply]

When the rare disease assumption does not hold, the odds ratio can overestimate the relative risk.
Thanks guys -- but what about this? I think it's basically my point -- I'm sure mathematics people can figure out many ways to represent the relationship between two results, but if the meaning of the different methods is lost not only on the average person but even on a large proportion of well-educated people, isn't it sort of like generating a logical fallacy? DRosenbach (Talk | Contribs) 12:16, 5 October 2009 (UTC)[reply]
The odds ratio is larger than the relative risk if, and only if, m_n < w_n. The difference is zero if, and only if, m_n = w_n. The relative risk is larger than the odds ratio if, and only if, m_n > w_n. I'm sure that the use of the measure depends on the input values. Could you leave a link to the quote that "When the rare disease assumption does not hold, the odds ratio can overestimate the relative risk.", so that we might read it in context? ~~ Dr Dec (Talk) ~~ 17:10, 5 October 2009 (UTC)[reply]
Sorry 'bout that -- I made the quote into a link. I'm appreciative of your effort, but none of what you just said makes any sense to me -- I'm a periodontist studying statistical research papers for validity of methods, and balk at the sight of letters intermingling with numbers as they do in your explanation.
Essentially, after getting a handle on the distinction between the procedure for determining odds vs. that for determining probability, I am having difficulty understanding what I'm supposed to do with those numbers. What in the world does "X times the odds" mean? I'm grappling to wrap my brain around such a concept -- I find probability completely intuitive, yet odds completely out there without anything with which to figure out what it means. DRosenbach (Talk | Contribs) 19:24, 5 October 2009 (UTC)[reply]
Numbers? I didn't use any other numbers than those that you yourself used. My 7.364 (3 d.p.) was a reformulation of your 36. So I'm sorry that you "balk at the sight of letters intermingling with numbers", but maybe you should have tested your gag reflex with your original post? ~~Dr Dec (Talk)~~ 19:42, 5 October 2009 (UTC)[reply]
Moreover, your link is littered with numbers and letters living in harmony. Quite why you "balk" is beyond me. ~~Dr Dec (Talk)~~ 19:44, 5 October 2009 (UTC)[reply]

Furthermore, if you are a "periodontist studying statistical research papers for validity of methods" then why did you make your original post? I resolved it with some high-school algebra. ~~Dr Dec (Talk)~~ 19:49, 5 October 2009 (UTC)[reply]

The math is merely a tool to reach some figure -- I don't think I understand what 36 units of "times the odds" means versus 4.5 units of "times the chances". Just today, though, someone told me that odds refers to taking the result (such as being drunk) and figuring out what the odds are that that person fits into the more likely category (male vs. female) -- would that be accurate? DRosenbach (Talk | Contribs) 21:22, 5 October 2009 (UTC)[reply]
Having odds of m-to-n against means that in a sample of m + n people, the number with a certain property is n and the number without it is m. The key point is that the probability n/(m + n) depends on the total sample size, whereas the odds n : m compare the two counts directly; in that sense odds are a sample-size-invariant measure. ~~Dr Dec (Talk)~~ 22:28, 5 October 2009 (UTC)[reply]
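For anyone who wants to see where the two figures come from, here is a minimal Python sketch, assuming the 2×2 table implied by the example discussed above (90 of 100 men and 20 of 100 women with the outcome, which is what the 90/20 = 4.5 quoted earlier refers to); the 4.5 and the 36 both drop straight out of the same four counts:

 # Hypothetical 2x2 table matching the figures quoted above:
 # 90 of 100 men and 20 of 100 women have the outcome.
 men_yes, men_no = 90, 10
 women_yes, women_no = 20, 80

 risk_men = men_yes / (men_yes + men_no)          # 0.9
 risk_women = women_yes / (women_yes + women_no)  # 0.2
 relative_risk = risk_men / risk_women            # 4.5

 odds_men = men_yes / men_no                      # 9.0
 odds_women = women_yes / women_no                # 0.25
 odds_ratio = odds_men / odds_women               # 36.0

 print(relative_risk, odds_ratio)                 # 4.5 36.0

Both numbers summarise the same table; they simply compare different quantities (proportions versus odds), which is the point the replies above are making.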

dx

What I wanted to ask is: when we integrate a function with respect to x, say, we put dx at the end of the integral, so that we would write ʃ f(x) dx. Now I understand this is to show us which variable we are integrating with respect to, but when we do the U substitution it also seems as if the dx is treated as being some type of factor. For example:

ʃ (3x²/x³) dx: let U = x³, du = 3x² dx, so we turn it into ʃ (1/U) du, which equals ln|U| + C, which equals ln|x³| + C


How does this work? I realise that the derivative can be treated like a fraction, and can cancel out in that way, especially when applying the Chain Rule, but why then is the dx treated as if it is some kind of factor, as if we are saying that the integral of f(x) dx is the same as saying the integral of f(x) times dx? Thanks, The Russian. 202.36.179.66 (talk) 02:59, 5 October 2009 (UTC)[reply]

Think of dx as an infinitely small increment of x. An integral is a sum of infinitely many infinitely small quantities. And if, for example, f(x) is in meters per second and dx is in seconds, then f(x) dx is in meters (see dimensional analysis). With derivatives, dy and dx are corresponding infinitely small increments, so that if dy/dx = 3 then y is changing 3 times as fast as x is changing. Michael Hardy (talk) 03:24, 5 October 2009 (UTC)[reply]
...and please don't use capital U and lower-case u to represent the SAME variable. That isn't done. Michael Hardy (talk) 03:25, 5 October 2009 (UTC)[reply]
While that is a good way to think about it, it isn't how it is usually made rigorous. Technically speaking, the "dx" is purely abstract notation and has no meaning in itself. The notation is a kind of mnemonic - it takes the place of the Δx ("change in x") in the sum that you get by chopping the area under the curve into bits, approximating each by a trapezium and adding together their areas (Δx is the width of each bit), and that you turn into an integral by taking the limit: you let Δx tend to zero, which means the number of bits tends to infinity, each with zero width, and the limit of that sum is the answer you want - we call that integrating. --Tango (talk) 04:42, 5 October 2009 (UTC)[reply]
When one first learns any particular technique in calculus, one should ask himself/herself about its purpose. The purpose of the technique is perhaps its best intuitive representation. In the case of integration by substitution, the purpose is quite simple - convert the integrand to a much simpler expression, whose integral is computable by standard techniques. This is of course imprecise, but reveals the general idea behind the technique. However, may I ask whether you have ever wondered why this is so, or how this is accomplished?
Suppose we have an arbitrary function f defined on some interval [a, b] (here we consider definite integration). Geometrically, its integral is precisely "the area bounded between the graph of f and the x-axis." Suppose we were to shift the graph of f a few units to the right - would we still obtain the same area over the original interval? A little thought will reveal that the answer is no. However, let us consider a concrete example of f, and let us consider its integral over [0, 1]. Before doing so, let us consider the notion of "the length of an interval."
The length of [0, 1] is 1 unit, in accord with our natural perception of length. Let us consider the image of [0, 1] under the function f(x) = x²; does this image have a different length? Well, f(0) = 0, f(1) = 1, f is strictly increasing, and thus the image is simply [0, 1] and thus has length 1. Let us consider a different interval with the same function - [2, 4]. What is the image of this interval under f(x) = x²? Well, f(2) = 4, f(4) = 16, f is strictly increasing and thus the image is simply [4, 16] and thus has length 12. How may we write this in terms of the length of [2, 4] (of length 2)? Well 2 × 6 = 12, so we must find a general method which allows us to obtain 6. The method is actually quite simple - firstly, compute the derivative of f - this is the function g(x) = 2x. The midpoint of [2, 4] is 3, and g(3) = 6, and thus we have obtained 6 by rather simple means (the mean of 2 and 4 is 3 - note the pun!).
In general, given an arbitrary function h (assume that it is not too complex - say, that it is differentiable, or that it is a polynomial) and an interval [a, b], how can we determine the factor by which h "enlarges" (or "shrinks") [a, b]? I will leave this problem to you - note that intuitively, this should be related to "how fast h is changing" (thus the derivative of h), as we observed.
In your example, if we are integrating a function over a particular interval, the function may not always be in a simple form on that particular interval. However, on another interval, it may well be (consider the integral of f(x) = x + 1 over [0, 1], and note that this is the same as the integral of g(x) = x over [1, 2], by "transforming the interval" by f). Thus, perhaps, by defining a transformation from the initial interval to another one, the integral may be simpler to compute. However, the integral is intimately tied to the notion of an "area", and thus we must know the length of the new interval. This may be done using the notion of a derivative, as I have already described. When one refers to "u-substitution", one writes:
u = g(x)
du = g'(x)dx
In an intuitive sense, the derivative of g multiplied by the length of the old interval equals the length of the new interval, du (when one considers "small increments", this formula works almost exactly, in that the error becomes minuscule). The function u is the transformation onto the new interval. Hope this helps (and remember to answer my question). --PST 08:01, 5 October 2009 (UTC)[reply]
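To see concretely that nothing is lost when the dx is traded for du, here is a small numerical sketch of the OP's example done as a definite integral (the limits 1 and 2 are chosen purely for illustration): with u = x³, the integral of 3x²/x³ dx from 1 to 2 becomes the integral of (1/u) du from 1 to 8, and both equal ln 8.

 import math

 def integrate(g, a, b, n=100000):
     # simple midpoint-rule approximation of the definite integral of g over [a, b]
     h = (b - a) / n
     return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

 before = integrate(lambda x: 3*x**2 / x**3, 1.0, 2.0)   # original variable x
 after  = integrate(lambda u: 1.0/u, 1.0, 8.0)           # after u = x**3, du = 3x**2 dx
 print(before, after, math.log(8))                       # all three are about 2.0794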

In answer to your question, yes, I have thought about why and how over many things in mathematics, which is why I asked this about dx in the first place, because I have learned never to just accept things or take them at face value - this goes for all subjects, but is especially true for maths. And it is certain that knowing why or how particular things work, such as the fact that derivatives are in fact functions anyway, which is why we can treat them as such when applying the Chain Rule, will assist anyone in their further exploration of the Atlantis that is Mathematical Truth. If we just accept things without thinking why or how, we may not get very far. The explanations given here do now make a lot of sense, and thank you, everyone who contributed. To me it doesn't make sense to study a discipline, then maybe try to teach it to others, without really considering what it is all about, or what it means. I was initially confused when I first encountered U substitution, but when one begins to think about it, then it starts to make sense. I had another question relevant to this which I will state in another place below here. Thanks. The Russian. 202.36.179.66 (talk) 02:01, 6 October 2009 (UTC)[reply]

The Integral symbol is actually a stylized S for sum. It started off like Σ f(x) × Δx, the limit of the sum of lots of tiny rectangular bits as the width Δx of the bits tended to zero. Dmcq (talk) 12:19, 5 October 2009 (UTC)[reply]
This notation, though, inspired the notation used with differential forms, and not vice versa; you can look at f dx as a 1-form, in which case it really is f times dx. Additionally, u = x^3 has du = (3x^2) dx, where d is exterior differentiation. —Preceding unsigned comment added by 71.61.48.205 (talk) 13:28, 5 October 2009 (UTC)[reply]
I presumed that the notion of a differential form was too complex (and general) for the OP to comprehend. Some of the essential ideas which underlie these concepts are there in my post, however. --PST 13:33, 5 October 2009 (UTC)[reply]
I don't like to presume anything. Also, reading things I didn't understand and trying to piece them together was how I learned half the things I did; thus, when I could actually learn about advanced topics, they weren't a mystery to me. Finally, I didn't say anything was wrong with your answer; it is quite good. 71.61.48.205 (talk) 13:39, 5 October 2009 (UTC)[reply]
Actually, I take back what I said; your answer is not good at all. Consider g(x) = x^3: g takes [1, 3] to [1, 27], length 2 to length 26. However, the midpoint method gives 2 * g'(2) = 2 * 12 = 24, not 26; g'(2) would have to be 13 for your idea to work here. Of course, you might have meant that we can apply the mean value theorem, though your example using the mean of the end points doesn't convey this at all, especially to someone you assumed knew little of mathematics. The above notion of how the size of the intervals we are using in our partition goes to zero makes much more sense (this is the one I assumed was yours; I should have been more cautious).
To the OP: this same notion of treating dy/dx like a fraction comes up in implicit differentiation as well. In the case x^2 + y^2 = r^2, we use 2x dx + 2y dy = 0 and then solve for dy/dx, obtaining -x/y. Thus, supposing that du/dx = 3x^2, we can view this as du = (3x^2) dx; hence, if we are integrating (3x^2/x^3) dx, then using this identity gives (1/u) du. I'm aware that this isn't the most helpful answer, but you could consider d as a thing that takes a function f(y) to f'(y) dy, with dy left undefined. As another example of this, y = x^2 -> dy = 2x dx -> dy/dx = 2x. Finally, we observe that 3x^2 = d(x^3)/dx, thus (3x^2) dx = d(x^3)/dx * dx = d(x^3); using definite integrals, you have to change the interval being integrated over, in which case dx is no longer undefined and takes on a geometric meaning, and thus we view d(x^3) as being a different increment from dx.
Two things. First, I'm not big on the whole implicit differentiation idea as above, for a number of reasons. Second, what I said above is not precise, nor the best way of viewing things, just a rough idea of how derivatives can have an algebraic nature; leafing through some more advanced things might be better. 71.61.48.205 (talk) 14:39, 5 October 2009 (UTC)[reply]
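As a small illustration of the x^2 + y^2 = r^2 example above, here is a sketch using the sympy library (the variable names are just for illustration): differentiating the relation while treating y as a function of x recovers dy/dx = -x/y, matching the manipulation with 2x dx + 2y dy = 0.

 import sympy as sp

 x, r = sp.symbols('x r')
 y = sp.Function('y')(x)                  # treat y as an (unknown) function of x

 relation = x**2 + y**2 - r**2            # the circle x^2 + y^2 = r^2
 differentiated = sp.diff(relation, x)    # gives 2*x + 2*y(x)*y'(x)
 dydx = sp.solve(differentiated, sp.Derivative(y, x))[0]
 print(dydx)                              # -x/y(x)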

In Calculus I, the "dx" is best thought of as a placeholder only, and expressions such as

if u = x² then du = 2x dx

should be taken as purely formal, nothing more than convenient notation.

In more advanced mathematics, there are at least two ways one can interpret the "dx", but these new interpretations are not necessary to understand the theory of calculus. One of the ways is to think of "dx" as a differential form, and the second is to think of "dx" as representing "dμ" where μ is a certain measure on the real line. Some of this is sketched in the lede section of this nice article on integration of differential forms by Terence Tao. — Carl (CBM · talk) 02:20, 6 October 2009 (UTC)[reply]

I disagree. The integral ∫ ƒ(x) dx should be regarded as the sum of infinitely many infinitely small numbers, since dx is infinitely small, and its units are those of ƒ(x) times those of dx.
Of course this is not logically rigorous. That means logical rigor isn't everything. Michael Hardy (talk) 03:37, 6 October 2009 (UTC)[reply]
I have no idea which part you claim to disagree with. If it's the first sentence, I don't want to get into a lengthy discussion about how the Riemann integral should be interpreted. Chacun à son goût (to each his own). My main point was the second paragraph. — Carl (CBM · talk) 10:34, 6 October 2009 (UTC)[reply]
You can do calculus in terms of infinitesimals (even rigorously), but it isn't the normal way. See non-standard calculus. --Tango (talk) 14:37, 6 October 2009 (UTC)[reply]
What I disagree with is that it should be regarded only as a placeholder. Michael Hardy (talk) 13:31, 8 October 2009 (UTC)[reply]
You might want to consider this definition of dx. It is easy enough for high school and sufficiently rigorous: assume f is differentiable at a point x, define dx to be an independent variable that can take any real value, and define dy by the formula dy = f'(x) dx ... (1). So dy really represents a value achieved by a function. If dx is non-zero then we can divide both sides of (1) by dx to obtain dy/dx = f'(x) ... (2). Thus we have reached our goal of defining dy and dx so that their ratio is f'(x). Formula (1) is said to express (2) in differential form, and dx, dy are called differentials. This is from Calculus by Howard Anton. --Shahab (talk) 16:09, 6 October 2009 (UTC)[reply]
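For what it's worth, here is a tiny numerical illustration of the definition quoted above, with dy = f'(x) dx acting as a linear approximation to the actual change in f; the particular function and numbers are made up for illustration.

 # f(x) = x**2 near x = 3, with an increment dx = 0.01 (both chosen arbitrarily)
 f  = lambda t: t**2
 fp = lambda t: 2*t            # f'(x)

 x0, dx = 3.0, 0.01
 dy = fp(x0) * dx              # the differential:   0.06
 actual = f(x0 + dx) - f(x0)   # the actual change:  0.0601
 print(dy, actual)             # dy approximates the change, with error of order dx**2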
Shahab, that "definition" is indeed in many calculus texts, but it makes no sense at all in the context of integrals, and I don't think it should be used in differential calculus either. Michael Hardy (talk) 13:33, 8 October 2009 (UTC)[reply]

Constant Morphisms

In category theory a constant morphism is defined as a morphism c : X -> Y such that, given any object Z and any arrows g, h : Z -> X, we have cg = ch. That said, I was wondering if anyone was aware of any theorems/theories that make use of this notion; for some reason I find them interesting, and all I seem to find about them is their definition. If anyone could point me in the direction of any work done using this concept, I would be extremely grateful. —Preceding unsigned comment added by 71.61.48.205 (talk) 13:02, 5 October 2009 (UTC)[reply]
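For readers meeting the definition for the first time, here is a small finite-set sketch (the particular sets and names are made up) that checks the condition cg = ch directly; in the category of sets a morphism that is constant in this categorical sense is just a constant function, and testing a single non-empty Z already suffices there.

 from itertools import product

 def compose(f, g):
     # (f ∘ g)(z) = f(g(z)); finite functions represented as dicts
     return {z: f[g[z]] for z in g}

 def all_maps(dom, cod):
     # every function dom -> cod
     return [dict(zip(dom, values)) for values in product(cod, repeat=len(dom))]

 def is_constant_morphism(c, X, Z):
     # c : X -> Y is constant iff c∘g = c∘h for all g, h : Z -> X
     maps = all_maps(Z, X)
     return all(compose(c, g) == compose(c, h) for g in maps for h in maps)

 X, Y, Z = [0, 1, 2], ['a', 'b'], [0, 1]    # made-up finite sets
 c1 = {0: 'a', 1: 'a', 2: 'a'}              # constant as a function
 c2 = {0: 'a', 1: 'b', 2: 'a'}              # not constant
 print(is_constant_morphism(c1, X, Z), is_constant_morphism(c2, X, Z))   # True False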

You have not explicitly stated your mathematical knowledge and thus I should not state (or assume) it either. One example of a mathematical discipline which is closely related to category theory is homological algebra. However, there are many other mathematical disciplines which make use of category theory (which I shall leave subsequent users to explain). With regards to the notion of a constant morphism, I should say that it is a mere generalization which plays the same role in category theory as does the constant function in "traditional mathematics". --PST 13:28, 5 October 2009 (UTC)[reply]
Thank you for your response; I was mainly interested in the use of constant morphisms, though, rather than category theory in general. As for mathematical knowledge, most of my interests are in homotopy theory and logic. I'm curious for two reasons: first, all maps from a terminal object into X are constant; second, I've found a couple of neat ways to use coconstants and related morphisms to define algebraic-like objects in a category. Thus, I wanted to see if anybody else has done anything similar, and if it led anywhere interesting. [Also, I imagine there are some neat relations between constants into objects in a diagram and constants into the limit of that diagram, etc.] —Preceding unsigned comment added by 71.61.48.205 (talk) 13:35, 5 October 2009 (UTC)[reply]
Why don't you publish it here? If you sign it then it's yours.--Gilisa (talk) 15:01, 5 October 2009 (UTC)[reply]
I'm confused as to what you mean by "publish it here". I haven't really written an article or anything of that nature, and I don't believe this is the place for such things anyway. I'm here because I'm unfamiliar with anyone else making use of constants for any real purpose, and would like help determining whether someone else has; not so much so that I'm not duplicating ideas, but because if someone has, I'm sure they've done much more than I have and I would like to see what. Thank you for the reply though. Also, signing something here doesn't really make it "yours", not that you really own mathematical results anyway. 71.61.48.205 (talk) 16:14, 5 October 2009 (UTC)[reply]
Well, guess you are right. Sorry for my silly suggestion :)--Gilisa (talk) 16:26, 5 October 2009 (UTC)[reply]
Please look at these (you might have seen them already): [1][2] and this [3].
P.S. I don't know enough about this matter, but it seems that if your category doesn't have enough morphisms (e.g., in categories where all of them are reversible) then the definition you provide might not be too successful, because when you rely only on the arrows you lose a lot of information.--Gilisa (talk) 16:54, 6 October 2009 (UTC)[reply]
This is the OP here. I'm not sure exactly what you mean; category theory is all about relying on the arrows. For example, the whole idea of "elements" of a set can be looked at, in the category of sets, as identical to the idea of maps from a given terminal object into a given object; all of these maps are constants. Another interesting question might be to consider, given an object X, how the constants in the limit of a family of monics, all with codomain X, relate to the constants in each monic's domain. Also interesting would be to consider whether there are constant mappings X -> Y that can't be "gotten" via a mapping from a terminal object into Y. I'd also be curious about non-artificial examples of constant mappings X -> Y in concrete categories where the actual function isn't constant in the usual sense, but only in the categorical sense. Anyway, I'm rambling; thank you for the links, and for all of your responses; if anyone knows any more, I'd love to hear it :) 66.202.66.78 (talk) 09:15, 8 October 2009 (UTC)[reply]


Well, indeed it gives a better definition for this term, but it depends on the category. If you "throw out" part of the morphisms and work in a smaller category (for instance, working not with all functions but only with sequential/regular ones) then the number of morphisms you have becomes significantly smaller, and then this definition will no longer represent reality... that's what I meant.--Gilisa (talk) 16:35, 8 October 2009 (UTC)[reply]

Proving the existence of a function

I am trying to prove that there exists a function J(x) such that J(J(x))=exp(x). I am also trying to prove that for the same function, J(0)=1/2. I have some reasons to believe that both of these can be proven, but I have no idea how to start a proof of either. If someone could lend some insight that would be great. Thank you Jkasd 20:40, 5 October 2009 (UTC)[reply]

See the stub Functional square root. Bo Jacoby (talk) 20:51, 5 October 2009 (UTC).[reply]

Jkasd, your wording is not very clear: do you mean that you have to find a function J such that J(J(x)) = exp(x) and J(0) = 1/2, or that you have to show that every function satisfying J(J(x)) = exp(x) has J(0) = 1/2?

As I understand it, I believe that you can't prove it. You want a function that maps 1 to 1 and also maps 1 to 2.7. That's not possible.--Gilisa (talk) 21:53, 5 October 2009 (UTC)[reply]
I'm not sure what part of Jkasd's wording you thought was vague; it was pretty clear to me that he/she was trying to prove that any solution of J(J(x)) = exp(x) must have J(0) = 1/2.
How are you concluding that J(1)=1 and J(1)=2.7 both have to be satisfied? --COVIZAPIBETEFOKY (talk) 22:07, 5 October 2009 (UTC)[reply]

Jkasd, it appears to me that there exist many solutions to your functional equation. If we let x ≥ 0, we have:

J(J(-x)) = exp(-x)
J(J(J(J(-x)))) = exp(exp(-x))
...
J^(2n)(-x) = exp^(n)(-x)

which defines a chain of values of the (2n)th power of J on x. It cannot be pushed any further backward, to a solution y of -x=J(J(y)), since that would imply -x=exp(y), and exp does not take on negative values. There do not appear to be any restrictions on J(-x), as far as I can see. So we can make any solution by pairing up values of x≥0 so that every x is in a pair and no x appears twice (let's call each pair x0 and x1), and defining:

J(-x0) = -x1
J(-x1) = exp(-x0)
J(exp(-x0)) = exp(-x1)
J(exp(-x1)) = exp(exp(-x0))
etc...

Your suggestion of J(0) = 1/2 is effectively pairing x0=-ln(1/2)=ln(2) to x1=0. This is arbitrary; there are other solutions that work perfectly well without satisfying J(0) = 1/2. --COVIZAPIBETEFOKY (talk) 22:44, 5 October 2009 (UTC)[reply]

Of course, I didn't really prove that this works, but it's not too difficult to fill in the missing steps. What needs to be shown is that, in the above process, every real number r is assigned a value J(r) exactly once, so there's no conflict in mapping.
For an example set of pairs x0 and x1, let every x0 be between 0 (inclusive) and 1 (exclusive), and, for each of those, define x1 = 1/x0, unless x0=0, in which case x1=1. --COVIZAPIBETEFOKY (talk) 22:57, 5 October 2009 (UTC)[reply]
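If it helps to see the construction above in action, here is a small sketch using the example pairing just given (x1 = 1/x0, or x1 = 1 when x0 = 0): J is defined by moving one step along each chain -x0, -x1, exp(-x0), exp(-x1), ..., so J(J(x)) = exp(x) holds on every chain by construction, while J(0) comes out as -1 rather than 1/2 for the chain starting at x0 = 0.

 import math

 def chain(x0, x1, length=8):
     # the orbit -x0, -x1, exp(-x0), exp(-x1), exp(exp(-x0)), ... on which
     # J is defined by "move one step along the chain"
     a = [-x0, -x1]
     while len(a) < length:
         a.append(math.exp(a[-2]))
     return a

 for x0 in (0.0, 0.25, 0.9):
     x1 = 1.0 if x0 == 0 else 1.0 / x0        # the example pairing from above
     a = chain(x0, x1)
     # J(a[k]) = a[k+1], so J(J(a[k])) = a[k+2], which should equal exp(a[k])
     ok = all(abs(a[k + 2] - math.exp(a[k])) < 1e-9 for k in range(len(a) - 2))
     print(x0, ok, "J(0) =", a[a.index(0.0) + 1] if 0.0 in a else None)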
There was another thread on exactly this subject recently: Wikipedia:Reference desk/Archives/Mathematics/2009 August 9#Functional square root. -- BenRG (talk) 23:28, 5 October 2009 (UTC)[reply]
Thanks for the responses everyone. I didn't realize that this concept had a name, so knowing that helped me research it more fully. So it seems that there is actually a whole class of functions with this property, but I still think that J(0) = 1/2 might be a "natural" value for it, mainly because its graph looks nicer than for most other values of J(0). My math professor, who can speak German, is going to go over Hellmuth Kneser's paper about it with me when he has time. Jkasd 03:12, 6 October 2009 (UTC)[reply]


Here it is [4] (for those who don't have it). But that's not a solution you can expect on a forum-like page, at least not so immediately. BTW, Kneser was dealing with an analytic solution to the problem; in that case the solution is unique (i.e., you can't choose the value at 0).--Gilisa (talk) 06:59, 6 October 2009 (UTC)[reply]

We can try to find an analytic solution if uniqueness is not demanded (which, I now see, is the case here). You can start the building from this direction:

f(x) =

 -1/x - 1/ln(½)                  for x in ( -∞ , ln(½) )

 exp{ -1 / (x - 1/ln(½)) }       for x in [ ln(½) , 0 )

 (continue from here)            for x in [ 0 , 1 )

Or in other words:

we start with the definition of an injective map, as simple as we can:

 H0(x) : (-∞, α] → (α, 0]

H0 is pretty arbitrary, and it's easy to choose it in the form H0(x) = a/x + b, where a and b are determined by the required domain and range of H0.

We continue with building H1(x) : (α, 0] → (0, e^α].

H1 is defined by the given functional equation as H1[H0(x)] = e^x.

From the condition f(0) = H1(0) = 1/2 you can extract your α.

Then continue with the building in the same way.


--Gilisa (talk) 08:57, 6 October 2009 (UTC)[reply]
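Here is one concrete (made-up) instance of this piecewise construction, with a and b fixed by asking H0(x) = a/x + b to map (-∞, α] onto (α, 0], which gives a = -α² and b = α; the condition H1(0) = 1/2 then forces α = ln(1/2), and J(J(x)) = exp(x) already holds on (-∞, α]. This is only a sketch of the first two pieces, not the whole solution.

 import math

 alpha = math.log(0.5)             # fixed by the condition H1(0) = 1/2

 def H0(x):
     # an injective map (-inf, alpha] -> (alpha, 0]; here a = -alpha**2, b = alpha
     return -alpha**2 / x + alpha

 def H0_inv(y):
     return alpha**2 / (alpha - y)

 def H1(y):
     # forced by the functional equation H1(H0(x)) = exp(x)
     return math.exp(H0_inv(y))

 def J(x):
     if x <= alpha:
         return H0(x)
     if x <= 0:
         return H1(x)
     raise NotImplementedError("build the next pieces the same way for x > 0")

 print(J(0))                                  # 0.5, as required
 for x in (-10.0, -2.0, alpha):
     print(x, J(J(x)), math.exp(x))           # J(J(x)) = exp(x) on (-inf, alpha]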

Fourier Series Representation

What is the Fourier series representation for this signal?

Sorry, I could not figure out how to insert a picture into the document so here is a link: Signal

Also, if the power of each component in the Fourier representation of the signal is given by the square of the component's amplitude, what is the medium bandwidth required to pass at least 80% of the power of this signal?

Partial sum for 1 ≤ n ≤ 5
Partial sum for 1 ≤ n ≤ 10
Partial sum for 1 ≤ n ≤ 100

129.186.55.233 (talk) 22:39, 5 October 2009 (UTC)[reply]

The Fourier series article supplies the formulas for finding the Fourier coefficients of a periodic function. Have you tried using those? If so, could you be more specific about what problems you're having? Rckrone (talk) 03:08, 6 October 2009 (UTC)[reply]
This page [5] should at least get you started if the Fourier series article isn't enough.--RDBury (talk) 12:26, 6 October 2009 (UTC)[reply]
You've got an even function, that is to say ƒ(x) = ƒ(−x) for all x in the domain of definition. In such a case the sine terms are redundant, since the sine function is an odd function. We only need to consider the cosine terms. Well, in the case of an even function ƒ we have

 ƒ(x) = a_0/2 + Σ_{n≥1} a_n cos(nπx/L),

where L is the length of periodicity, in this case L = 2. The coefficients are given by

 a_n = (2/L) ∫_0^L ƒ(x) cos(nπx/L) dx,

where ƒ(x) = x for all 0 ≤ x ≤ 1 and ƒ(x) = 2 − x for all 1 ≤ x ≤ 2. It follows that

 a_n = ∫_0^1 x cos(nπx/2) dx + ∫_1^2 (2 − x) cos(nπx/2) dx = (1 + (−1)^n) [ (2/(nπ)) sin(nπ/2) + (4/(n²π²)) (cos(nπ/2) − 1) ].

We get the expression for a_0 by deleting the cosine terms in the above integrands, giving a_0 = 1. Finally we see that

 ƒ(x) = 1/2 + Σ_{n≥1} a_n cos(nπx/2).

I've added some plots of the partial sums. I hope this helps. ~~ Dr Dec (Talk) ~~ 17:55, 6 October 2009 (UTC)[reply]
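For anyone who wants to check the coefficients above numerically, here is a small sketch, assuming the signal really is the triangle wave described in the reply above (ƒ(x) = x on [0, 1] and 2 − x on [1, 2], repeated with period 2); the midpoint-rule integral and the closed form agree, and only n = 2, 6, 10, … give non-zero coefficients.

 import math

 def f(x):
     # one period of the (assumed) triangular signal
     x = x % 2.0
     return x if x <= 1.0 else 2.0 - x

 def a(n, samples=100000):
     # a_n = (2/L) * integral over [0, L] of f(x) cos(n*pi*x/L) dx, with L = 2
     h = 2.0 / samples
     return sum(f((i + 0.5) * h) * math.cos(n * math.pi * (i + 0.5) * h / 2.0)
                for i in range(samples)) * h

 for n in range(1, 9):
     closed_form = (1 + (-1)**n) * (2.0/(n*math.pi) * math.sin(n*math.pi/2)
                                    + 4.0/(n*math.pi)**2 * (math.cos(n*math.pi/2) - 1))
     print(n, round(a(n), 5), round(closed_form, 5))   # nonzero only for n = 2, 6, ...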
Cool! You did a lot of work for the OP; it's always nice to see helpful people.--Gilisa (talk) 17:58, 6 October 2009 (UTC)[reply]
Given that cos(πn/2) is 0 for odd n, and (−1)^(n/2) for even n, the expression simplifies to a_n = (8/(n²π²)) ((−1)^(n/2) − 1) for even n and a_n = 0 for odd n, so only n ≡ 2 (mod 4) survive, giving

 ƒ(x) = 1/2 − (4/π²) Σ_{k≥0} cos((2k+1)πx)/(2k+1)²

(or something like that, anyway). — Emil J. 12:05, 7 October 2009 (UTC)[reply]
Check also Triangular-wave function. --131.114.72.230 (talk) 08:53, 7 October 2009 (UTC)[reply]