Wikipedia:Reference desk/Archives/Mathematics/2010 April 21

Welcome to the Wikipedia Mathematics Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


April 21

Teabags

The mass of tea in Supacuppa teabags has a normal distribution with mean 4.1 g and standard deviation 0.12 g. The mass of tea in Bumpacuppa teabags has a normal distribution with mean 5.2 g and standard deviation 0.15 g.

i) Find the probability that a randomly chosen Supacuppa teabag contains more than 4.0 g of tea [SOLVED: normalcdf(4.0,E99,4.1,0.12) = 0.798]

ii) Find the probability that out of two randomly chosen Supacuppa teabags, one contains more than 4.0g of tea and one contains less than 4.0g of tea. [SOLVED: normalcdf(4.0,E99,4.1,0.12) = 0.798, normalcdf(-E99,4.0,4.1,0.12) = 0.202, 0.798*0.202*2=0.323]
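(As a cross-check on the two solved parts: a minimal sketch in Python, assuming SciPy is available; the calculator call normalcdf(a,E99,μ,σ) corresponds to the survival function below.)

```python
from scipy.stats import norm

# Part (i): P(S > 4.0) for S ~ N(4.1, 0.12^2)
p_more = norm.sf(4.0, loc=4.1, scale=0.12)   # survival function = 1 - cdf
print(round(p_more, 3))                      # 0.798

# Part (ii): one bag above 4.0 g and one below, in either order
p_less = norm.cdf(4.0, loc=4.1, scale=0.12)
print(round(2 * p_more * p_less, 3))         # 0.323
```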

iii) Find the probability that five randomly chosen Supacuppa teabags contain a total of 20.8g of tea.

I tried normalcdf(20.8,E99,4.1,0.12), normalcdf(20.8,E99,20.5,0.12) and normalcdf(4.16,E99,4.1,0.12), but none of the three gives the correct answer. I just need a hint about which method to use.

iv) Find the probability that the total mass of tea in five randomly chosen Supacuppa teabags is more than the total mass of tea in four randomly chosen Bumpacuppa teabags.

I think once I figure out the method for iii) I can solve this, but I need your help for iii). If you ask me to do my own homework, note that I already solved i) and ii), and my teacher sucks at explaining all these concepts. —Preceding unsigned comment added by 166.121.36.232 (talk) 09:53, 21 April 2010 (UTC)

Do you know what the expectation of a sum of random variables is, and what the variance of a sum of independent random variables is? -- Meni Rosenfeld (talk) 13:30, 21 April 2010 (UTC)
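(For reference, the standard facts this hint is pointing at, stated for random variables X_1, …, X_n:)

```latex
E\!\left[\textstyle\sum_{i=1}^{n} X_i\right] = \sum_{i=1}^{n} E[X_i],
\qquad
\operatorname{Var}\!\left[\textstyle\sum_{i=1}^{n} X_i\right] = \sum_{i=1}^{n} \operatorname{Var}[X_i]
\quad \text{(the second requires independence).}
```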
It will likely help to actually define some random variables, and then translate your problems into the probabilities of various events. For instance, say that S is the mass of a randomly chosen Supacuppa teabag, and B is the mass of a randomly selected Bumpacuppa teabag; then the distributions of S and B are as stated in the statement of the problem. In part (i), you are being asked to calculate P(S > 4.0); I would generally recommend going through the motions of actually standardizing and expressing this probability in terms of the tail probability of a standard normal r.v. Z, though that's my own cup of tea (sorry for the terrible pun). In part (ii), you are asked to calculate P(S₁ > 4.0, S₂ < 4.0) + P(S₁ < 4.0, S₂ > 4.0), where S₁ and S₂ are two independent (i.e. randomly chosen) r.v.'s sharing the same distribution as S. The independence is a crucial assumption here, as it allows you to assert that P(S₁ > 4.0, S₂ < 4.0) = P(S₁ > 4.0)·P(S₂ < 4.0), which is as calculated in the OP.
For part (iii), let S₁, …, S₅ be five independent r.v.'s sharing the same distribution as S. Then the "total mass" of five randomly selected Supacuppa teabags has the same distribution as T = S₁ + ⋯ + S₅, and the requested probability is P(T = 20.8) (or perhaps P(T > 20.8), or even P(T ≥ 20.8); the wording on that part is a bit vague). However, the point is, you need to be able to determine the distribution of T; this information should have been given to you in your class, though you might review some of the important properties of the normal distribution (particularly, the first property there, and note that it may be generalized to any finite sum of independent normal r.v.'s). Once you have the distribution of T, the calculation of the probability is, in principle, extremely routine. Part (iv) can be done in a similar fashion. Nm420 (talk) 00:40, 23 April 2010 (UTC)
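(To make the method concrete once the distribution of T is in hand: a minimal Python sketch using SciPy, assuming part (iii) is meant as P(T > 20.8). A sum of independent normals is normal, with means and variances adding.)

```python
from math import sqrt
from scipy.stats import norm

# Part (iii): T = S1 + ... + S5 ~ N(5*4.1, 5*0.12^2)
mu_T = 5 * 4.1                     # 20.5
sd_T = sqrt(5) * 0.12              # sqrt(5 * 0.12^2)
print(norm.sf(20.8, loc=mu_T, scale=sd_T))   # P(T > 20.8) ~ 0.132

# Part (iv): D = (S1+...+S5) - (B1+...+B4) is normal with
# mean 5*4.1 - 4*5.2 and variance 5*0.12^2 + 4*0.15^2
mu_D = 5 * 4.1 - 4 * 5.2           # -0.3
sd_D = sqrt(5 * 0.12**2 + 4 * 0.15**2)
print(norm.sf(0.0, loc=mu_D, scale=sd_D))    # P(D > 0) ~ 0.228
```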

Wronskian and Linear Independence

The article on the Wronskian gives an example of two functions that are linearly independent yet have a Wronskian of zero. The second function used is defined as the negative of the first function for negative x and the first function for positive x. I was wondering if there is an example of two linearly independent and infinitely differentiable functions that have a Wronskian of zero. I would imagine that the second function would still have to be defined piecewise, with f2(x) = 0 for x <= 0 and f2(x) = f1(x) for x > 0, but I can't seem to make this second function infinitely differentiable. 173.179.59.66 (talk) 15:14, 21 April 2010 (UTC)

For instance, two non-vanishing, infinitely differentiable functions with disjoint supports have, of course, a vanishing Wronskian, though they are linearly independent. (Consider e.g. the smooth function f(x) defined here and the function f(2−x).) "Analytic" instead of "C^∞" works (precisely, if W(f,g) = 0 for two analytic functions defined on a connected domain, it follows that f and g are linearly dependent: indeed, either g vanishes identically, or f/g is a constant). --pma 15:25, 21 April 2010 (UTC)
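(A numerical sketch of this example in Python, assuming the linked smooth function is the standard bump exp(−1/(1−x²)) supported on (−1, 1), so that f(x) and f(2−x) have disjoint supports; the names here are illustrative.)

```python
import numpy as np

def bump(t):
    # smooth bump supported on (-1, 1): exp(-1/(1-t^2)) there, 0 elsewhere
    out = np.zeros_like(t)
    inside = np.abs(t) < 1
    out[inside] = np.exp(-1.0 / (1.0 - t[inside]**2))
    return out

x = np.linspace(-4, 6, 10001)
h = x[1] - x[0]
f = bump(x)              # supported on (-1, 1)
g = bump(2 - x)          # supported on (1, 3), disjoint from f's support
fp = np.gradient(f, h)   # numerical derivatives
gp = np.gradient(g, h)
W = f * gp - fp * g      # the Wronskian f*g' - f'*g
print(np.max(np.abs(W))) # ~0 everywhere (up to finite-difference error),
                         # yet f and g are linearly independent
```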

What came first? (trig + calc. related stuff)

So I've been looking into precalc/calc math a bit to see how things build upon each other, and I've kinda gotten a bit stuck. I've always been told the sine and cosine angle sum formulas without any reason as to why they are true. Progressing through calculus, I have seen their use in the derivations of the derivatives of sine and cosine through the limit definition of the derivative. Knowing those, one can derive the Maclaurin series for the sine and cosine. Rearrangement of the Maclaurin series for e^(ix) gives Euler's formula. Plugging x = a + b into Euler's formula can prove the sine and cosine sum formulas. Clearly, we're just going in a circle. Which of these steps came first? How did we know that the derivative of sine is cosine without knowledge of the sine and cosine sum formulas? Or, alternatively, how did we know of the sine and cosine double angle formulas without knowledge of the derivatives of sine and cosine? All the proofs that I've seen for these two things somehow have either gone back to calculus or gone back to the sum formulas, and I can't seem to find something to prove these concepts with the other known properties of the trig functions. How did these ideas come to be? — Trevor K. — 21:40, 21 April 2010 (UTC) —Preceding unsigned comment added by Yakeyglee (talkcontribs)

The sum-of-angles trig identities came first. See the image: x and y are angles, a through h are lengths. One length is assumed to be a unit; this will save a bit of writing. Now, by a simple cascade of sines and cosines through the right triangles in the figure (see the sketch after this post for one standard version), each length is expressed in terms of the previous ones, and finally:

sin(x + y) = sin x cos y + cos x sin y
cos(x + y) = cos x cos y − sin x sin y
HTH :) --CiaPan (talk) 22:14, 21 April 2010 (UTC)
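(The figure itself is not reproduced in this archive; the following is one standard way to run the cascade, decomposing a unit segment at angle x + y along the ray at angle x and its perpendicular — a sketch, not necessarily the exact labelling of a through h in the original image.)

```latex
% The unit segment at angle x+y has component \cos y along the ray at
% angle x (direction (\cos x, \sin x)) and component \sin y along the
% perpendicular direction (-\sin x, \cos x):
\begin{aligned}
(\cos(x+y),\; \sin(x+y))
  &= \cos y\,(\cos x,\; \sin x) + \sin y\,(-\sin x,\; \cos x) \\
  &= (\cos x\cos y - \sin x\sin y,\; \sin x\cos y + \cos x\sin y).
\end{aligned}
```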
The answer, I think, is "it depends". For a lot of professional mathematicians, the sine and cosine are defined as power series. The fact that they have a beautiful interpretation in terms of triangles is something that you have to prove from the definition. That's not too hard once you know about complex exponentials, and if you have complex exponentials, you can also prove the sum formulas.
Classically, there was no such thing as complex exponentials. The sine and cosine were defined in terms of triangles. Under this approach, you need to prove the sum formulas using triangles, as CiaPan just did for us above. Once you have that, you can take derivatives using the definition. Both of these approaches are logically correct. You have a good question, though: you noticed that we can't mix the two, or else we get a circular argument! Ozob (talk) 04:31, 22 April 2010 (UTC)
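(A small numerical illustration of the power-series viewpoint: define sine and cosine by truncated Maclaurin series and check the sum formula at one point — a sketch, with the truncation depth chosen arbitrarily.)

```python
import math

def sin_series(x, terms=20):
    # Maclaurin series: sin x = sum_k (-1)^k x^(2k+1) / (2k+1)!
    return sum((-1)**k * x**(2*k + 1) / math.factorial(2*k + 1) for k in range(terms))

def cos_series(x, terms=20):
    # Maclaurin series: cos x = sum_k (-1)^k x^(2k) / (2k)!
    return sum((-1)**k * x**(2*k) / math.factorial(2*k) for k in range(terms))

a, b = 0.7, 0.4
lhs = sin_series(a + b)
rhs = sin_series(a) * cos_series(b) + cos_series(a) * sin_series(b)
print(lhs, rhs)   # agree to machine precision
```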

I was fortunate in that in 8th and 9th grades I had a math teacher who was honest, as opposed to one who says "This is important material for you to learn; you'll understand why later" when the instructor saying that does not in fact understand. In 9th grade we went through careful geometric proofs of these identities. I would think those must be much older than calculus; I suspect that Regiomontanus knew these identities. Michael Hardy (talk) 22:00, 23 April 2010 (UTC)

The trigonometric sum formulas follow from Ptolemy's theorem. It is very old knowledge. Bo Jacoby (talk) 13:05, 27 April 2010 (UTC).
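(For completeness, one standard way to get the sine sum formula from Ptolemy's theorem — a sketch: in a circle of diameter 1, a chord subtending an inscribed angle θ has length sin θ.)

```latex
% Take a cyclic quadrilateral ABCD whose diagonal BD is a diameter,
% |BD| = 1, with \angle ABD = x and \angle DBC = y. The triangles
% ABD and CBD are right-angled at A and C, so
%   |AD| = \sin x,\quad |AB| = \cos x,\quad |DC| = \sin y,\quad |BC| = \cos y,
% and the chord AC subtends \angle ABC = x + y, so |AC| = \sin(x+y).
% Ptolemy's relation |AC|\,|BD| = |AB|\,|CD| + |AD|\,|BC| then reads
\sin(x + y) = \cos x \sin y + \sin x \cos y .
```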