Wikipedia:Reference desk/Archives/Mathematics/2007 January 11

Welcome to the Wikipedia Mathematics Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


January 11

Statistics & Propagation of error with least square fits

I originally came across this question my freshman year (at college), and although I am now a senior, I have yet to get a concrete answer. Basically, in my program (chemistry), we must do a lot of propagation of error, and there is one particular point that confuses me. Basically, if I plot a set of data that already has an associated error with it, how can that error be incorporated into the slope and intercept of a best-fit line? For example, if I plot absorbance vs. concentration of some solution, each point will have an error in the X direction (concentration, due to errors in preparation) and an error in the Y direction (absorbance, error and variability due to the instrument). If I calculate a best-fit line, how can I incorporate these errors with the normal errors generated with a best fit? Let me know if you don't understand and I will dig up an example. As a similar question, if I take a set of points with error and try to take an average, how can these errors be incorporated into the error (and/or standard deviation) of the average? Again, if you need an example, let me know. I would be happy with just the name of an equation, or a pointer to a page. I would be very grateful :) Thanks a lot --Bennybp 02:39, 11 January 2007 (UTC)[reply]

Well I sort of found an answer in Bayesian linear regression, although it looks much more complicated than I wanted it to be. Anything simpler? --Bennybp 03:06, 11 January 2007 (UTC)[reply]
If your data sets are humungous, incorporating the individual point errors will have only a minimal effect. Otherwise, one method you can use is to replace each data point in your data set by K copies, each of which is randomly perturbed by X and Y amounts having the known error distribution for that point. Then compute the mean and variance of the miraculously multiplied data set. The mean should be the same, but the variance should be larger. If N was the size of the original set and you would have used the bias correction for the variance estimator of replacing N by N−1 in the divisor, then now replace KN by KN−K. The same method can be used for least square fits; in fact, computing an average is a form of least square fit in one dimension. Unless you use a fixed pseudo-random sequence you should get slightly different results each time; if the differences are too large, then N was not large enough.  --LambiamTalk 05:54, 11 January 2007 (UTC)[reply]
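As a rough illustration of this replicate-and-perturb recipe, here is a minimal Python sketch, assuming Gaussian per-point errors and NumPy; the sample data, the value of K, and the function name inflated_stats are all made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def inflated_stats(x, sigma, K=10000):
    # replace every point by K copies perturbed with its own known error,
    # then estimate mean and variance using the K*N - K divisor suggested above
    x = np.asarray(x, dtype=float)
    sigma = np.asarray(sigma, dtype=float)
    N = x.size
    copies = x + rng.normal(size=(K, N)) * sigma   # K perturbed copies of each point
    flat = copies.ravel()
    mean = flat.mean()
    var = ((flat - mean) ** 2).sum() / (K * N - K)
    return mean, var

# three made-up measurements with individual uncertainties
print(inflated_stats([1.0, 1.2, 0.9], [0.1, 0.05, 0.2]))
```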
Depending on how variable the errors in the X variable are relative to the errors in Y, the standard least squares "best-fit" line may be a very poor fit. Our regression dilution article gives a good overview of the issues, with good references. You might also be interested in errors-in-variables models. -- Avenue 15:39, 11 January 2007 (UTC)[reply]
The average is easy: the variance of the sum (of independent variables) is the sum of the variances, and the standard deviation scales linearly with what it describes. Therefore the standard deviation of the average is, just as the average itself, the corresponding quantity for the sum divided by N: $\sigma_{\bar{x}} = \frac{1}{N}\sqrt{\sum_{i=1}^{N} \sigma_i^2} = \frac{1}{\sqrt{N}}\sqrt{\frac{1}{N}\sum_{i=1}^{N} \sigma_i^2}$. I separated the two factors of $\frac{1}{\sqrt{N}}$ to emphasize that the standard deviation of the average is the RMS of the individual standard deviations, divided by $\sqrt{N}$. It is that division which makes repeating a measurement worthwhile; the resulting error is $\sqrt{N}$ times better than the "average" errors of the individual measurements.
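For a quick numerical check of that identity, a minimal Python sketch (the three σ values are made up for illustration):

```python
import math

sigmas = [0.1, 0.05, 0.2]        # made-up individual standard deviations
N = len(sigmas)

sd_avg = math.sqrt(sum(s ** 2 for s in sigmas)) / N   # (1/N) * sd of the sum
rms = math.sqrt(sum(s ** 2 for s in sigmas) / N)      # RMS of the sigmas
print(sd_avg, rms / math.sqrt(N))                     # identical, as claimed
```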
Handling something like least-squares is more complicated, but it can be handled by general error propagation methods — see the first equation, for $\sigma_f$, in that article. You have two functions f for the slope and intercept of your line; if you do the fit with only symbolic values (this gets complicated for large N), you can then take the required partial derivatives and evaluate the total error. As a trivial example, consider just two points: obviously the line passes through both, so the slope is just $m = \frac{y_2 - y_1}{x_2 - x_1}$ and the intercept is $b = y_1 - m x_1$. The partial derivatives for m are
$\frac{\partial m}{\partial x_1} = \frac{m}{x_2 - x_1},\quad \frac{\partial m}{\partial x_2} = -\frac{m}{x_2 - x_1},\quad \frac{\partial m}{\partial y_1} = -\frac{1}{x_2 - x_1},\quad \frac{\partial m}{\partial y_2} = \frac{1}{x_2 - x_1};$
we can use those (with the chain rule) to easily evaluate the ones for b:
$\frac{\partial b}{\partial x_1} = -m - x_1\frac{\partial m}{\partial x_1} = -\frac{m x_2}{x_2 - x_1},\quad \frac{\partial b}{\partial x_2} = -x_1\frac{\partial m}{\partial x_2} = \frac{m x_1}{x_2 - x_1},\quad \frac{\partial b}{\partial y_1} = 1 - x_1\frac{\partial m}{\partial y_1} = \frac{x_2}{x_2 - x_1},\quad \frac{\partial b}{\partial y_2} = -x_1\frac{\partial m}{\partial y_2} = -\frac{x_1}{x_2 - x_1}.$
The total errors (using Greek letters for uncertainties for brevity, with $\xi_i$ the uncertainty in $x_i$ and $\eta_i$ the uncertainty in $y_i$) are then
$\sigma_m = \frac{1}{|x_2 - x_1|}\sqrt{m^2\xi_1^2 + m^2\xi_2^2 + \eta_1^2 + \eta_2^2},\qquad \sigma_b = \frac{1}{|x_2 - x_1|}\sqrt{m^2 x_2^2\xi_1^2 + m^2 x_1^2\xi_2^2 + x_2^2\eta_1^2 + x_1^2\eta_2^2}.$
Throwing in numbers, I get a much higher uncertainty for b than for m; it comes from the distance the points are from the y-axis. Perhaps a more useful measure would be the uncertainty at the midpoint, which is $\frac{1}{2}\sqrt{m^2\xi_1^2 + m^2\xi_2^2 + \eta_1^2 + \eta_2^2} = \frac{|x_2 - x_1|}{2}\,\sigma_m$. (I doubt it's a coincidence that with two points the uncertainty at the middle is just the distance to the middle times the uncertainty in the slope.) Obviously this algebraic approach would become tedious for N any bigger, but one can probably make it somewhat easier using the linear algebra approach to LSF and a CAS. Does this help? --Tardis 17:06, 11 January 2007 (UTC)[reply]
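A minimal sketch of the same two-point propagation done symbolically, assuming SymPy and first-order error propagation (symbol names follow the derivation above; everything else is illustrative):

```python
import sympy as sp

x1, x2, y1, y2 = sp.symbols('x1 x2 y1 y2')
xi1, xi2, eta1, eta2 = sp.symbols('xi1 xi2 eta1 eta2', positive=True)

m = (y2 - y1) / (x2 - x1)   # slope through the two points
b = y1 - m * x1             # intercept

def propagated(f):
    # first-order propagation: sigma_f^2 = sum over variables of (df/dv)^2 * sigma_v^2
    pairs = [(x1, xi1), (x2, xi2), (y1, eta1), (y2, eta2)]
    return sp.sqrt(sum(sp.diff(f, v) ** 2 * s ** 2 for v, s in pairs))

sigma_m = sp.simplify(propagated(m))
sigma_b = sp.simplify(propagated(b))
print(sigma_m)   # matches the sigma_m expression above
print(sigma_b)   # matches the sigma_b expression above
```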
That all seems like a big help to me. I didn't think of applying propagation of uncertainty to the linear regression equation. And I usually get everything mixed up anyway (error due to measurements, standard deviation, etc.). Thanks a lot :) --Bennybp 00:47, 12 January 2007 (UTC)[reply]

Tetrahedron fractal

Two iterations of the "Koch cube"

My math teacher has two three-dimensional fractals in her room. One is a Sierpinski tetrahedron and the other is constructed by adding a "pimple" (a smaller tetrahedron with a face area equal to one fourth of the first tetrahedron's face) to another tetrahedron. (I should probably link to tetrahedron). It is the second of these "fractals" I am interested in. The way this one works is that the "pimple" tetrahedrons are added to each equilateral triangle face until the "fractal" begins to occupy more and more of a cube (the first stage is the stella octangula). Does anyone know what this construction is called (or understand all of these obfuscatory parentheticals)? - AMP'd 02:49, 11 January 2007 (UTC) P.S. - Is there an award for most confusing ref. desk question?[reply]

Seems to be a 3D version of Koch's snowflake — Kieff 04:08, 11 January 2007 (UTC)[reply]
Thanks. I have asked my math teacher and she does not actually know what it is called (her husband made both fractals), so if anyone cares, I will tell you when I find out. --AMP'd 20:54, 11 January 2007 (UTC)[reply]
Here's a couple of iterations of said fractal. It's related to the stella octangula, like you said, though that hasn't helped me find a name for it. So I'm calling it the Koch cube for now. — Kieff 21:57, 11 January 2007 (UTC)[reply]
Exactly! Someone needs to write an article once I figure out the name of the fractal. (Hopefully it will be in English and not math-speak.)--AMP'd 01:34, 12 January 2007 (UTC)[reply]
You seem to have a missing closing parenthesis. – b_jonas 08:54, 11 January 2007 (UTC)[reply]

I think the name for it is "cube". —David Eppstein 05:06, 12 January 2007 (UTC)[reply]

Touché... But not quite. The cube has finite area and a volume of side³. This one has infinite area and a different volume, which I'm too lazy to figure out right now. Kieff 06:09, 12 January 2007 (UTC)[reply]
I suggest you do the calculation.--80.136.169.222 11:20, 12 January 2007 (UTC)[reply]
Yeah, on second thought, the volume seems to be the same as that of the cube. The area is $A_0\left(\frac{3}{2}\right)^n$ for the n-th iteration, where $A_0$ is the surface area of the original tetrahedron. — Kieff 11:30, 12 January 2007 (UTC)[reply]
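A small Python sketch that checks both figures, assuming the construction adds one half-scale (quarter-face-area) tetrahedron to every face at each iteration, as described above; exact arithmetic via fractions:

```python
from fractions import Fraction

A0 = Fraction(1)                  # surface area of the seed tetrahedron (units of A0)
V0 = Fraction(1)                  # volume of the seed tetrahedron (units of V0)
V = V0
for n in range(1, 60):
    pimples = 4 * 6 ** (n - 1)    # one new pimple per face of the previous stage
    V += pimples * V0 / 8 ** n    # each pimple is a half-scale tetrahedron: volume V0 / 8**n
    A = A0 * Fraction(3, 2) ** n  # every face becomes six quarter-size faces
print(float(V))                   # -> 3.0, i.e. 3*V0, the volume of the circumscribing cube
print(float(A))                   # -> huge: the area diverges like (3/2)**n
```

The limit 3·V0 is indeed the cube's volume, since a regular tetrahedron inscribed in a cube occupies one third of it.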
Consider a sequence of curves from (0,0) to (1,1) following only horizontal and vertical segments of shorter and shorter lengths: (0,0)-(0,1)-(1,1), (0,0)-(0,1/2)-(1/2,1/2)-(1/2,1)-(1,1), etc. If you calculate the length at any step of this sequence, it is 2, but the limiting curve is a straight line segment which itself has length sqrt(2). The same principle applies here: the closure of the limiting shape really is just a cube, so its surface area is the surface area of a cube, regardless of what the limit of the tetrahedral approximations' areas is. However, if you instead view each tetrahedral approximation as an open set and don't take the closure, the eventual shape is missing some interior points of the cube; the missing points appear to have fractal dimension $\log_2 6$ so it doesn't make sense to talk about their surface area. —David Eppstein 16:52, 12 January 2007 (UTC)[reply]
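The staircase example is easy to check numerically; a minimal Python sketch (the function name is illustrative):

```python
import math

def staircase_length(k):
    # axis-aligned staircase from (0,0) to (1,1) with 2**k steps
    step = 1 / 2 ** k
    return 2 ** k * (step + step)   # one horizontal and one vertical run per step

for k in range(6):
    print(k, staircase_length(k))   # always exactly 2.0, for every k
print(math.sqrt(2))                 # yet the limiting curve has length ~1.41421
```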
Hmmm, I see your point. Since I'm not formally trained (yet) on the subject, I guess I'll make such mistakes. Thanks for the explanation (and more info is welcome!) — Kieff 21:41, 12 January 2007 (UTC)[reply]
I will give you what I gave Kieff, who at one point was the only one who was coming up with content: [1]. So is it just the "Koch Cube?" My math teacher is working on getting back to me. --AMP'd 20:32, 12 January 2007 (UTC)[reply]
No, it's not the "Koch cube", it's just the usual cube. Your own reference [2] agrees with David Eppstein that the limiting shape is a standard cube. --mglg(talk) 21:24, 12 January 2007 (UTC)[reply]
I fixed the description page for the image to mention this, by the way. — Kieff 21:56, 12 January 2007 (UTC)[reply]
OK! Thanks for all the help, smart math people... --AMP'd 00:37, 13 January 2007 (UTC)[reply]

BTW: the Hilbert curve is another example of a "fractal" object where the interesting structure cannot be seen in the corresponding point set.--80.136.167.106 10:05, 13 January 2007 (UTC)[reply]

discount

send sample questions on calculation of cash discount —The preceding unsigned comment was added by 62.24.113.179 (talk) 06:40, 11 January 2007 (UTC).[reply]

An item is listed at 10.00 (choose your own currency). What would you pay if offered a discount of (a) 0% (b) 100% (c) 110%? Do you want answers as well as questions? 81.154.107.5 11:59, 11 January 2007 (UTC)[reply]
A sweater originally costing 1000 rupees (or whatever currency you desire) is marked down 20% during a sale. No one buys this sweater (it must be very ugly!) so it is marked down an additional 10% from the sale price. a) What is the final clearance price of the sweater? b) What is the total percent discount on the sweater? -sthomson06 (Talk) 16:56, 11 January 2007 (UTC)[reply]
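A possible worked answer to that last exercise, as a short Python sketch (the prices are the ones from the question):

```python
price = 1000.0                     # original price, in rupees
sale = price * (1 - 0.20)          # 20% markdown: 800.0
clearance = sale * (1 - 0.10)      # further 10% off the sale price: 720.0
total_discount = 1 - clearance / price
print(clearance, total_discount)   # 720.0 and 0.28, i.e. 28% overall, not 30%
```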

Poisson question

I cannot figure out how to get the correct answer for this question.

"Between 2 and 4pm the average number of calls coming into a switchboard is 2.5 per min. Find the probability there will be 20 calls during one ten minute period."

I take lambda to be 25 (2.5 * 10) and r to be 20. Therefore I do the following calculation: e^-25 * (25^20/20!). This gives me an answer of 0.0519. However, the answer quoted in the answer section is 1.209×10^-6.

Thanks in advance,

StatsLover. —The preceding unsigned comment was added by 90.193.219.155 (talk) 14:09, 11 January 2007 (UTC).[reply]

  • I think the answer quoted in your answer section is highly suspicious, and that the answer you got looks much more plausible. The expected number of phone calls is 2.5*10=25, and you are asked to find the probability of 20 calls. That should be a reasonably high probability, yet your quoted answer says that the probability is about one-in-a-million. To me that answer sounds plainly unreasonable and I suspect a typo in your book. Sjakkalle (Check!) 15:21, 11 January 2007 (UTC)[reply]
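For what it's worth, the poster's arithmetic is easy to confirm with a short Python check:

```python
import math

lam, k = 2.5 * 10, 20                              # rate over ten minutes, observed count
p = math.exp(-lam) * lam ** k / math.factorial(k)  # Poisson pmf at k
print(p)                                           # about 0.0519, as the poster computed
```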
The problem is now: how did they manage to arrive at that wrong answer? My first thought was some error in computing λ, for instance by taking 2 minutes instead of 10, but the probability 1.209·10^-6 corresponds to λ = 5.5438, and there is no rational way of getting that out of 2.5.  --LambiamTalk 18:01, 11 January 2007 (UTC)[reply]
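One way to recover that λ is to invert the Poisson pmf numerically; a minimal sketch, assuming bisection on the rising branch of the pmf below the mode (the bounds and iteration count are illustrative):

```python
import math

def poisson_pmf(lam, k=20):
    return math.exp(-lam) * lam ** k / math.factorial(k)

target = 1.209e-6
lo, hi = 1.0, 20.0          # the pmf at k=20 increases with lambda on (0, 20)
for _ in range(100):
    mid = (lo + hi) / 2
    if poisson_pmf(mid) < target:
        lo = mid
    else:
        hi = mid
print(lo)                   # about 5.5438, as Lambiam says
```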
Yeah, that's pretty weird. Maybe multiply λ by two and add 0.5438? =) –King Bee (TC) 18:15, 11 January 2007 (UTC)[reply]
One very common way of getting an inexplicably wrong answer in the answer section is to change the question but forget to change the corresponding answer. Sjakkalle (Check!) 07:11, 12 January 2007 (UTC)[reply]

Energy dissipation/viscosity in lagrangian mechanics.

Does anyone know how to formulate an artificial viscosity term in fluid dynamics via lagrangians? I understand it can be done if the forces are trivial, but what about more complicated systems? If anyone knows of any books that might shed some light on this, I would be most grateful. —The preceding unsigned comment was added by 86.145.254.48 (talk) 19:33, 11 January 2007 (UTC).[reply]

You may want to try this at the science reference desk instead. --mglg(talk) 00:55, 16 January 2007 (UTC)[reply]

Fractional reserve banking application

Suppose I have a series of terms, where each term is $C(1-r)^n$;

where n is the number of terms starting from zero, r is a fractional constant (the reserve ratio), and C is a constant (the initial deposit).

How do I find the sum of the series as n approaches infinity? I know the answer is C/r, but would like to know how to derive it.

Jonpol 19:34, 11 January 2007 (UTC)[reply]

With your permission, I will disregard the financial motivation for this question and focus on the mathematical issue. You want to compute the sum of an infinite geometric series, $S = a + aq + aq^2 + aq^3 + \cdots$, where |q| < 1. Assuming that the sum exists to begin with (proving which would require some extra work), multiply this equality by q. You get $Sq = aq + aq^2 + aq^3 + \cdots$. On the other hand, subtract a from the equality, and you get $S - a = aq + aq^2 + aq^3 + \cdots$. These two expressions are the same, so you have $Sq = S - a$, or $S(1 - q) = a$, which gives $S = \frac{a}{1 - q}$. In your case, $a = C$ and $q = 1 - r$, so the sum is $\frac{C}{1 - (1 - r)} = \frac{C}{r}$. -- Meni Rosenfeld (talk) 20:06, 11 January 2007 (UTC)[reply]
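A quick numerical sanity check of the partial sums, as a minimal Python sketch (C and r are made-up illustrative values):

```python
C, r = 100.0, 0.1            # initial deposit and reserve ratio (illustrative)
total, term = 0.0, C
for _ in range(1000):        # partial sums of C*(1-r)**n for n = 0, 1, 2, ...
    total += term
    term *= 1 - r
print(total, C / r)          # both print (essentially) 1000.0
```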
Thanks Meni for the simplification; just so you know, I am not in banking or finance. Jonpol 21:38, 11 January 2007 (UTC)[reply]

All 30 DJIA stocks posting gains/losses

Assuming 0 correlation (which is false, but assume so in order to maintain the index's "diversified" nature) the probability of all 30 stocks moving one direction in one day is 1/(2^29), correct? Assuming this, has this ever happened? If so, has it happened multiple times, or more times than probability expects? Sorry for all the conditional statements, but after searching google for a while I cannot find any evidence of ALL 30 Index stocks moving the same direction! Thanks, 140.180.21.169 21:57, 11 January 2007 (UTC)[reply]

If you assume zero correlation, you would expect this to happen once every 2^29 trading days, which (assuming 250 trading days in a year) is once every 2,147,483 years -- so if it has ever happened, even once, it has happened far more often than expected. A Google search for "all 30 dow components" turns up some interesting results: All 30 stocks went up in one day and went down in one day just last year. So not surprisingly, these stock movements are highly correlated. Dave6 08:13, 12 January 2007 (UTC)[reply]
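The arithmetic behind these figures, as a one-off Python check:

```python
p = 2 * 0.5 ** 30     # all 30 up, or all 30 down, assuming zero correlation
days = 1 / p          # expected wait: 2**29 = 536,870,912 trading days
print(days / 250)     # about 2,147,483 years at 250 trading days per year
```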
The fact that they are linked means this has happened, see Black Monday (1987) or Wall Street Crash of 1929 for example. yandman 08:02, 12 January 2007 (UTC)[reply]

Clearly such days were immense losses for the market, but am I wrong in assuming that at least ONE of the components should move positive, even in crashes, or negative in booms? If not, the correlation must be much higher than I assumed. (On a side note of curiosity, are there stats for the actual correlation coefficient between the 30?) 140.180.4.46 17:00, 12 January 2007 (UTC)

The correlation is even more subtle than that; there are more refined ways of characterizing financial time series. --HappyCamper 13:33, 13 January 2007 (UTC)[reply]

Such as? 140.180.1.250 17:41, 15 January 2007 (UTC)[reply]