Wikipedia:Reference desk/Archives/Mathematics/2006 December 9

Mathematics desk
Welcome to the Wikipedia Mathematics Reference Desk Archives
The page you are currently viewing is a transcluded archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


December 9

Rational denominators

Suppose I had a number like $\frac{1}{1+\sqrt{2}+\sqrt{3}+\sqrt{5}}$

What's a good algorithm to use to express this number so that the denominator is a rational number? The numerator can be as messy as we like. --HappyCamper 02:57, 9 December 2006 (UTC)

I'm sure this isn't the best way, but it is a start. I think you can rewrite that fraction as $\frac{1}{(1+\sqrt{2})+(\sqrt{3}+\sqrt{5})}$.
Then you rationalize that denominator by multiplying top and bottom by $(1+\sqrt{2})-(\sqrt{3}+\sqrt{5})$; you'll end up with only two square root terms in the denominator instead of three, and you can rinse and repeat until all are gone. Maybe it's possible to multiply by something to get rid of it all at once? 137.99.174.5 04:19, 9 December 2006 (UTC)
The numerator doesn't have to be very messy. The number is an element of the field $\mathbb{Q}(\sqrt{2},\sqrt{3},\sqrt{5})$, so it can be expressed as
$a_0 + a_1\sqrt{2} + a_2\sqrt{3} + a_3\sqrt{5} + a_4\sqrt{6} + a_5\sqrt{10} + a_6\sqrt{15} + a_7\sqrt{30}$
where all the $a$s are rational numbers. As for how to get those coefficients, the most straightforward way is to keep multiplying by conjugates until all the roots disappear, i.e., multiply by
$\frac{1+\sqrt{2}+\sqrt{3}-\sqrt{5}}{1+\sqrt{2}+\sqrt{3}-\sqrt{5}}$
to get rid of all the $\sqrt{5}$s, leaving you with the denominator
$1+2\sqrt{2}+2\sqrt{3}+2\sqrt{6}$,
where the $\sqrt{6}$ pops up unavoidably because it's in the field generated by $\sqrt{2}$ and $\sqrt{3}$. But don't worry, because the conjugate is easily obtained by negating everything with a multiple of three under the radical, $1+2\sqrt{2}-2\sqrt{3}-2\sqrt{6}$,
leaving you with the denominator $-27-20\sqrt{2}$; one more conjugate, $-27+20\sqrt{2}$, turns it into the rational number $-71$. This process is tedious but it's guaranteed to work and requires no ingenuity. —Keenan Pepper 05:23, 9 December 2006 (UTC)
You can treat this somewhat like partial fractions: write the three surds as $x_1=\sqrt{2}$, $x_2=\sqrt{3}$, $x_3=\sqrt{5}$, and then write the answer as a combination $\sum_{ijk} B_{ijk}\,x_1^i x_2^j x_3^k$. We can restrict $i$, $j$, and $k$ to $\{0,1\}$ because the square of one of our variables is a rational number and is thus largely equivalent to that variable to the 0th power; in a sense, the powers of this polynomial live in $(\mathbb{Z}/2\mathbb{Z})^3$. Then clear the denominator; the factors of 2, 3, and 5 show up because of the imperfect wrap-around. But we then know that the resulting coefficients must form a discrete delta (see discrete delta), since the left hand side is precisely 1 and has no radicals. Choosing some ordering for the eight monomials, we can write the right-hand side as $\delta_{n0}$ (for whichever $n$ corresponds to the monomial 1) and it becomes a standard matrix equation. For example, taking the lexicographic ordering of the bitstrings "001", "010", etc., I find an 8×8 system
with a rational solution vector. If we generalize the problem to include other coefficients than 1, we can just add those to our expression for B; however, other generalizations are harder. For instance, if we had $\sqrt{2}$, $\sqrt{3}$, and $\sqrt{6}$, we would have to worry about cancellation between products of the first two radicals and the third alone (that is, multiplied by 1): it might do, however, to just identify the corresponding coefficients (and similarly for b) and so remove some equations. I don't know that this method is particularly practical, but I certainly find it interesting. --Tardis 06:27, 9 December 2006 (UTC)
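Tardis's matrix method can be spelled out concretely. Here is a Python/sympy sketch, using the denominator reconstructed above; the basis bookkeeping and all names are mine, not from the thread:

# A sketch of the matrix method for 1/(1 + sqrt2 + sqrt3 + sqrt5). Elements of
# Q(sqrt2, sqrt3, sqrt5) are vectors over the basis sqrt(2^i * 3^j * 5^k) with
# (i, j, k) in {0,1}^3, and "multiply by the denominator" is a linear map.
from itertools import product
from sympy import Matrix, Rational

basis = list(product((0, 1), repeat=3))      # exponent bitstrings (i, j, k)
primes = (2, 3, 5)

def mul_matrix(d):
    """8x8 matrix of multiplication by d, where d maps basis triples to rationals."""
    m = [[Rational(0)] * 8 for _ in range(8)]
    for col, b in enumerate(basis):
        for a, coeff in d.items():
            factor = Rational(1)
            for bit_a, bit_b, p in zip(a, b, primes):
                if bit_a and bit_b:          # sqrt(p) * sqrt(p) contributes a factor p
                    factor *= p
            row = basis.index(tuple(x ^ y for x, y in zip(a, b)))
            m[row][col] += coeff * factor
    return Matrix(m)

d = {(0, 0, 0): Rational(1), (1, 0, 0): Rational(1),   # 1 + sqrt2 + sqrt3 + sqrt5
     (0, 1, 0): Rational(1), (0, 0, 1): Rational(1)}
coeffs = mul_matrix(d).solve(Matrix([1, 0, 0, 0, 0, 0, 0, 0]))  # solve d*x = 1
print(dict(zip(basis, coeffs)))              # rational coefficients of 1/d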
When you expand $(A+B+C+D)(A+B+C-D)(A+B-C+D)(A+B-C-D)(A-B+C+D)(A-B+C-D)(A-B-C+D)(A-B-C-D)$ you get a (symmetric) polynomial in the squares $A^2$, $B^2$, $C^2$, and $D^2$. So multiply numerator and denominator both by all factors of sums of 1 plus-or-minus these surds with at least one minus. This is essentially the method of multiplying by conjugates suggested by Keenan Pepper, but requires even less exercise of one's grey brain matter.  --LambiamTalk 09:56, 9 December 2006 (UTC)
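As a cross-check, a computer algebra system will do the whole conjugation dance mechanically; a short sketch with sympy, again using the denominator assumed above:

# radsimp rationalizes denominators containing square roots (its default
# max_terms=4 covers this four-term denominator)
from sympy import sqrt, radsimp

expr = 1 / (1 + sqrt(2) + sqrt(3) + sqrt(5))
print(radsimp(expr))   # messy numerator, rational denominator, as predicted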
Thanks for the responses, guys. Tardis's one reminds me very much of digital communications systems. Quite creative. --HappyCamper 02:44, 10 December 2006 (UTC)

Doodles, recursive curves?

I was playing around the other day, and I came up with an interesting little thing:

Plot three points somewhere on a coordinate plane. Numbering them A B and C, with B being the one in the "middle", find the midpoints of AB and BC and name them D and E. Draw AD, DE, EC. Find the midpoints of AD, DE and EC (F, G, H) and draw AF, FG, GH and HC.

If you keep doing this, you get something that looks more and more like a smooth curve, anchored at A and C and "pointing" toward B. My question is: can this be described with a function? Since it can be defined recursively, is it possible to write a function that models the "limit" of the recursion? BungaDunga 05:04, 9 December 2006 (UTC)
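The construction is easy to experiment with numerically; a minimal Python sketch of the iteration as described (function names are mine):

# Start with the polyline A-B-C and repeatedly replace the interior with the
# midpoints of consecutive segments, keeping the endpoints A and C.
def subdivide(pts):
    mids = [((x0 + x1) / 2, (y0 + y1) / 2) for (x0, y0), (x1, y1) in zip(pts, pts[1:])]
    return [pts[0]] + mids + [pts[-1]]

poly = [(0.0, 1.0), (0.0, 0.0), (1.0, 0.0)]   # A, B (the "middle" one), C
for _ in range(8):
    poly = subdivide(poly)
print(poly[len(poly) // 2])   # the middle point of the polyline after 8 steps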

My guess is that the limit exists and is some sort of a curve on a circle or an ellipse. If you take points A, B, C to be from an equilateral triangle, the first iteration gives you a portion of a hexagon. The next gives you a section of a 12-gon. Then a 24-gon, and so on. I'm not sure how to make this rigorous though. --HappyCamper 05:22, 9 December 2006 (UTC)
I didn't think of that- very interesting. One difficulty I think is that the number of points flips from odd to even and back again, so the point closest to B only changes every other iteration. I think I managed to find a parametric equation that looked like it found the point nearest to A or B given the number of iterations; I don't remember it though, I'll have to find the bit of paper I worked it out on. --BungaDunga 05:36, 9 December 2006 (UTC)
Maybe I'm wrong, but it seems like what you get at the end may be a Bézier curve (look at the animations near the middle). --Daniel Olsen 05:44, 9 December 2006 (UTC)
Hmm. This construction looks very much like what I did, and it says it produces a quadratic Bézier curve. The construction looks slightly different, but I'm half-convinced it's functionally identical. --BungaDunga 06:09, 9 December 2006 (UTC)
I'm certain you're describing a Bézier curve, though I'm not sure you've worded it correctly. Given a number of points A, B, C, D, you find the midpoint of each consecutive pair: AB to E, BC to F, CD to G; then find the midpoints of those, H and I; then the midpoint of that pair, and you've found the midpoint J of your Bézier curve. Replace "midpoint" with some other ratio, e.g. 10%, and you will find the point 10% along the curve, etc. - 07:57, 9 December 2006 (UTC)
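In code, that repeated-interpolation recipe is de Casteljau's algorithm; a small Python sketch (assuming 2-D points as tuples):

def de_casteljau(points, t=0.5):
    """Repeatedly interpolate between consecutive control points; the final
    point lies on the Bezier curve at parameter t (t = 0.5 is the 'midpoint'
    version described above)."""
    pts = list(points)
    while len(pts) > 1:
        pts = [((1 - t) * x0 + t * x1, (1 - t) * y0 + t * y1)
               for (x0, y0), (x1, y1) in zip(pts, pts[1:])]
    return pts[0]

# the point 10% of the way along the cubic with control points A, B, C, D
print(de_casteljau([(0, 0), (1, 2), (3, 2), (4, 0)], t=0.1))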
As described, this is almost the midpoint subdivision algorithm for a quadratic Bézier curve. More generally, it is the beginning of an exploration of "corner cutting" and "stationary subdivision". Professional computer graphics today uses subdivision surfaces extensively. One of Pixar's founders, Ed Catmull, was a pioneer in the use of subdivision, and one of its researchers, Tony DeRose, along with his student Charlie Loop, made important later contributions. (So have many others, including Jos Stam.)
A popular modern way to approach Bézier curves, and especially their generalization to B-spline curves, is through polar forms, often known by the evocative name Lyle Ramshaw once suggested, "blossoms". (His influential paper, "Blossoming: A connect-the-dots approach to splines", has lovely illustrations by one of our contributors, Jorge Stolfi.) Given points A, B, and C (in that order), the degree-2 Bézier curve they determine is $\mathbf{B}(t) = (1-t)^2 A + 2t(1-t)B + t^2 C$.
The polar form of a degree-n polynomial in one variable is a polynomial in n variables, linear in each. (The idea originates with "polarities" in geometry, the seminal example being the polarization of Pythagoras' $x^2+y^2$ to produce the dot product $x_1x_2+y_1y_2$.) An algorithmic embodiment of the polar form is a slight generalization of de Casteljau's algorithm, which for a quadratic looks like this: $P_2(t_1,t_2) = (1-t_2)\,[(1-t_1)A + t_1B] + t_2\,[(1-t_1)B + t_1C]$.
When $t$ is $\tfrac{1}{2}$, we are splitting each line segment at its midpoint. The process actually proposed, however, is not evaluation, but resembles subdivision. That is, we take the original two legs and split repeatedly to get many legs. And while it may not be immediately obvious, it is well-known that, properly done, this process converges to the curve. An important early example is Chaikin's algorithm.
For one kind of analysis, it is convenient to view a Bézier curve as a special case of a B-spline curve, and de Casteljau's algorithm as a special case of de Boor's algorithm. The control points of a curve parameterized over the unit interval $[0,1]$ are given by the three polar form values $P_2(0,0)$, $P_2(0,1)$, and $P_2(1,1)$. (Verify!) The control points for the portion of the curve from $a$ to $b$ are, similarly, $P_2(a,a)$, $P_2(a,b)$, and $P_2(b,b)$. The midpoint subdivision essentially splits the whole curve into two halves, then splits those, and so on.
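For the quadratic case that subdivision step is just three midpoints; a Python sketch of one split (the point representation is mine):

def split_quadratic(a, b, c):
    """Midpoint subdivision of the quadratic Bezier with control points a, b, c:
    returns control points for the halves over [0, 1/2] and [1/2, 1], i.e.
    (P2(0,0), P2(0,1/2), P2(1/2,1/2)) and (P2(1/2,1/2), P2(1/2,1), P2(1,1))."""
    mid = lambda p, q: ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)
    ab, bc = mid(a, b), mid(b, c)
    m = mid(ab, bc)            # the point on the curve at t = 1/2
    return (a, ab, m), (m, bc, c)

left, right = split_quadratic((0, 0), (1, 2), (2, 0))
print(left, right)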
For subdivision surfaces, the converged result at "extraordinary points" cannot be described by a polynomial. The pivotal insight for analyzing the limit surface is to express stationary subdivision in matrix terms, then consider eigenvalues. Of course, we can do the same thing for curves.
I understand such mathematics is much too "applied" for some tastes, though historically great minds like Archimedes, Euler, and Gauss have thrived on applications. So let us close with the observation that barycentric subdivision, a purely topological process, can be used to obtain the nerve of a simplicial set, which is pure abstract nonsense. --KSmrqT 02:00, 10 December 2006 (UTC)
No, that's not the same algorithm. The difference between Chaikin's algorithm and the method proposed above is that in Chaikin's algorithm part of each original line segment remains; in BungaDunga's method the original line segment is completely discarded. As a result the process doesn't converge to a true limit curve but simply degenerates. For example, consider the process on a closed polygon such as a square; you just get nested squares and diamonds shrinking down to the centre point. Similarly, as explained below, on a line segment with a control point the process just converges down to the line segment itself.
Nice research, though. EdC 14:53, 10 December 2006 (UTC)
Thanks. I agree the proposed method is not Chaikin's algorithm. In fact, I was hinting that you might want to try eigenvalue analysis as an alternative or supplement to your previous approach.
The real payoff is for surfaces. For many years the standard surfaces used in everything from aircraft and auto body design to quality animation were polynomial parametric surfaces. There has also been a limited use of implicit surfaces, including quadrics and "blobbies" and an occasional convolution surface; but these are awkward for "sculpted surfaces". Benefits of subdivision surfaces were visible in an early animation from Symbolics called The Little Death, but the work of Loop and Stam was pivotal in bringing them under control. Sadly, the desktop computer graphics in games still clings to polygonal models, partly for performance reasons. But ever since the Academy Award-winning Geri's Game, Pixar (which always insisted on true curved surfaces) has used subdivision surfaces. Subdivision's success in the CAD industry is still uncertain. --KSmrqT 20:40, 10 December 2006 (UTC)
No, that's not a Bézier curve; in the divide-and-conquer Bézier method the midpoints drawn lie on the curve. That isn't the case here; the iteration always draws the new lines beyond the previous cycle's midpoints.
Actually, this iteration does tend to a curve, and one with a very simple equation, but it's probably not the result you were hoping for; it tends to the line AC.
Argument: consider the case where A, B and C are (1,0), (0,0) and (0,1) respectively. By symmetry under skewing, scaling and rotation, finding the limit for this configuration gives the limit in all cases.
Write each successive polyline as a matrix whose two rows are the x- and y-coordinates of its points.
Then ADEC $= \begin{pmatrix}1 & \tfrac{1}{2} & 0 & 0\\ 0 & 0 & \tfrac{1}{2} & 1\end{pmatrix}$;
AFGHC $= \begin{pmatrix}1 & \tfrac{3}{4} & \tfrac{1}{4} & 0 & 0\\ 0 & 0 & \tfrac{1}{4} & \tfrac{3}{4} & 1\end{pmatrix}$
Note that the two rows of the matrix are the same in reverse; note also that we can multiply through by an appropriate power of 2 to get integers throughout.
Denote the multiplied bottom row by $r_n$; we have:
$r_0 = (0,0,1)$, $r_1 = (0,0,1,2)$, $r_2 = (0,0,1,3,4)$, $r_3 = (0,0,1,4,7,8)$, $r_4 = (0,0,1,5,11,15,16)$, ...
This is, of course, A008949 (discarding the initial zeros).
Now consider the midpoints of the even rows: 0, 1, 5, 22, etc. By the symmetry mentioned above, these get drawn as points B=(0,0), G=(1/4, 1/4), (5/16, 5/16), (22/64, 22/64), etc. If we can find the limit of this sequence of points on y=x, this will tell us plenty about the curve.
Well, I can't work out the reason, but plug that sequence into OEIS and you get A000346, which has closed-form formula $a(n) = 2^{2n+1} - \binom{2n+1}{n+1}$. Divide by the corresponding power of two, $4^{n+1}$, and get $\tfrac{1}{2} - \binom{2n+1}{n+1}/2^{2n+2}$, which tends to 1/2. Thus the midpoint tends to the midpoint of AC, so the iterated curve has AC as its limit.
Now, if I could only work out why the second-left-of-middle column of A008949 is A000346... EdC 18:34, 9 December 2006 (UTC)
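EdC's closed form can be checked against a direct simulation; a Python sketch (the subdivide step is the same corner-cutting iteration described at the top of the thread):

from math import comb

def subdivide(pts):
    mids = [((x0 + x1) / 2, (y0 + y1) / 2) for (x0, y0), (x1, y1) in zip(pts, pts[1:])]
    return [pts[0]] + mids + [pts[-1]]

poly = [(1.0, 0.0), (0.0, 0.0), (0.0, 1.0)]      # A, B, C as in the argument above
for step in range(1, 9):
    poly = subdivide(poly)
    if step % 2 == 0:                            # even rows have a single middle point
        n = step // 2 - 1                        # A000346 offset: steps 2,4,6,8 -> n = 0,1,2,3
        predicted = (2**(2*n + 1) - comb(2*n + 1, n + 1)) / 4**(n + 1)
        print(step, poly[len(poly) // 2], predicted)   # 1/4, 5/16, 22/64, 93/256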
After playing a little more, another fun result: if we take the coordinates and calculate the area of the triangles being chopped off each time, we get:
1/8
1/32 1/32
1/128 3/128 1/128
1/512 6/512 6/512 1/512
This is (ignoring the denominators) the triangle of Narayana numbers (prove it!) with row sums the Catalan numbers... and $\sum_{n\ge 1} C_n/2^{2n+1} = \tfrac{1}{2}$, the area of triangle ABC, showing again that the whole of the triangle ABC is eaten away by the iteration. EdC 22:20, 13 December 2006 (UTC)
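The "prove it!" invitation can at least be checked numerically; a Python sketch comparing the chopped-off areas with the Narayana numbers $N(n,k) = \frac{1}{n}\binom{n}{k}\binom{n}{k-1}$:

from math import comb

def tri_area(p, q, r):
    """Unsigned area from the cross-product formula."""
    return abs((q[0]-p[0])*(r[1]-p[1]) - (r[0]-p[0])*(q[1]-p[1])) / 2

def subdivide(pts):                              # same step as the earlier sketches
    mids = [((x0+x1)/2, (y0+y1)/2) for (x0, y0), (x1, y1) in zip(pts, pts[1:])]
    return [pts[0]] + mids + [pts[-1]]

poly = [(1.0, 0.0), (0.0, 0.0), (0.0, 1.0)]
for n in range(1, 5):
    # the triangle chopped off at interior vertex i has corners at the two
    # adjacent midpoints and the vertex itself
    chopped = [tri_area(((poly[i-1][0]+poly[i][0])/2, (poly[i-1][1]+poly[i][1])/2),
                        poly[i],
                        ((poly[i][0]+poly[i+1][0])/2, (poly[i][1]+poly[i+1][1])/2))
               for i in range(1, len(poly) - 1)]
    narayana = [comb(n, k) * comb(n, k-1) // n for k in range(1, n + 1)]
    print(chopped, [m / 2**(2*n + 1) for m in narayana])   # the rows agree
    poly = subdivide(poly)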
Very interesting! I constructed a few iterations of it in Geometer's Sketchpad (great program) and it definitely looks much more like it's going to converge into a line than anything else. Almost all of the math (and terms) you've used is (unsurprisingly) way over my head, but the example of a square EdC gave makes a lot of sense. I'll definitely play some more with constructions that actually produce interesting curves. --206.124.138.153 23:51, 10 December 2006 (UTC)

Starting with an equilateral triangle of height 1, the distance from the top to the bottom of the curve is 1 - 1/2 - 1/8 - 1/32 - ... = 1 - (1/2)(4/3) = 1/3. I just did this calculation to check if it was a line. As pointed out previously, if this were done on an equilateral triangle, all angles in the curve would always be equal. Taking an equilateral triangle ABC and using points D, E, F to create equilateral triangles ABD, BCE, and ACF outside of ABC, we can do this process to these three triangles to see that the curve would approximate a circle. Imagine we stretch the equilateral triangle horizontally and vertically; we can also shear it (like going from a rectangle to a parallelogram); and we can finally rotate it... this set of transformations performed on a circle will give an ellipse. Thus, your curve will be an ellipse. 74.69.109.91 06:34, 11 December 2006 (UTC)

New Year's moment

Am I right?:

New Year's moment should only be celebrated at 12 midnight on leap years. On a year after a leap year, it should be celebrated on January 1 at 6 AM. On a year two years after a leap year, it should be celebrated on January 1 at 12 noon. And finally, on a year three years after a leap year, it should be celebrated on January 1 at 6 PM.

The above rules should be followed because a year is (approximately) 365.25 days, and not 365 days. The rules should be followed so that New Year's moment is celebrated when the Earth is in exactly the same position of its orbit each year.
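The proposed schedule is just the quarter-day remainder accumulating; a tiny Python sketch of it:

# 365.25 - 365 = 0.25 day = 6 hours of drift per calendar year, reset by the
# leap day every fourth year
for years_after_leap in range(4):
    print(f"{years_after_leap} year(s) after a leap year: celebrate at {6 * years_after_leap:02d}:00")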

Am I Right?

Has anybody ever suggested this idea before?

(I call it "New Year's Moment" because it is exactly when one year turns into the other) --172.146.41.157 08:45, 9 December 2006 (UTC)

The sidereal year is 365.25636 days, about 20 minutes longer than the tropical year. This is a more solid criterion. If applied, it means that in some 3000 years we'll be having our New Year's moment in the middle of February.  --LambiamTalk 10:16, 9 December 2006 (UTC)
New Year's is more a cultural thing than an astronomical thing. For the same reason, the new millennium was celebrated on the day the numbers rolled over (Jan. 1, 2000), rather than after the last year of a 2000-year calendar (Jan. 1, 2001). It's more about what strikes people as noteworthy (in other words, as a good excuse for a party) than what seems technically accurate. Black Carrot 21:01, 9 December 2006 (UTC)

Probability of article selection

This is for Portal:Kerala. The portal currently employs an automatic selection process for the "selected article" on its main page. The following code is evaluated and the corresponding article is selected.

{{CURRENTDAY}}*{{CURRENTMONTH}}*{{CURRENTWEEKDAY}}/(31*12*7)*X round 0

where X is the current number of selected articles. The formula yields 0 for nine selected articles, thus selecting Portal:Kerala/Selected articles/Selected article 0 from the pool.

So my question is this: In any given year, would ALL the articles in the pool be displayed on the portal, or would some be left out? Thanks for the answer! --thunderboltz(Deepu) 10:45, 9 December 2006 (UTC)

If X is small compared to the number of days in a year then this algorithm will give a sequence of numbers that is fairly evenly distributed across the range 1 to X (I am assuming you are always rounding up to the next integer), so then yes, every article will be displayed over the course of a year, although articles with low numbers will appear much more often than articles with higher numbers. If X approaches 20, then you will get discrepancies - some articles will be displayed twice in the year, some only once, some may not appear at all. If X is greater than 365, then you cannot display every article in a given year (there are not enough days) so some will be left out. If you want more precise predictions, you can model this algorithm quite simply using a spreadsheet (which is what I should have done before I gave my first response!). If you want every article to appear more or less the same number of times during a year then this is not a satisfactory algorithm, because the sequence that it gives is so skewed towards low numbers. Gandalf61 11:26, 9 December 2006 (UTC)
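Instead of a spreadsheet, the formula is easy to simulate; a Python sketch (I am assuming the weekday variable runs 0 to 6 with Sunday = 0, like MediaWiki's CURRENTDOW, and that "round 0" means rounding to the nearest whole number):

from collections import Counter
from datetime import date, timedelta

X = 9
counts = Counter()
d = date(2006, 1, 1)
while d.year == 2006:
    weekday = d.isoweekday() % 7   # Sunday = 0 ... Saturday = 6 (an assumption)
    counts[round(d.day * d.month * weekday / (31 * 12 * 7) * X)] += 1
    d += timedelta(days=1)
print(sorted(counts.items()))      # selections pile up on the low article numbers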
If you can do modular arithmetic, use
({{CURRENTDAY}}+31*{{CURRENTMONTH}}) mod X,
which will give you a number in the range from 0 to X−1.  --LambiamTalk 15:36, 9 December 2006 (UTC)
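For comparison, the same one-year simulation with Lambiam's formula shows a nearly uniform spread:

from collections import Counter
from datetime import date, timedelta

X = 9
counts = Counter()
d = date(2006, 1, 1)
while d.year == 2006:
    counts[(d.day + 31 * d.month) % X] += 1
    d += timedelta(days=1)
print(sorted(counts.items()))   # each residue 0..8 appears about 365/9 = 40 times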
Gosh, so the formula's of no good use?! Thanks for taking the trouble to verify that, Gandalf. Lambiam, I will try that suggestion of yours and get back to you. Thanks again! --thunderboltz(Deepu) 14:05, 11 December 2006 (UTC)

Verizon maths

A bit OT, but you really, really need to check out this link [1]. It's so funny. To make it on topic: has anyone else ever had problems with companies which really, really don't understand maths? (Actually it may not be a maths problem but more of a "well, the computer says it, so it must be right" problem that is so common with customer service reps etc.) To be fair, IMHO this guy was a bit silly to try and sort it out over the phone. An e-mail, letter or fax clearly laying out the maths would have worked much better. Still, you can't deny it's funny... :-P Nil Einne 16:32, 9 December 2006 (UTC)

I've seen this mistake a lot -- for example, a grocery store selling bananas for .59 cents/lb. Not usually a problem, since no one expects to take 8 pounds of bananas to the register and only owe a nickel. So at first I thought the guy whose blog you pointed to was being disingenuous. What's funny is that, unlike bananas, no one has a clue what cell phone data transfer "should" cost. I can't think of any other product like that. So basically it's a common mistake that rarely causes any problems, but this time it did. Dave6 01:57, 10 December 2006 (UTC)

Something that's even more confusing is mills, which are thousandths of a dollar (or tenths of a cent), in the US. This term is used almost exclusively in property tax discussions, where a vote to raise or lower property taxes is called a millage vote to change the millage rate: [2]. So, when they say they want to raise property taxes by 0.5 mill, people really have to scratch their heads to figure out how much money that is. It means you would pay $0.0005 more for every dollar in assessed property value, or 1 cent more in property tax for every 20 dollars of assessed value of your home, I believe, but it's a challenge for me, too. So, if you own a $200,000 home, that would mean $100 a year in additional taxes. Did I do the math right? StuRat 12:21, 10 December 2006 (UTC)
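A quick check of that arithmetic (1 mill = $1 of tax per $1,000 of assessed value):

assessed_value = 200_000
increase_mills = 0.5
extra_tax = assessed_value * increase_mills / 1000
print(extra_tax)   # 100.0 dollars a year, matching the estimate above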

That was very funny. It was also quite fascinating, because I kept thinking of how I'd try to explain the problem, different ways I'd try to make them see the issue. Amazing that he had to go through so many people, none of whom got it. Maelin (Talk | Contribs) 00:00, 11 December 2006 (UTC)

Probability

I ask this question every few months, and it's a bit different every time, because I learn more about what I'm trying to ask. I think I finally have it in a form other people might understand:

  1. Law of Large Numbers- Given a random event, where no possible result is more or less likely than any other and no instance of the event is influenced by any other, certain things naturally follow as the event is repeated over and over. In this case, the fraction (instances of a possible result/total repetitions) will almost certainly come increasingly close to the fraction (1/number of possible results), converging to it in the limit. For example, given something with two results A and B, as the number of instances increases, most likely nearly 50% of the instances will be A and nearly 50% will be B.
  2. Deterministic System (mathematics)/Deterministic System (philosophy)/Clockwork Universe- It is generally accepted that, barring quantum randomness (which is pretty well irrelevant to this anyway), the universe is fully deterministic. If someone were to, for instance, flip a coin 1000 times, there was never really any question how it would turn out, except in our limited minds, with our tiny tiny store of available information. Even though we didn't know how it would turn out, it was always going to turn out a particular way. Also, given things like gravity and the butterfly effect, it seems sensible to say that nothing exists that is independent of any other event. More practically, in the case of coin flipping, how your mind responds to the results so far almost certainly influences how you toss the coin, and how far you have to stretch to pick it up may have some effect on exactly how your muscles move.
  3. Pseudorandom Number Generator- Not all things that display results indistinguishable from theoretic randomness are, by any stretch of the imagination, random. Any cluster of deterministic interactions that produces results statistically similar to randomness can be considered a pseudorandom number generator, and can range from high-quality (very similar to random) to low-quality (not really very random-looking). Take some examples from the world. The movement of the planets is governed by the laws of physics and is therefore deterministic, but nothing about it really seems random. The movement of air particles is also governed by the laws of physics and is therefore also deterministic, but appears so random that you can find plenty of sources online for air-generated pseudorandom numbers. The fall of a coin is similar - fully deterministic, yet statistically indistinguishable from randomness and in most people's minds, nearly the definition of it.
  4. Where is the pseudorandomness coming from in a flipping coin? Let's start by taking out all the unnecessary outside influences. Put a person in a large metal room with a quarter. According to the theory, the flipping should still look random, but anything outside the room short of an earthquake should have negligible effect. A quarter is heavy enough to cut through the air, so unpredictable airflow can be ignored, and the floor can be flat, level to gravity, and evenly bouncy throughout. Nice and simple. Now, given this, it's pretty clear the pseudorandomness must happen before the quarter leaves your hand. Once it does leave your hand, it's pretty simple high-school physics to say that it will follow a parabolic arc to the floor, that given its spin it will flip over a certain number of times before that, and that once it contacts the floor it will either settle or bounce up in a fairly straightforward way. However, since a small change in the angle or speed of the throw, or the initial spin, or the height you start it at, can switch the result around, it does make it a bit of a dowsing rod for the actual pseudorandomness, which by the process of elimination must be within the confines of your skin. Now, it's well known that with practice (and no life to speak of) people have managed to gain complete control over a coin. They can make it land on whichever side they choose. This involves 1) finding out, given a particular height and spin and so on, what the result will be, and 2) getting enough muscular control to provide those chosen conditions consistently. So, the source of the pseudorandom behavior of the coin is unawareness of the results of particular motions combined with whatever quirks of the system get ironed out through practice.

My question, then, is whether anyone can explain to me what about this deterministic system produces results indistinguishable from randomness. In the case of a formula (Lagged Fibonacci generator), I could just slap the thing on a metal table and dissect it, but that's harder to do with something so complicated, and about which so little is known. And I'm looking for an answer that doesn't use the words "likely" or "probable". I'm trying to look at it in terms of a set of rules and a seed state, and I want to know how, as this algorithm runs, it outputs something with all the requisite statistical properties. Black Carrot 22:43, 9 December 2006 (UTC)
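For what it's worth, here is the sort of dissectable deterministic object mentioned above: a small additive lagged Fibonacci generator sketched in Python (lags 24 and 55 are a classic choice; the seeding scheme is my own crude sketch):

def lagged_fibonacci(seed, j=24, k=55, m=2**32):
    state = [(seed * 2654435761 + i) % m for i in range(k)]  # crude seed expansion
    while True:
        new = (state[-j] + state[-k]) % m    # x_n = x_{n-24} + x_{n-55} mod m
        state.append(new)
        state.pop(0)
        yield new / m                        # uniform-looking floats in [0, 1)

gen = lagged_fibonacci(seed=42)
sample = [next(gen) for _ in range(10000)]
print(sum(sample) / len(sample))   # close to 0.5, as a truly random stream would be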

First of all, how many of the different definitions of entropy (Entropy (classical thermodynamics), Entropy (statistical thermodynamics), Information entropy...) are you familiar with? —Keenan Pepper 04:41, 10 December 2006 (UTC)
In classical mechanics, the motion of a tossed coin is entirely determined by its initial position, momentum, orientation and angular momentum. So you can, in theory, assign a definite and predictable outcome of "heads" or "tails" to every point in a multi-dimensional phase space of initial states. But you may find that there are regions in that phase space where each point has some points arbitrarily close to it that are "heads", and other points arbitrarily close to it that are "tails". In other words, the outcome is very sensitive to initial conditions - an arbitrarily small change in the initial state can change the outcome from "heads" to "tails" or vice versa. If this is true, then the tossed coin is a chaotic system - it is completely deterministic, but at the same time unpredictable in any practical sense. Gandalf61 12:51, 10 December 2006 (UTC)
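A toy version of that phase-space picture, sketched in Python (a simplified Keller-style flip model; the parameter values are mine, not from the thread):

import math

def outcome(v, omega, g=9.81):
    theta = omega * (2 * v / g)                  # total rotation during the flight
    return "H" if math.cos(theta) > 0 else "T"   # heads if within a quarter turn of start

# sweep the launch speed in 1 mm/s steps at a fast spin rate: heads and tails
# interleave, so any realistic uncertainty region contains both outcomes
print("".join(outcome(5 + i / 1000, omega=2000.0) for i in range(60)))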
Keenan Pepper- I read through the articles you linked, but I don't understand how they connect. I don't know much about entropy yet, but give me your interpretation and I'll run with it.
Gandalf61- Well said. And it turns out, to my astonishment, that the motion of the coin between leaving the thumb and settling is indeed chaotic, mainly due to how it bounces upon striking the ground. (I confirmed that by following links from the Coin flipping page, which I just discovered.) However, I've thought about it carefully, and I don't think that's what I'm going for.
I'm going to break the problem up into two parts for a moment. Bear with me. One is between a starting condition and its result (call it Part A) and the other is between consecutive starting conditions (call it Part B). Part A is what Gandalf was talking about. Given a phase space for the coin as it leaves the thumb (or any other phase space), you figure out what result to tack on each point. That's useful, and in a way it does answer part of my question, but it avoids what I'm getting at the same way the Law of Large Numbers did the last time I asked. Both of them are about turning one distribution into another. This doesn't produce likelihood or randomness unless that's what you start out with as input, which is the exact thing that's been bothering me for years. Every definition, description, or explanation of probability I've heard is circular. It just rearranges what's already there. Part B is, I think, what I'm going for. Even given a set of possible starting conditions, and a set of possible results, and how each leads to the other, you still don't have anything resembling probability until you know what starting conditions you can expect. Part A then transforms that starting distribution into the results distribution. Part B, then. Given any original seed state (a person picking up a coin), this whole coin-flippy thing should still work. In Part A, the point is that given a bunch of similar starting points, collectively they give all kinds of results. In Part B, the question is, given any particular seed state, will the one and only string of results that follows it display the characteristics of a random series, or not? At least, I think that's the question. I'm figuring this out as I go along, so let me know if that isn't the right question. Basically, what I think it needs to eventually reduce to is something where it doesn't matter what you choose, you'll still get what you want. Black Carrot 01:31, 11 December 2006 (UTC)
Physiology tells us that the human body is very, very complicated, or are we taking it out of this equation? Sethwoodworth 01:49, 11 December 2006 (UTC)
Well, yes and no. I think the question is more about the shape of the thing than the details - with the flight and bounce of a coin, for instance, it's more useful to know that the result is chaotic than to know the exact formulae for it. Any details that would help are welcome, though. And it's possible that there's a better example than coin-flipping. Roulette, perhaps. Black Carrot 02:08, 11 December 2006 (UTC)
Let me try to summarise. If we stick to classical mechanics, coin tossing is a deterministic process. If we knew the initial conditions exactly then we would also know the outcome - every point in the phase space has a single, well defined outcome. But in practice we can't know the initial conditions exactly, we only know them to a certain degree of precision. So we only know that we are somewhere in a certain small region of phase space. And in most of the coin tossing phase space (if we avoid special cases like dropping the coin straight downwards flat from a small height) the "heads" and "tails" outcomes are mixed together so thoroughly that, no matter how small we make this region, it will contain both "heads" and "tails" outcomes - and in approximately equal measures. So no matter how precisely we try to specify the initial conditions, we cannot make any better prediction of the outcome than 50:50. Gandalf61 10:07, 11 December 2006 (UTC)
True. Very true. Well summarized. That's not what I'm getting at, though. Black Carrot 13:27, 11 December 2006 (UTC)
I don't mean to be rude, but what are you getting at? It's not obvious, to me at least, what you're actually asking here. I thought I knew what your query was, but then you said that it wasn't. Can you explain in 100 words or less exactly what you're trying to understand? Maelin (Talk | Contribs) 04:19, 13 December 2006 (UTC)

I think part of your question might be where the initial randomness comes from in any system that exhibits random behavior. As we know, a slight change in a system can cause a massive change to result. That slight change, in turn, can result from an even smaller change, and this can be traced all the way back, I imagine, to quantum uncertainty. A similar question is how the uneven distribution of mass in the universe resulted from the presumably perfectly evenly distributed Big Bang. Again, we know that any tiny ripple would ultimately lead to the formation of galaxies, but it's still unclear what would cause that ripple. This ultimately all leads to the philosophical question "how can something arise from nothing?". I have no answer, either in a universe without God, or in a universe with one (in which case, where did He come from?). StuRat 14:00, 11 December 2006 (UTC)

Yeah, Original Cause type things can be hard to work with, since either A) It was caused by something, or B) It wasn't, which either A) Doesn't answer the question or B) Breaks the cause-and-effect chain we rely on. Tricky. I've considered that I might be setting myself up for an endless chain of "but what made that happen," but I don't think I am. What I'm getting at is hard to put a finger on (well, more accurately, it's hard to put someone else's finger on it), but generally it's a question of how reliably random-esque behavior arises from deterministic, iterative, realistic systems. I'm afraid I can't state it any more clearly than that. Black Carrot 21:18, 12 December 2006 (UTC)
You already said that you don't understand how it outputs something with all the requisite statistical properties. Do you already understand those statistical properties, and all the statistical tests which are used to measure randomness? But they are far from perfect, and cannot distinguish between real randomness and pseudo-randomness. -- 131.111.48.168 15:10, 13 December 2006 (UTC)
There is one elephant standing in the room which I will get back to.
A deterministic, iterative, comprehensible system (and therefore the opposite of realistic, and necessarily so, since you claim you want to understand it) is a cellular automaton. Stephen Wolfram wrote about these at some length in A New Kind of Science and demonstrates examples of two-state, one-dimensional cellular automata which exhibit highly complicated, effectively random, behaviour. Dr. Wolfram goes on to propose a notion of computational irreducibility to describe a limit on predictability. The extremely over-simplified idea is this: complex behavior can be produced by systems that have simple underlying structures.
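A minimal Python sketch of such an automaton, Wolfram's Rule 30, whose centre column is famously random-looking despite the trivial update rule:

def rule30_center_bits(n, width=2 * 64 + 1):
    cells = [0] * width
    cells[width // 2] = 1                       # single black cell as the seed
    bits = []
    for _ in range(n):
        bits.append(cells[width // 2])
        cells = [l ^ (c | r)                    # Rule 30: new = left XOR (centre OR right)
                 for l, c, r in zip([0] + cells[:-1], cells, cells[1:] + [0])]
    return bits

print("".join(map(str, rule30_center_bits(64))))   # a statistically random-looking bitstream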
Chaotic behaviour in continuous systems is exemplified by various familiar fractals: the Henon map (two equations, two independent variables, one dependent variable, (weak) quadratic nonlinearity), the Lorenz attractor (a very simplified weather model) (three equations, three independent variables, one dependent variable, quadratic nonlinearities). Lots of discrete and continuous examples can be found at List of chaotic maps. Discrete systems are usually iterations of a simple rule, like the Mandelbrot set. See also Chaos theory#Minimum complexity of a chaotic system.
It is "unfortunate" that chaotic behaviour can be produced by simple systems. "The Poincaré-Bendixson theorem shows that a strange attractor can only arise in a continuous dynamical system if it has three or more dimensions. However, no such restriction applies to discrete systems, which can exhibit strange attractors in two or even one dimensional systems." -- Chaos theory#Strange attractors
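A one-dimensional discrete example of that sensitivity is the logistic map at $r=4$, a standard example, sketched here in Python:

# two seeds differing by 1e-12 decorrelate within a few dozen iterations of
# x -> 4x(1-x): sensitive dependence on initial conditions
x, y = 0.123456789, 0.123456789 + 1e-12
for i in range(60):
    x, y = 4 * x * (1 - x), 4 * y * (1 - y)
    if i % 10 == 9:
        print(i + 1, round(x, 6), round(y, 6), abs(x - y))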
Finally, to the elephant in the room. There is not a sustainable definition of random. Randomness and Random sequence avoid the ontological difficulties of "random sequence". An example of the difficulty is the question: "Is HHHHHHHHHHHH (12 heads) a random sequence of flips of a coin?" The answer is yes and no. A fair coin is likely to produce this sequence about once in every 4096 experiments of flipping the coin twelve times. However, seeing *only* this information we may assert, with a strong probability of being correct, that the data indicate that the hypothesis "the coin is fair" is wrong. I.e., it is far more likely, given only this sequence of outputs, that the coin was unfair than that the coin was fair. (Inference about the experimental inputs based on the likelihood of generating the observed output is called parameter estimation and I usually apply Bayesian methods for this kind of problem.)
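The parameter-estimation point in miniature, sketched in Python (the biased alternative and the priors are my arbitrary choices):

fair_likelihood = 0.5 ** 12             # 1/4096
biased_p = 0.9                          # a hypothetical heads-heavy coin
biased_likelihood = biased_p ** 12
prior_fair, prior_biased = 0.99, 0.01   # even a strong prior for fairness...
posterior_fair = (prior_fair * fair_likelihood) / (
    prior_fair * fair_likelihood + prior_biased * biased_likelihood)
print(posterior_fair)                   # ...drops to about 0.08 after the data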
The point is that in order to make any progress on "the random question", we've had to be a lot more specific about what randomness is. Kolmogorov complexity, computational irreducibility, et seq. are formulations of the idea of "random sequence" that provide somewhere to stand (philosophically) and on which an edifice of knowledge may be built. If you really want to understand "random sequence", then you're going to have to be very specific about what you mean when you say "random sequence". Knuth, in his "The Art of Computer Programming, vol2: Seminumerical Algorithms, 3rd ed." (pp. 149 et seq. in my copy) goes on at some length about reasonable definitions of random sequence and how it may be shown that no realizable sequence can have those properties. So, it's useful to realize that imprecision at the front (definition of "random") may ensure that no progress is subsequently made.
Fuzzyeric 04:59, 14 December 2006 (UTC)[reply]